This glossary has been produced through the combined efforts of the members of the University Assessment Council (UAC).
The UAC recognizes that one of the limitations of a university-wide glossary is that the use and application of the terms below will vary, particularly among specialized accrediting agencies; this glossary is not intended to replace the terminology used by those agencies or the programs they accredit. The glossary is intended to promote and facilitate understanding of assessment practices and policies within systematic processes like program review and at university functions and committees whose membership is likely to be multidisciplinary and drawn from both academic and co-curricular units. As its primary purpose is to aid in discourse, the glossary is viewed as a dynamic document, open to further additions and revisions from the university assessment community.
-
Accountability
In program review, the use of results of assessment and data on program activity, viability, and adequacy for program continuance/discontinuance; the public reporting of student, program, or institutional data to justify decisions or policies; the use of results to determine funding.
-
Action Plan
A formal explanation of the administrative, curricular, functional, operational, instructional, or pedagogical actions being taken to address strengths, weaknesses, opportunities, and threats revealed by the assessment data gathered, either that year or longitudinally.
-
Actionable Results
Results from assessment and/or other related data streams that converge in a meaningful way and lead to a clear, appropriate, feasible, and manageable response.
-
Analytic Scoring
Evaluating student work across multiple dimensions of performance (e.g., temporal or task-based dimensions) rather than from an overall impression (holistic scoring). In analytic scoring, a separate score for each dimension is assigned and reported.
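A minimal sketch of the distinction, with invented dimension names and an assumed 1-4 scale:

    # Purely illustrative: dimension names and the 1-4 scale are invented.
    analytic_scores = {
        "organization": 3,  # each dimension is scored separately...
        "evidence": 2,
        "mechanics": 4,
    }
    for dimension, score in analytic_scores.items():
        print(f"{dimension}: {score}/4")  # ...and each score is reported

    # Holistic scoring, by contrast, reduces the work to one overall score.
    holistic_score = 3
    print(f"holistic: {holistic_score}/4")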
-
Assessment Cycle
A two-part description of the process of assessment. First, a yearly process of assessment at a particular level (degree, program, university) with a calendar of assigned dates for completion, review, and other tasks. It typically includes the following stages:
Planning: the assessment plan, curriculum, and other academic functions are reviewed and revised in light of previous assessment data.
Assessment: data are collected as evidence of student performance in relation to goals such as student learning outcomes.
Analysis: evidence is analyzed, disseminated, and discussed among the faculty and other key stakeholders, and the assessment report is generated.
Action: often called ‘closing the loop’; results of the assessment report are disseminated, with particular focus on the resulting action plans and the implementation of those plans.
Second, a long-term planning cycle that shows how a particular level (degree, program, university) seeks to accomplish the assessment of all its designated units and their outcomes.
-
Assessment Plan
A primarily static document that defines a program’s mission with respect to assessment, its program learning goals, its program learning outcomes, a curriculum map that identifies when formative and summative assessments take place and which measures are used, and where in the curriculum students are introduced to, practice, and achieve the learning outcomes. It includes a description of the assessment cycle and can also include an explanation of the process of data collection, archival, and analysis.
-
Assessment Report
A dynamic yearly document that records faculty discussion of assessment results, delineates what the resulting action plans are (if any), and provides supporting data as attachments or appendices.
-
Assurance of Learning
An outcomes-based approach to assessment driven by the AACSB accrediting standards of accountability and continuous improvement. Assurance of learning is supported primarily by direct assessment; programs and colleges are expected to use its results to improve curricula when deficiencies or opportunities for improvement are found.
-
Authentic Assessment/Embedded Assessment
An assessment that measures a student's performance on tasks and situations that occur in real life. This type of assessment is closely aligned with, and models, what students do in the classroom.
-
Benchmark
A detailed description of a specific level of student performance expected of students at particular stages. Benchmarks are often represented by samples of student work. A set of benchmarks can be used as "checkpoints" to monitor progress toward meeting performance goals within and across student levels.
-
Capstone Course
A summative course, project, or experience that provides an opportunity for the demonstration of mastery of the learning outcomes of an entire sequence of study in a given program.
-
Closing the Loop
Assessment terminology for communicating the results of outcomes assessment, assessment analysis, and resulting actions back to the key stakeholders in the assessment process, typically the faculty who performed the assessment; also a stand-in for the process of generating and carrying out assessment-related action plans.
-
Community-based Assessment
Assessment of vocational skills carried out within job placements directly relevant to the student’s program and conducted by practicing professionals. Assessment typically focuses on evaluating the student’s professionalism, work habits, skills/competencies, and aptitudes.
-
Competency
The range of possible, specific skills and behaviors that a student must be able to perform or demonstrate mastery of to satisfy a particular learning outcome or to graduate from a particular program. For any particular learning outcome, there may be a number of demonstrable competencies that are associated with it.
-
Criterion-referenced Assessment
An assessment where an individual's performance is compared to a specific learning objective or performance standard and not to the performance of other students. Criterion-referenced assessment tells us how well students are performing on specific goals or standards rather than just telling how their performance compares to a norm group of students nationally or locally. In criterion-referenced assessments, it is possible that none, or all, of the examinees will reach a particular goal or performance standard.
-
Curriculum Mapping
The process of evaluating and graphically representing the curriculum and program learning outcomes to ensure that students are receiving appropriate learning opportunities to be introduced to, practice, and demonstrate mastery of the learning outcomes. Mapping also allows programs to identify what assessments are taking place and in which courses. Curriculum maps identify the connections between course, learning level, assessment level (formative or summative), and assessment measure and can be used alongside assessment cycles to determine the frequency and location of assessment.
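A hypothetical fragment of such a map expressed as a simple data structure (course numbers, outcome IDs, and measures are invented):

    # Each row links a course to a program learning outcome (PLO), the
    # learning level, the assessment level, and the measure used.
    curriculum_map = [
        # (course,    outcome, learning level, assessment level, measure)
        ("BIOL 101", "PLO 1", "introduced", "formative", "quiz"),
        ("BIOL 210", "PLO 1", "practiced",  "formative", "lab report"),
        ("BIOL 480", "PLO 1", "mastered",   "summative", "capstone project"),
    ]

    # Locate where PLO 1 is assessed summatively, e.g. for an assessment cycle.
    summative_rows = [row for row in curriculum_map if row[3] == "summative"]
    print(summative_rows)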
-
Direct Measurement
Measures that require the student to demonstrate his/her knowledge and skills in response to the instrument. Examples of direct measurement include 1) achievement tests such as objective tests; 2) student academic work such as essays, presentations, portfolios, and course assignments; 3) observations or case studies.
-
Evaluation
When used for most educational settings, evaluation means to measure, compare, and judge the quality of student work, schools, or a specific educational program; assessment is one form of evaluation.
-
Experiential Learning
An approach to education that emphasizes learning via experience (learning by doing) coupled with timely reflection on the process and the results of that experience. Experiential learning is a cyclical process where the experience leads to reflection which leads to alteration or improvement to the process which governs the experience itself, stressing the continuous improvement and lifelong learning of the student.
-
Formative Assessment
The gathering of information about student learning during the early progression of a course or program to improve the learning of those students. Formative assessments are also used to determine the amount of change (the delta) in learning that has occurred during a course or program. Example: reading the first lab reports of a class to assess whether some or all students in the group need a lesson on how to make them succinct and informative.
-
Indirect Measurement
Measures that ask students, past or present, faculty, employers, or other stakeholders to reflect on student learning rather than demonstrate it directly. Examples of indirect measurement include self-report methods such as surveys, interviews, and focus groups.
-
Inter-professional Education
An approach to education in which students from two or more professions learn about, from, and with one another to promote team building, communication, and collaboration as well as improve health outcomes (both for the students in terms of learning and, ultimately, the communities they will serve). Particularly focused on improving the student’s ability to function as an effective practitioner and member of a professionally diverse team.
-
Learning Goals
Broad, general program and institutional level statements that inform students about the academic purpose or mission of a program or institution as well as the expectations of its faculty.
-
Learning Objectives
Sometimes used interchangeably with outcomes. Like outcomes, objectives are measurable, quantifiable operational statements that describe specific student behaviors which are evidence of the acquisition of knowledge, skills, abilities, capacities, attitudes, or dispositions. Objectives are typically achieved over a shorter temporal span than outcomes and are thus used most often to describe learning occurring at the course level, whereas outcomes describe learning that occurs at the program level.
-
Learning Outcomes
Statements describing specific student behaviors that evidence the acquisition of desired knowledge, skills, abilities, capacities, attitudes or dispositions; learning outcomes are measurable and quantifiable. Learning outcomes can be usefully thought of as behavioral criteria for determining whether students are achieving the educational objectives of a degree, and, ultimately, whether overall program goals are being successfully met.
-
Level of Learning
The ability to distinguish between tasks and expectations of learning that require different levels of cognitive complexity and to match assessment measures to those levels of learning. This is a relevant practice for individual faculty in course design as well as for programs in determining their assessment plans and curriculum maps. Bloom’s Taxonomy has been adopted by West Virginia University as its model for determining cognitive complexity and for use in crafting learning outcomes.
-
Measure
Any particular task occurring within the context of a course or program or in standardized settings that allows for the quantifiable measurement of student performance towards a learning outcome. Common measures include tests, essays, projects, portfolios, etc.
-
Measurement
The process of quantifying any human attribute pertinent to education without necessarily making judgments or interpretations.
-
Metacognition
An individual's ability to think about his/her own thinking and to monitor his/her own learning. Metacognition is integral to a learner's ability to actively partner in his or her own learning and facilitates transfer of learning to other contexts.
-
Metric
A scoring mechanism (like a rubric or Likert scale) that applies a quantitative scale to student performance toward a particular learning outcome.
-
Norm-referenced Assessment/Standardized Assessment
An assessment where student performance or performances are compared to a larger group. Usually the larger group or "norm group" is an institutional, regional, peer, or national sample representing a wide and diverse cross-section of students. The purpose of a norm-referenced assessment is usually to sort students and not to measure achievement towards some criterion of performance.
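To make the contrast with the criterion-referenced entry above concrete, a minimal sketch (the cut score and norm-group scores are invented):

    score = 82

    # Criterion-referenced: compare the score to a fixed standard.
    cut_score = 75
    print(score >= cut_score)  # True: the student meets the criterion

    # Norm-referenced: compare the score to a norm group's scores.
    norm_group = [60, 68, 75, 79, 82, 85, 88, 91, 94, 97]
    percentile = 100 * sum(s < score for s in norm_group) / len(norm_group)
    print(f"{percentile:.0f}th percentile")  # rank within the group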
-
Operational Goals and Outcomes
In contrast to learning outcomes and goals, which are solely centered on measurable, demonstrable student learning, operational outcomes are internal measures of a department’s, program’s, or unit’s operational success and viability; these are entirely separate from the development and assessment of learning outcomes. Operational outcomes are often traditional student achievement measures like retention, persistence, and completion, and include enrollment, transfer (in and out), grade performance, job placement, benchmarking, resource evaluation, budget performance, etc.
-
Peer-assessment
Evaluation of learning by one's peers.
-
Performance-based Assessment
An assessment technique involving the gathering of data through systematic observation of a student behavior or process and the evaluation of those data against a clearly articulated set of criteria (a rubric) to serve as the basis for evaluative judgments.
-
Portfolio Assessment
A portfolio is a collection of work, usually drawn from students' classroom work. A portfolio becomes a portfolio assessment when (1) the assessment purpose is defined; (2) criteria are made clear for determining what is contained in the portfolio, by whom, and when; and (3) criteria for assessing either the collection or individual pieces of work are identified and used to make judgments about learning. Portfolios can be designed to assess student progress, effort, and/or achievement, and encourage students to reflect on their learning.
-
Program Goals
A term that has been discontinued for use at WVU because of its ambiguity. It has been replaced by “Learning Goals” which represent broad program-level learning-centered goals and “Operational Goals and Outcomes” which are those program-level measures of viability and operational success that are otherwise unrelated to student learning.
-
Reliability
The degree to which the results of an assessment are dependable and consistently measure particular student knowledge and/or skills. Reliability is an indication of the consistency of scores across raters, over time, or across different tasks or items that measure the same thing. Thus, reliability may be expressed as (a) the relationship between test items intended to measure the same skill or knowledge (item reliability), (b) the relationship between two administrations of the same test to the same student or students (test/retest reliability), or (c) the degree of agreement between two or more raters (rater reliability). An unreliable assessment cannot be valid.
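A minimal sketch of two of these checks (all scores are invented example data; statistics.correlation requires Python 3.10+):

    from statistics import correlation  # Python 3.10+

    # Rater reliability: agreement between two raters scoring the same work.
    rater_a = [3, 2, 4, 4, 1, 3]
    rater_b = [3, 2, 4, 3, 1, 3]
    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    print(f"rater agreement: {agreement:.0%}")  # 83%

    # Test/retest reliability: correlation between two administrations of
    # the same test to the same students.
    first_administration = [70, 82, 65, 90, 75]
    second_administration = [72, 80, 68, 91, 73]
    print(f"test/retest r: {correlation(first_administration, second_administration):.2f}")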
-
Rubric
Specific sets of criteria that clearly define, for both student and teacher, what a range of acceptable and unacceptable performance looks like. The criteria provide descriptors of ability at each level of performance and assign values to each level. These levels are proficiency levels describing a continuum from excellent to unacceptable work.
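A hypothetical two-criterion rubric expressed as a data structure (criteria, descriptors, and point values are invented):

    # Each criterion maps proficiency levels (point values) to descriptors.
    rubric = {
        "thesis": {
            4: "clear, arguable, and sustained throughout",
            3: "clear and arguable",
            2: "present but vague",
            1: "missing or unintelligible",
        },
        "evidence": {
            4: "well-chosen sources integrated throughout",
            3: "relevant sources, uneven integration",
            2: "sparse or weakly connected sources",
            1: "no credible support",
        },
    }

    # Scoring against the rubric yields one value per criterion.
    scores = {"thesis": 3, "evidence": 4}
    print(sum(scores.values()))  # 7 of a possible 8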
-
Self-assessment
The process of evaluating one's own learning. The process often includes the ability to judge one's own achievements and performances, understanding how the product or performance was achieved, understanding why one followed the process he or she did, and understanding what might be done to improve the process, product, or performance.
-
Standards
The level of accomplishment all students are expected to meet or exceed. Standards do not necessarily imply high quality learning; sometimes the level is a lowest common denominator. Nor do they imply complete standardization in a program; a common minimum level could be achieved by multiple pathways and demonstrated in various ways.
-
Summative Assessment
The gathering of information at the conclusion of a course or program to improve learning or to meet accountability demands. When used for improvement, summative assessment impacts the next cohort of students taking the course or program. Examples: examining student final exams in a course to see if certain specific areas of the curriculum were understood less well than others; analyzing senior projects for the ability to integrate across disciplines.
-
Triangulation
Using a combination of assessment measures, from authentic measures (formative or summative, direct or indirect, qualitative or quantitative) to external measures, standardized instruments, or other surveys, to best measure an outcome.
-
Validity
The extent to which an assessment measures what it is supposed to measure and the extent to which inferences and actions made on the basis of test scores are appropriate and accurate. For example, if a student performs well on a reading test, how confident are we that that student is a good reader? A valid standards-based assessment is aligned with the standards intended to be measured, provides an accurate and reliable estimate of students' performance relative to the standard, and is without easily identifiable or correctable bias. An assessment cannot be valid if it is not reliable.
-
Value Added
The net effect in learning and performance ability that a course or program has on individual students or cohorts of students; the delta as reflected in data from formative to summative assessments.
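A minimal sketch of computing that delta (all scores are invented example data):

    from statistics import mean

    # Scores on a formative (early) and summative (final) assessment.
    formative = {"student_1": 62, "student_2": 75, "student_3": 58}
    summative = {"student_1": 78, "student_2": 84, "student_3": 71}

    # Value added per student, and averaged for the cohort.
    deltas = {s: summative[s] - formative[s] for s in formative}
    print(deltas)
    print(round(mean(deltas.values()), 1))  # cohort-level delta: 12.7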