Emerging Dialogues in Assessment
Meaning and usage of “assessment” (Part II): Are you assessing or evaluating?

Posted By Jamie Wigand, Tuesday, December 6, 2016

By Erin A. Crisp

In many circles, assessment and evaluation are used interchangeably. Yet in our field, I wonder whether our practice would benefit from differentiating the two more carefully.

Why does it matter? Why not use them interchangeably in the same way that we often interchange goals, outcomes, and objectives (although I’ve read some good nuanced definitions of these three terms as well)? I’m glad you asked. Besides the fact that the word assessment is in the very name of our organization, I think the ubiquitous use of the term has actually hindered our progress toward meaningful use of learning outcome assessment data.

Recently, Catherine Wehlburg wrote on this blog, “But we in higher education must be always focused on student learning rather than only on the metrics that are required for accountability. We must develop better ways to measure learning and be able to communicate that learning to those outside of our institutions.” I agree completely, and I’m wondering if a good first step wouldn’t be to clarify our uses of assessment (the “student learning” part of Catherine’s statement) and evaluation (the “metrics required for accountability” part) so that we can build and share resources that reflect the unique purposes of each.


There are entire fields of practice devoted to both of these areas, so this blog post is obviously not intended to be comprehensive, but if I had an assessment wish, it might be that we stop referring to program evaluation as assessment. The National Council on Measurement in Education (NCME) defines assessment as “a tool or method of obtaining information from tests or other sources about the achievement or abilities of individuals. Often used interchangeably with test.” Most other definitions I’ve found (from our context) point to assessment as a process rather than a tool. Why call the process assessment when there is already a field of research devoted to it, and that field is called evaluation?

Thanks to Monica Stitt-Bergh, I was reminded of just how much we (in assessment) could benefit from understanding more about the field of evaluation. The Centers for Disease Control and Prevention defines its program evaluation process as “a systematic way to improve and account for public health actions [involving] procedures that are useful, feasible, ethical, and accurate” (CDC website). The evaluation handbook of the W. K. Kellogg Foundation outlines a philosophy of evaluation stating that its purpose is not to serve as an accountability measuring stick “but rather as a management and learning tool” (2004, p. 3). And although evaluation is sometimes viewed as a process applied only at the end of a project, the Kellogg Foundation outlines an evaluation process that very much resembles the assessment process followed at many institutions (Kellogg, 2004).

As an example of the confusion these multiple definitions can cause: I was recently writing an institutional Assessment Strategy Handbook. For the purpose of the handbook, I did not wish to outline our program review (evaluation) process; I wanted to focus on assessment of learning. The audience for the handbook includes faculty who are new to the assessment committee and faculty serving as assessment fellows to their schools. I thought it would be helpful for them to know our own institutional perspectives on constructing and aligning meaningful learning outcomes, aligning learning activities with those outcomes, designing demonstrations of learning, structuring group work for valid assessment, and providing learners with valuable feedback. These were the guiding topics for my Assessment Strategy Handbook research.

As I searched for resources, I kept finding guides and handbooks that outlined the assessment processes of various institutions. These processes included steps like 1) establish the learning goal, 2) collect student artifacts, and 3) compare those artifacts with the pre-established goal. I kept thinking, “This is your evaluation process. Where are your assessment resources?” I realized that we have an audience problem. The primary audience for that three-step process is the people responsible for accountability measures. The primary audience for my Assessment Strategy Handbook is the people doing the work of assessing student learning and aligning assessment to outcomes: primarily faculty.

What if we were to use these two words intentionally, staying mindful of our intended audience? For example, assessment professionals:

  • Align outcomes to activities to demonstrations of learning.
  • Calibrate rubric scoring to improve inter-rater reliability (see the sketch after this list).
  • Learn how to develop a validity argument.
  • Analyze predictive analytics from adaptive learning data and create systems that flag struggling students, triggering an intervention before students even know they are struggling.
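
The calibration bullet above lends itself to a concrete illustration. Below is a minimal sketch in Python (mine, not from this post; the rater names and scores are hypothetical) of Cohen’s kappa, a common chance-corrected statistic for inter-rater agreement. In a calibration session, two faculty score the same artifacts independently; a low kappa signals they should norm their interpretations of the rubric before live scoring begins.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' rubric scores."""
    n = len(rater_a)
    # Observed agreement: fraction of artifacts the raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal score distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[s] * counts_b[s] for s in counts_a) / (n * n)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical calibration session: two faculty raters independently score
# the same ten student artifacts on a 1-4 rubric scale.
faculty_a = [3, 4, 2, 3, 3, 1, 4, 2, 3, 4]
faculty_b = [3, 4, 2, 2, 3, 1, 4, 3, 3, 4]
print(f"kappa = {cohens_kappa(faculty_a, faculty_b):.2f}")  # kappa = 0.71
```

By convention, kappa near 1 indicates strong agreement and kappa near 0 indicates agreement no better than chance; what counts as “good enough” varies by context, so treat any cutoff as a local decision rather than a fixed rule.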

When we wear our assessment hats, we are thinking about the measurement of student learning so that we can improve learning experiences or collect better data around what and how students have learned.


Evaluation, on the other hand, refers to the process we follow to pull multiple sources of data together to paint a broader picture of student success for a variety of stakeholders. The purpose of evaluation is to “improve the way [something] works” (Kellogg, 2004, p. 3).

  • Evaluation professionals investigate learning and industry needs before designing/redesigning a program.
  • Evaluation professionals compare intended program or course outcomes to aggregate data.
  • Evaluation professionals analyze retention statistics, employment metrics, diversity demographics, library and writing center usage, tutoring metrics, and end-of-course survey data to inform recommendations and provide evidence of success.

When we wear our evaluation hats, we are thinking about many students’ learning experiences in conjunction with many instructors’ teaching experiences, in connection with the needs of employers and of the university. When I wear my evaluation hat, I’m thinking about resource allocation, communication, change management, and systems development.

I’ve been thinking about this for a few months now, and differentiating these two aspects of my job has improved my communication with faculty and my own strategic planning. What do you think? Do you differentiate between assessment and evaluation? How so?


Erin Crisp is the Director of Academic Assessment and Evaluation for the College of Adult and Professional Studies at Indiana Wesleyan University. After teaching middle and then high school English language arts for eight years, she turned her attention to instructional design for adult learning and especially the use of assessment data. She consulted full-time for Northwest Evaluation Association for several years, leading assessment workshops for K-12 educators around the country, and, after a brief stint as an instructional designer in Maryland, returned to her alma mater in Marion, IN, where she now lives with her husband and three teenage sons.

Erin is currently pursuing an Ed.D. from Indiana University, holds a master’s in instructional design from Towson University and a bachelor’s in English education from Indiana Wesleyan. Her research interests include adaptive learning, personalized learning, and instructional design for non-cognitive skill development.

Tags: definition
