By David K. Dirlam
Dr. Matthew Fuller, Associate Professor of Higher Education Leadership at Sam Houston State University, started this three-part Emerging Dialogue series on the meaning and usage of the term assessment. His foremost concern is to enable us to focus on student learning without the distraction of standardized, aggregated, and summative accountability. One solution that Fuller proposed was to replace the term assessment with either evidence or inquiry. In Part II of this series, Erin Crisp, the Director of Academic Assessment and Evaluation for the College of Adult and Professional Studies at Indiana Wesleyan University, added the important distinction that assessment and evaluation are different processes for different purposes.
The purpose of the Association for Assessment of Learning in Higher Education (AALHE), as an organization, is to foster effective assessment practice to document and improve student learning. The organization identifies assessment as “a tool to help understand learning and develop processes for improving it.” In my 300 interviews with experts (teachers or mentors) in roughly 100 different fields, three general methodologies for improvement have emerged: science, design, and interpretation.
First, science progresses through replicable observations of previously underobserved phenomena, observations that often depend on a well-trained community using verifiable methods for data collection, analysis, and application. Scientists record their progress in reports containing descriptions of the context, methods, analysis, and interpretation of the observations.
Second, design progresses through the refinement of products and services that ultimately enrich the lives of individuals or communities. Designers record their progress in the public usage of their refinements.
Third, literature progresses through the reinterpretation of texts as they are applied to new contexts. Literary writers record their progress in stories and interpretive ideas that enter into the beliefs, dialogue, and identity of communities.
So, do these Emerging Dialogue contributions on the meaning of assessment add to the science, design, and interpretation of the term? The most obvious answer involves interpretation. Both authors would have our community identified as people who care about improving the lives of learners and teachers. This identity overlaps, as Crisp noted, but also contrasts with those whose primary purpose is to determine whether programs deserve continued funding. I do not have the empirical observations to test the claim, but my impression is that a vast majority of AALHE members would agree with both of them.
There are also design implications for both contributions. Fuller notes the anger of many faculty over the accreditation connotations of assessment. I believe that this has much to do with faculty intuitions that the “closing the loop” approach espoused by many accreditors too often results in sloppy design and even sloppier science. Teachers who care enough about assessment to participate in Dr. Fuller’s Faculty Survey of Assessment Culture are likely to have much in common with teachers who care about the learning of individual students. Such teachers observe learning every day. Many care little about factoids or short answers that produce good performance on standardized tests.
Such “academic autopsies” (assessments administered at the end of programs) can have some benefit. For example, I once introduced a course in physiological psychology to a program because prior seniors had performed so poorly on a standardized test that reported counts of items in the content area. We did not have the laboratory equipment to really make it a course in scientific learning, but most of the students had enough prior experience with designing and carrying out studies that contributed to psychology to be able to imagine some of the process. Those experiences were part of a planned succession of opportunities that included real-time feedback and culminated in original work presented at conferences. My favorite result of the physiological course was one student becoming so enamored with the possibilities that she pursued the field and now has nearly 100 publications in it. My course design was too full of factoids to satisfy either my practice-oriented philosophy or a teacher like Rob Dillon (the former tenured College of Charleston biology faculty member fired because he would not create learning outcomes for his course that the administration recognized). But fortunately that one student had enough imagination and experience to create a design for herself that surpassed what I had offered. Her commitment was supported by assessment oriented toward the “more learning- and student-centered philosophy” that practitioners such as Fuller propose.
From my context, distinguishing evaluation from assessment is useful, but the term assessment remains valuable because it admits all three methodologies of science, design, and interpretation better than inquiry or evidence does. Consider changing the name of the organization AALHE. I believe our membership and support from institutions would dwindle if we called ourselves either the “Association for Inquiry into Learning in Higher Education” or the “Association for Evidence of Learning in Higher Education.” Nevertheless, the arguments of Fuller and Crisp are very cogent; the improvement of learning needs better inquiry and evidence, and the improvement of programs needs more comprehensive evaluation than is currently being offered to administrators and accreditors.
Dr. Dirlam is the first paying member of AALHE, has been on the AALHE Member Services Committee since its inception, and currently serves on the Board of Directors. He is the author of the new book Teachers, Learners, Modes of Practice: Theory and Methodology for Identifying Knowledge Development.