EMERGING DIALOGUES IN ASSESSMENT
Assessment Works! But It Needs More Work…
January 12, 2016
Jean Downs, Director of Assessment, Del Mar College, Corpus Christi, TX
In a commentary posted in The Chronicle a few weeks ago, Dr. Erik Gilbert asked, “Does Assessment Make Colleges Better? Who Knows?” The discussion that followed – on our campuses, listservs, and social media – initiated spirited debates and dialogues about decades-old arguments regarding the value of assessment and learning outcomes.
The irony of Gilbert’s rhetorical question – which has not been lost on outcomes assessment and institutional effectiveness specialists – is that the very agencies that mandate processes for learning outcomes assessment have no hard data on how compliance improves the colleges and universities that use these processes. Dr. Matt Fuller remarked, “As a profession aimed at investigating and documenting the impact of others’ impact (usually faculty, but also student affairs and administrative units), it is all the more challenging – if not hypocritical – that assessment’s impact on the broader context and outcomes of learning remains latent and quantitatively unexamined.”
Gilbert’s question is actually a reverberation of a dialogue between Dr. George Kuh and The Chronicle in 2009, when the National Institute of Learning Outcomes Assessment (NILOA) was approaching its first birthday.
Q. “You’ve talked about how even solid assessment data may or may not make sense to an applicant. How can colleges make information about learning outcomes more accessible and more relevant to prospective students and parents?”
A. Kuh: “…the challenge is that most parents of prospective students are not asking institutions for this information. There are people who are demanding that information on behalf of the public, but John Q. Public, he just doesn’t know what to look for.”
Kuh’s answer may be true of John Q. Public, but it is not true of Gilbert, who has done course-level assessment for fifteen years and served on his institution’s Learning Outcomes Assessment Committee. One of Gilbert’s points is that he knows what to look for when helping his son with a college search – yet even an experienced consumer like Gilbert notes that he has never personally seen any evidence that a college was “just nailing their student learning outcomes!”
Gilbert’s disinterest in asking prospective colleges about learning outcomes follows from the well-documented and enduring belief that learning outcomes assessment is done solely to meet accreditation requirements. Even ‘believers’ who have, as Dr. Lisa Ncube says, “drank the assessment kool-aid,” share some of that ambivalence: while the “vast majority of faculty and staff…find the actual assessment worthwhile, especially formative assessment,” she adds that “(they) find the reporting requirements, hugely, a waste of time.” Her observation highlights that faculty do recognize the fundamentally different purposes of assessment: improvement of learning on one hand, and documentation for accreditation on the other. In NILOA’s inaugural Occasional Paper, Peter Ewell (2009) points out that since the early days of the assessment movement, the two purposes of outcomes assessment – improvement and accountability – have never rested comfortably together.
Dr. Joan Hawthorne and countless other assessment administrators try to remind faculty that they are, indeed, already doing “assessment” – and engaging in the type of assessment that is “effective for promoting greater thoughtfulness and purpose in teaching.” Perhaps faculty don’t recognize that they are already doing assessment because faculty and administrators define “assessment” in vastly different ways. Fuller wrote, “faculty may not call it ‘assessment,’ and when they discuss it, it may not match accreditation agencies’ preferred language. Still, when assessment of student learning is done, it brings a great deal of value to the faculty and student relationship—but it still often falls short of accreditors’ needs.”
Dr. Fuller attributes some of the tension between the purposes of assessment to a cultural divide between faculty and administrators. At Sam Houston State University, Fuller examines the influences on institutional cultures of assessment through the national Surveys of Assessment Culture. Asked to identify the primary reason assessment is conducted at their institution, faculty most frequently cited accreditation (39.8% of faculty surveyed), while improvement of student learning accounted for 32% of faculty responses. Assessment administrators, on the other hand, most frequently cited the improvement of student learning (38.6%), with accreditation accounting for 37% of their responses. While the differences between the two groups’ perspectives are not stark, educators should be most concerned that less than 40% of either group saw student learning as the primary reason for assessment. It appears that things haven’t changed much since AALHE discussed the results of the 2014 Faculty Survey of Assessment Culture.
Research by Fuller also supports the dichotomous views of assessment observed by Dr. Ncube and others. Dr. Sharon Bailey wrote, “It was obvious in the (Chronicle article) comments section that ‘assessment’ refers to the kind of assessment that everyone hates, while the assessment that everyone loves goes by a bunch of different names, such as ‘improving my teaching’ or ‘DQP’ or ‘thinking about what we could do better.’” Yes, people really do love learning outcomes assessment.
Results from Fuller’s study indicate that (gasp!) faculty care about assessment – 80.8% indicated caring about assessment on their campus – but they don’t like the “other stuff” that gets done to them in the name of assessment: 67.7% of faculty see assessment as an exercise in compliance, 59.3% say assessment “goes nowhere,” and 52% see it as a necessary evil. Despite all of this, only 35% of faculty see it as a punishment, and only 33.5% see it as a threat to academic freedom. Yet 77.5% say assessment really supports student learning (despite all the aforementioned disdain for it), and only 33.6% said that if assessment were not required they would not be doing it.
So where do we go now? Kuh and colleagues (2015) have written that this era’s question has become “what have students learned, not just in a single course, but as a result of their overall college experience” (p. 3)? As institutions of higher education, we need to move to a higher level of organizational learning agility; we cannot simply respond to the pressures of rising college costs, public credibility and performance-based funding in a rapidly changing global marketplace by throwing our academic arms in the air and saying, “Who knows?”
Gilbert’s final sentence states, “We should no longer accept on faith or intuition that learning-outcomes assessment has positive or consequential effects on our institutions – or students.” No, we definitely should not. But consider an alternative to this statement:
“we should no longer accept on faith or intuition that what/how we teach students has positive or consequential effects on our students – or their learning.”
The fact that the majority of those reading this now believe this sentence to be true is a testament to the benefits of the learning outcomes assessment movement – whether you conduct assessment for the purpose of improving learning, or for accreditation. Assessment works. But it needs more work.