Emerging Dialogues in Assessment
Dead Bunnies are Not the Problem
April 17, 2016
When I first read the recent commentary (http://chronicle.com/article/A-President-s-Plan-to-Steer/234992) by Katherine Mangan and the special reports (http://chronicle.com/specialreport/Uproar-at-Mount-St-Marys/30?rc-sidebar) that followed regarding Mount St. Mary’s University and its President’s plan to encourage at-risk students to withdraw earlier (rather than later) in the semester, I was struck by the fact that there are two competing issues at play. Of course, the quote attributed to the President by a faculty member indicates a disdain for students who are at risk of failing out of the university. The faculty member reported that he said, “…you think of the students as cuddly bunnies, but you can’t. You just have to drown the bunnies … put a Glock to their heads.” This is, frankly, shocking to read. Higher education has faced shooting events on campus, and using this type of analogy certainly shows poor judgment. But this is not the real issue.
The real problem is not “dead bunnies.” The most pertinent and truly pressing issue is the system in place in higher education that rewards high retention and completion rates (as though these were valid measures of how well an institution is serving its students and the community) rather than rewarding actual learning and critical thinking. As I have no knowledge of this President or his contract, I can only guess at what he may have been told; he may have been charged with increasing the retention rate. Many states and the federal government are moving toward performance-based funding, and the reward for meeting such metrics is much-needed resources. Please understand that I believe retention and completion are important – I hope that our students come to our institutions, find a good fit, learn a lot, and stay until they graduate within four years. However, these are not real measures of learning, and they do not measure whether an institution is doing what its mission states.
So, then, what is that magical measure? What is the metric that we should all be using to show that we are meeting our mission, vision, and values statements? Ah, there’s the rub: we don’t have one. As someone who works in assessment and accreditation at my institution, I help faculty and programs identify what, specifically, students should gain by successfully completing a program or major. There are some wonderful measures of learning on my campus and on every campus I have ever visited. However, they are not the same measures across campuses or institutions, and no one of them (or even several together) tells the full story of what is learned. But I can assure you, the story is not told accurately or completely by the types of measures usually requested by state boards of education or even accreditors.
That leaves higher education in a terrible bind. We know that students come to our campuses and that many of them learn a lot, change the way they think, become better problem-solvers, and leave after successful graduation to do some really good things in the world. But this type of (often anecdotal) evidence is not enough to tell our story. We must develop ways of sharing measures of learning that can be read and understood by those outside the academy. And we must be bold enough to ask the difficult questions about student learning that will help us change and improve higher education. If something isn’t working, we need to know as soon as we can so that we can modify our approach.
There are many who are working on better measures of learning. There is the Degree Qualifications Profile (http://degreeprofile.org/), funded by the Lumina Foundation, which might provide a different way to share what credentials students are gaining throughout their college careers; there are also multiple movements toward authentic assessment in the form of ePortfolios and other capstone-type projects. Others, including the Voluntary System of Accountability (VSA – http://www.aplu.org/projects-and-initiatives/accountability-and-transparency/voluntary-system-of-accountability/) and the College Portrait (http://www.collegeportraits.org/), have tried to find ways to show some of the complexities of higher education and student learning. But none of these systems has been fully embraced. There is – most certainly – a disconnect between what is required by states and accreditors and what individual faculty and student affairs staff do to promote student learning in our institutions. Humans in general, and college administrators in particular, work hard to do the things that will be rewarded. Therefore, if the requirement is to increase retention, there are ways to do so that may not be what was initially desired. Anyone who has taken an introductory statistics class knows that.
And that is what may be happening at Mount St. Mary’s University. In attempting to meet goals set by those who may not understand or be interested in student development and student learning, we may all be diverted down a rabbit hole. And while President Newman’s “dead bunnies” may improve the retention statistics an institution reports, this approach does nothing to improve student learning or what we do on our campuses. The faculty at Mount St. Mary’s clearly knew that something was wrong, and they voted “no confidence” in their president. President Newman did, eventually, resign after the institution’s accreditor began asking questions about the events that took place there (https://www.washingtonpost.com/news/grade-point/wp/2016/02/29/mount-st-marys-future-direction-on-the-table-as-leaders-meet-today/).
But we in higher education must always be focused on student learning rather than only on the metrics required for accountability. We must develop better ways to measure learning and be able to communicate that learning to those outside our institutions. If we don’t, our state legislators and the federal government will do it for us. And that will not be a good thing.
Catherine M. Wehlburg is the Associate Provost for Institutional Effectiveness at Texas Christian University and the President-Elect of the Association for the Assessment of Learning in Higher Education (AALHE).