|EMERGING DIALOGUES IN ASSESSMENT|
Assessment in Externally Accredited Programs
March 13, 2018
Creating and maintaining a strong assessment program can be a challenge at any institution. Conveying the value of and reasoning behind assessment seems to be part of everyday communication for those of us working in assessment. Because so much of my work depends on access to the assessment work faculty are doing, their consistent and full engagement is immensely important. The idea of exempting externally accredited programs from our campus-wide assessment process was recently raised at my university, and my immediate thought was:
“Why would any program be exempted from our university assessment plan?”
I had to take a step back to consider our current process and think about why our campus collects and uses student learning data from program assessment. A few different thoughts ran through my head about external accreditation and home-university assessment policy as I began crafting a response to the idea of a college-based exemption:
● The purpose and use of our campus-wide assessment plan
● An accreditor’s idea of assessment v. Dearborn’s idea of assessment
● Format v. content of assessment reports
● What does ‘exemption’ even look like?
I began by looking at how other assessment professionals work with externally accredited programs whose assessment practices differ from their own. I reached out to assessment and accreditation colleagues through the ASSESS listserv community and was pleasantly met with a number of responses and conversations about program assessment on different campuses. Colleagues such as Laura Williams from Western Governors University, Karla Sanders from Eastern Illinois University, Linda Suskie, and many more shared their thoughts and practices. Their responses highlighted the importance of full participation in assessment at all levels, and why including everyone in assessment helps to unite programs and to use student learning data more effectively and strategically.
A large part of program assessment is aligning course, program, and institutional student learning goals so that these measures represent the actual knowledge and skills of students who graduate from the university. Program learning goals should be mapped to institutional-level goals, which in turn should be tied to the university’s mission and vision. This alignment of learning goals throughout the institution, anchored in mission and vision, helps to establish a culture of intentional and meaningful assessment.
On my campus, as on many others, a faculty-led and faculty-owned committee (the Assessment Subcommittee) coordinates and drives assessment activities, while I, as Assessment Manager, manage and advise on these matters as well. To effectively measure and monitor institutional learning goals, and to have insight into student learning at the academic program level, the Assessment Subcommittee asks that all programs within the university follow the university assessment plan. The plan requires programs to submit assessment reports biennially to the Assessment Manager and the Subcommittee for review through a centralized cloud storage space. My role as Assessment Manager is to review all of the submitted reports, maintain insight into every program’s assessment data and work, and bring that insight into higher-level conversations at the university about student learning and student success. The Subcommittee created a report template that includes the components of the assessment process deemed important to capture and reflect upon on a regular basis. A common rubric is also used to assess the reports, giving constructive feedback to a rotating sample of programs.
For some programs, an external accreditor may also have assessment guidelines or requirements. As many colleagues pointed out, these guidelines can differ from locally developed practices. Accreditors sometimes focus on meeting industry-specific standards rather than the overarching outcomes that institutional goals tend to exemplify, and their reporting guidelines may emphasize different ideas or areas of assessment than those of home institutions. Our campus, for example, like many others, has recently focused on “closing the loop” activities: how programs actually use the data they collect for program revision and improvement. This area is of high importance in our assessment reports, whereas external accreditors may not focus as much on reflection and action. In most cases at Dearborn, to avoid additional work for faculty, the Assessment Subcommittee has agreed to accept assessment reports in alternative formats as long as the same components are present and the overall guidelines align with the university’s assessment plan.
Suzanne Thomas of the Medical University of South Carolina noted the importance of working to decrease the burden on faculty while maintaining a university standard. The sentiment was echoed by Linda Suskie, who reminded me that a flexible approach to both reporting structure and timeline, while creating work on my end, makes assessment more approachable and meaningful to faculty. As Peter Ewell discussed at length in his 2009 occasional paper for the National Institute for Learning Outcomes Assessment, assessment is often conflated with accountability. While accountability is necessary in its own right, I try not to use the word compliance or pull the Higher Learning Commission (HLC) regulations card too often when I talk with faculty, because I want to break assessment’s tie to something forced and mandated. Rather, I want to encourage programs and faculty to own their curriculum and their pedagogy, to empower them to see how their work impacts students and the university as a whole, and to show that, outside of templates and deadlines, there is value in assessment and in a collaborative assessment environment.

While the format and reporting structure may differ, the expectation that all programs participate in the university-wide assessment plan, with the collaborative environment of the Assessment Subcommittee built in, remains constant regardless of external accreditation status. This practice seems to be in line with other institutions, as many of the responses suggested their assessment guidelines are similar to Dearborn’s. LaMont Rouse, of The College of New Jersey, raised the idea of faculty being isolated in their own disciplines and failing to see how they are institutionally intertwined. Including all programs in campus assessment expectations helps to reiterate the connectedness between colleges, programs, and faculty.
The call for central oversight is not punitive; rather, it empowers a higher-level, cross-college body of faculty to coordinate assessment, breaking down barriers between units and encouraging collaboration and consistency where they make sense, and adaptability where they do not.
Although faculty are constantly assessing in their own courses and programs, almost daily, through observation and conversation, showing tangible evidence of student learning requires more than verbal confirmation that students are learning. This direct evidence and qualitative reflection is what programs are asked to provide in the assessment data and reports they submit to the Assessment Subcommittee. Asking that all programs share their assessment data and reports with the Subcommittee, whether they follow Dearborn’s assessment guidelines or an external accreditor’s, helps the university community engage with student learning at every level and think intentionally about how each program feeds into institutional goals and student success. Without access to certain colleges or programs, the Assessment Subcommittee could not accurately analyze and discuss student learning across the university as a whole; there would be substantial gaps in information. Exempting a program from reporting or sharing requirements excludes the great work being done in that program from the analysis and conversations, and leaves the program disconnected. As assessment and student outcomes become increasingly woven into strategic planning and overall institutional effectiveness, access to data, knowledge, and experience, along with faculty buy-in, is crucial. Beyond federal compliance, consistency between units, and all of the bureaucratic and administrative reasons, collecting, using, and sharing program assessment data is simply best practice in higher education. Student learning impacts student success, and at a time when improving graduation and retention rates is at the forefront of strategic plans, full and meaningful assessment has a key role in moving those numbers forward.
Ewell, P. (2009, November). Assessment, accountability, and improvement: Revisiting the tension (NILOA Occasional Paper No. 1). National Institute for Learning Outcomes Assessment.