EMERGING DIALOGUES IN ASSESSMENT

Multiple-Choice Assessment in Higher Education:  Are We Moving Backward?

November 14, 2017

Mary Tkatchov, Assessment Manager, University of Phoenix  

Contact information: Maryalice2525@gmail.com  (602) 323-7958

 

Writing a quality multiple-choice assessment is an art form that takes considerable time and skill.  Those who have not attempted to write one might not appreciate the difficulty of authoring three answer choices for each question that are 100% incorrect but still seem plausible. It takes an incredible effort to carefully craft items and ensure that performance on a multiple-choice assessment is reflective of students’ knowledge—not their ability to use clues on carelessly worded items or their misinterpretation of needlessly tricky wording.

Well-designed multiple-choice assessments are useful in a balanced assessment strategy that also incorporates constructed-response and performance assessments. Particularly when feedback is automated and immediate, multiple-choice assessments can be used for formative assessment that allows students to monitor their learning (Velan et al., 2008) or as effective pre-assessment tools to gauge students’ prior knowledge of course content. As summative assessments, they are appropriate when a broad representation of content knowledge must be measured before more authentic demonstrations of knowledge.

It is alarming, though, to hear leaders in higher education institutions recommend adopting predominantly objective, selected-response summative assessments as a solution for reducing the demands on faculty for grading, presumably so that these faculty members could take on more students without being paid more for them.  The fact that forms of assessment are being proposed as a simple solution to scalability issues in growing higher education institutions is of great concern to educators who prioritize the quality of the students’ learning experiences and their ability to transfer their learning to real-life situations.  

Are we in higher education still arguing about the dominance of multiple-choice assessment decades after the outcome-based movement drew attention to the need for more authentic performance assessment in “providing concrete, useful information to parents, employers, and colleges regarding the actual performance abilities of students” (Spady, 1994, p. 48)? Using predominantly multiple-choice (or some other version of selected-response) assessments is heading in the wrong direction when the goal is to provide high-quality, career-relevant learning experiences for students.

These leaders might even use research to support arguments that multiple-choice assessment items can be so complex that they measure critical thinking and the higher cognitive levels of Bloom’s taxonomy, such as application and analysis (Tractenberg et al., 2013; Morrison & Free, 2001). However, when determining the quality of an assessment method, consider not the difficulty of the assessment item, or the output to the student, but rather the degree of input from the student. The extent of the student’s input in a multiple-choice assessment is A, B, C, or D. Students are not challenged or given the opportunity to explain their thought processes or express opinions and support them with evidence. On that note, consider the following question:

Which of the following is a likely effect on students who graduate from higher education programs having taken mostly selected-response assessments?

 

a. Underdeveloped writing skills

b. Underdeveloped verbal communication skills

c. Underdeveloped media literacy skills

d. All of the above

 

This question is rhetorical because of course the answer is “d.” Yes, a predominantly selected-response assessment approach would make the collection of assessment data and the delivery of feedback to students quick and efficient. When efficiency in assessment becomes the highest priority, however, the price is limited opportunities for students to develop valuable and transferable written and verbal communication skills and many more of the 16 Essential Learning Outcomes featured in the VALUE rubrics from the Association of American Colleges & Universities (Rhodes, 2009). Such efficiency would also deny students the chance to graduate from a program with evidence of career-relevant learning: not simply test scores, but products that could be included in a portfolio or used on the job.

If assessment is evidence of student learning, then of what do multiple-choice assessments provide evidence?  Assuming the highest validity and reliability, at most a score on a multiple-choice assessment is evidence of a student’s ability to select the best of provided options.  But in the real world, people are not always going to be given options, and there will not always be an objectively “right” answer; they will need to be able to research problems, collect data, devise workable solutions, and justify their decisions (Marzano, 1994). 

Assessment is not only evidence of learning; it is also an opportunity for learning. Students can reach meaningful learning outcomes, according to the Nine Principles of Good Practice for Assessing Student Learning, through ongoing assessment “whose power is cumulative,” assessment that “entails a linked series of activities undertaken over time” (Astin et al., 1993). Rather than finding ways to make assessment quicker and more efficient, should we not be finding ways to make assessment more meaningful, engaging, and reflective of valuable career and life skills? If there are staffing and scalability issues that need to be addressed in higher education, the solution does not lie in compromising on purposeful and varied assessment strategies.

 

 

References

Astin, A. W., Banta, T. W., Cross, K. P., El-Khawas, E., et al. (1993, April). Principles of good practice for assessing student learning. American Association for Higher Education Assessment Forum.

Marzano, R. J. (1994). Lessons from the field about outcome-based performance assessments. Educational Leadership, 51(6), 44-50.

Morrison, S., & Free, K. (2001). Writing multiple-choice test items that promote and measure critical thinking. Journal of Nursing Education, 40, 17-24.

Rhodes, T. (2009). Assessing outcomes and improving achievement: Tips and tools for using the rubrics. Washington, DC: Association of American Colleges and Universities.

Spady, W. (1994). Outcome-based education: Critical issues and answers. Arlington, VA: American Association of School Administrators.

Tractenberg, R. E., Gushta, M. M., Mulroney, S. E., & Weissinger, P. A. (2013). Multiple choice questions can be designed or revised to challenge learners’ critical thinking. Advances in Health Sciences Education: Theory and Practice, 18(5), 945-961.

Velan, G. M., Jones, P., McNeil, H. P., & Kumar, R. K. (2008). Integrated online formative assessments in the biomedical sciences for medical students: Benefits for learning. BMC Medical Education, 8(52). https://doi.org/10.1186/1472-6920-8-52

 

