2017 Pre-Conference Workshops

Monday, June 12, 2017, before the conference begins. Cost: $120 for full-day workshops; $60 for half-day workshops.

Full Day (9:00 am - 4:00 pm, lunch on own)
1. Assessment Fundamentals: Optimize Teaching and Learning Through Learning Outcomes Assessment
by Monica Stitt-Bergh, Terri Flateby, and George Klemic

2. Designing, Aligning, and Measuring your Campus-Wide Assessment System
by Karen Hicks

3. More than Results: An Advanced Workshop Integrating Assessment with Learning Improvement
by Keston Fulcher and Megan Good

Morning (9:00 am - 12:00 pm)
4. Assessment and More: Rubrics as Tools for Faculty Engagement and Improving Assignments
by Joan Littlefield Cook, Linda Yu, Barbara Bren, and Jolly Emrey

5. Beyond the Basics: What (would-be) Assessment Leaders Need to Know
by Joan Hawthorne, Catherine M. Wehlburg, and Jean Downs

6. Implementing Curriculum Review: From Designing the Process to Using the Findings
by Jane Marie Souza

Afternoon (1:00 pm - 4:00 pm)
7. Effective Data Use: Ask the Right Questions, Get the Right Answers
by Javarro Russell and Ross Markle

8. Evidence-Based Learning Outcomes: Transparent and Developmental Student Assessment through Core Competencies
by Patricia L. Farrell-Cole and Julie Davis Turner

9. Meta-Assessment: Evaluating and Improving the Quality of Academic Program Assessment
by Nick Curtis, Tom Waterbury, and Allison Ames



1. Assessment Fundamentals: Optimize Teaching and Learning Through Learning Outcomes Assessment
Monica Stitt-Bergh, Terri Flateby, and George Klemic
9:00 AM – 4:00 PM, $120 (lunch on your own, 12:00-1:00)

Do you want to build a foundation for using assessment to optimize student learning or help academic programs evolve? Have you been asked to participate in assessing student learning outcomes (SLO) in your department and have no idea what to do? Have you been assessing SLOs under the direction of others, who are now rotating out of the assessment mix, leaving you to teach others about assessment?

This is the workshop for you.

We will increase your assessment comfort level and knowledge through discussion and activities designed to do the following: identify the purpose(s) of SLO assessment; explain types of assessment; present a typical assessment cycle to highlight the connection between teaching and learning; craft SLO statements; design and use a curriculum map; review types of direct and indirect evidence of learning; introduce types of tools for assessment; and interpret results and identify possible actions to improve teaching and learning.

Participant Learning Outcomes. By the end of the workshop, participants will be able to

a. state the purpose and describe the utility of student learning outcomes (SLO) assessment;
b. describe a generic SLO assessment cycle and apply that generic description to their assessment situation;
c. explain or translate SLO assessment jargon into disciplinary or non-disciplinary specific language;
d. select appropriate types of learning evidence and tools to evaluate learning;
e. interpret assessment results and list 1-2 possible actions to use the results for learning improvement;
f. synthesize appropriate and available external and internal information about SLO assessment into a coherent and practical plan that can succeed in their venue.

Level: Beginner

Target Audience. We assume the audience members

a. are faculty, administrators, or staff from an academic program or in academic affairs [note: our emphasis is on academic SLO assessment. While the concepts are similar to outcomes assessment in student affairs and co-curricular programs, all of our examples will be from academic programs].
b. have had limited experience with learning outcomes assessment and likely little experience leading an assessment project.
c. have been asked to play a key role or lead role in assessment for a program, college, or institution.
d. want to use assessment as an integral part of teaching and learning.
e. want to gain fundamental knowledge about conducting learning outcomes assessment in higher education for the purpose of student learning improvement.
f. want to be better prepared for the AALHE conference by having fundamental knowledge about assessment’s purpose and process.

Activities will include scenario-based discussions, quizzes, curriculum map analysis, and the creation of a plan to move assessment forward.

Facilitators
Monica Stitt-Bergh is an associate specialist in the Assessment Office at the University of Hawai‘i. Her specialization is in assessing written communication. In her current position, Monica serves as an internal consultant for and offers workshops on learning outcomes assessment, and she plans and conducts institutional assessment projects. She has spent the last eight years working to create a positive view of assessment and increase use of assessment findings. Previously, Monica assisted with the University of Hawai‘i’s writing-across-the-curriculum program and implementation of a new general education program. Her classroom experience includes teaching courses on writing as well as social science research methods. Monica received her BA in English from the University of Michigan and her MA in Composition and Rhetoric and PhD in Educational Psychology from the University of Hawai‘i. She has published and given conference presentations on program learning outcomes assessment in higher education, writing program evaluation, self-assessment, and writing-across-the-curriculum. She is the current President-elect of the Association for the Assessment of Learning in Higher Education and former president of the Hawai‘i-Pacific Evaluation Association.

Terri Flateby is the Associate Vice President of Institutional Effectiveness at Georgia Southern University, a position she assumed after beginning as the Director of Academic Assessment. Prior to coming to Georgia Southern, she was the Director of Academic Assessment at the University of South Florida. Viewing her role as facilitating the understanding that assessment is a faculty-driven process focused on the interrelationship of teaching and learning and the use of evidence to guide curricular and instructional changes, she is championing efforts to ensure faculty are recognized and rewarded for engaging in assessment. Terri holds a Ph.D. in Educational Measurement, Research, and Evaluation from the University of South Florida and has presented and consulted on academic, administrative, and student affairs assessment for over twenty years at numerous institutions, both nationally and internationally. She has published on assessment-related topics, including the assessment of writing and thinking and the contributions of assessment to student learning. She is on the Association for the Assessment of Learning in Higher Education Board of Directors.

George G. Klemic, D.B.A., is a professor of Business Administration and Department Chair in the College of Business at Lewis University, in the Chicago suburbs. He has been at Lewis for ten years; prior to that, he served as Associate Professor, Department Chair, and Dean of the School of Business and Management at Notre Dame de Namur University in Belmont, California. George has served on assessment committees at both universities at the department, college, and institutional levels. Part of that service included developing and delivering “boot camp”-like training on SLO assessment, coaching and mentoring new assessment recruits, and leading assessment projects. He has presented on SLO assessment at the IUPUI Assessment Institute, at AALHE, at meetings of WASC and the North Central Higher Learning Commission, and at a conference of the Association for Institutional Research. His current SLO interests involve the concurrent and coordinated use of SLO assessment and assessment of co-curricular activities to improve SLOs.


2. Designing, Aligning, and Measuring your Campus-Wide Assessment System
Karen Hicks
9:00 AM – 4:00 PM, $120 (lunch on your own, 12:00-1:00)

For successful implementation of our student success initiatives, we must have an understanding of an organization’s anatomy and how our assessment work fits within this anatomy. The importance of developing our ability to “synthesize” issues and elements throughout the organization cannot be overstated. Analysis of the issues is important but not sufficient. We must not only look at “performance problems” in detail, but also examine how those problems relate to the system’s structure, culture, conditions, and consequences to fully understand how the College assessment system is performing and to select appropriately aligned student success interventions. This workshop will provide participants with a systemic and practical framework for designing, aligning, and measuring a campus-wide assessment system that is aimed at student impact, facilitates a collaborative approach to assessment design and implementation, and generates ongoing feedback about how close (or far) we are from our College-wide assessment system goals.

Participant Learning Outcomes. By the end of the workshop, participants will

a. describe the four levels of alignment in College-wide assessment;
b. select learning interventions aligned to documented gaps in results;
c. map the performance system of your College;
d. align stakeholder expectations to the strategic priorities of your College;
e. align four levels of outcomes to the strategic priorities of your College;
f. design change management strategies for your College-wide assessment plan.

Level: Intermediate

Target Audience. The facilitator assumes the audience

a. has a role in conducting or participating in assessment design and/or implementation;
b. would like to integrate assessment work (make the pieces into the whole);
c. would like to better understand the systemic value of assessment work;
d. would like to have a workable plan to design, align, and measure College-wide assessment work; and
e. would like to experiment with many tools that may be applied to the various stages of College-wide assessment design, alignment, and measurement.

Activities include a skills inventory self-assessment; individual work using worksheets and templates (including a logic model); and pair work in a guided role play.

Facilitator
Dr. Karen Hicks currently serves as the Director of Assessment at Lansing Community College, Lansing, Michigan, adjunct faculty at Wayne State University in the College of Education, and Chief Operations Officer of the Institute for Needs Assessment and Evaluation. Her research and practice are centered on helping organizations build measurement and evaluation capabilities and on demonstrating the strategic value of work through strategic alignment assessment and continuous improvement. Dr. Hicks has published academic and practitioner articles in Performance Improvement, Journal of Business and Technology, Performance Improvement Quarterly, Evaluation and Program Planning, and TD at Work.


3. More than Results: An Advanced Workshop Integrating Assessment with Learning Improvement
Keston Fulcher and Megan Good
9:00 AM – 4:00 PM, $120 (lunch on your own, 12:00-1:00)

Use of results for improvement is cited frequently as a major purpose for assessment. Unfortunately, most institutions struggle to connect the results of assessment with student learning improvement. This workshop is designed to clarify improvement and how to achieve it. Attendees will work through the logistics of an example learning improvement initiative and then consider how to adapt the model at their home institutions. In 2015 CHEA recognized this effort through its Award for Outstanding Institutional Practice in Student Learning Outcomes.

Participant Learning Outcomes. By the end of the workshop, participants will

a. describe the prevalence of improved learning in higher education (or lack thereof);
b. explain the various definitions of use of assessment results;
c. explain the PLAIR model of improved learning (Fulcher et al., 2014);
d. identify potential challenges to improving learning;
e. examine and explore improvement implementation plans; and
f. synthesize ideas about how they can help their own institution evidence learning improvement.

Level: Advanced

Target Audience. We assume the audience

a. has moderate knowledge and experience related to assessment;
b. has moderate knowledge and experience related to curriculum and pedagogy; and
c. acknowledges that higher education has yet to fulfill its promise of improving academe.

Activities will include think-pair-share, small and large group discussions, case study analysis, and creation of action plan.

Facilitators
Dr. Keston Fulcher is the Executive Director of the Center for Assessment and Research Studies and Associate Professor of Graduate Psychology at James Madison University. Previously, he served as the Director of Assessment, Evaluation, and Accreditation at Christopher Newport University in Newport News, VA, and oversaw assessment, institutional research, and planning at Eastern Shore Community College. Dr. Fulcher’s research focuses on structuring higher education to better demonstrate learning improvement. He has presented across the country on the topic and written a NILOA Occasional Paper: A Simple Model for Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig. Additional information is available at www.psyc.jmu.edu/gradpsyc/documents/Fulcher_CV.pdf.

Dr. Megan Good is the Director of Academic Assessment at Auburn University. She earned her doctorate in Assessment and Measurement at James Madison University (JMU). At JMU, she served as an assessment consultant in the Center for Assessment and Research Studies and also as an Assessment Coordinator for the Center for Faculty Innovation. Her dissertation focused on how programmatic learning improvement could be achieved by connecting assessment with faculty development work. Megan joined Auburn in 2015 as the inaugural Director of Academic Assessment. In this role, she supports general education assessment and the assessment work of 250 academic degree programs. To most effectively aid the academic degree programs on campus, she created a “meta-assessment” system to help improve the quality of assessment work across campus.


4. Assessment and More: Rubrics as Tools for Faculty Engagement and Improving Assignments
Joan Littlefield Cook, Linda Yu, Barbara Bren, and Jolly Emrey
9:00 AM – 12:00 PM, $60

Increasingly, faculty and staff turn to rubrics for assessing student learning, but how do they know if these rubrics accomplish their goals? This workshop will provide an opportunity for participants to walk through a rubric development journey: development, refinement, and application of a reliable assessment tool, including adaptation of a rubric for specific purposes. The workshop will also demonstrate how the rubric development process engages faculty/staff in assessment and promotes an understanding of rubrics as tools for improving assignments, instruction, and curricula. Through hands-on and interactive exercises, participants will discuss benefits and challenges of developing a rubric, methods to improve reliability and validity, and ways to modify an existing rubric to fit their specific needs. Participants are invited to submit, in advance of the workshop, examples of assignments that address critical thinking skills for possible use during the workshop.

Participants will be contacted before the workshop and encouraged to submit assignments via email.

Participant Learning Outcomes. By the end of the workshop, participants will

a. understand the structure and function of rubrics for assessing student learning.
b. gain knowledge and skills needed to successfully develop and calibrate rubrics.
c. gain experience applying rubrics to fit specific program/instructor needs.
d. use rubrics for assignment design or modification.

Level: Beginner. Participants do not need to have prior knowledge of or experience using rubrics.

Target Audience. We assume the audience will consist primarily of faculty/instructional staff and assessment staff who are interested in

a. engaging faculty and staff with assessment of student learning;
b. learning how rubrics can be used to assess student learning and improve instruction; and
c. rubric validity and reliability.

Activities will include small and large group discussions, rubric building, and application of a scoring guide to student work.

Facilitators
Joan Littlefield Cook is the Director of Academic Assessment at the University of Wisconsin-Whitewater. She is a professor and former chair of the Psychology Department. She has been involved in assessment at the department, college, university, and system levels throughout her career, with special emphasis on assessing critical thinking. Joan was integral to her campus’ Degree Qualifications Profile project as well as its recent Higher Learning Commission reaccreditation review. Joan’s research focuses on strategy development, and she has co-authored four textbooks and numerous instructional materials. She earned her MS and PhD degrees from Vanderbilt University.

Linda Yu is the chair and professor of Finance and Business Law at the University of Wisconsin-Whitewater. Linda’s research focuses on the area of fixed income, especially on Treasury Inflation Indexed Securities, corporate financial management decisions such as IPOs, and corporate governance. In addition, she has directed or participated in assessment projects at the department, college, and university levels, including projects to assess critical thinking. Linda earned her Ph.D. from the University of Memphis.


Barbara Bren is the Head of Reference and Instruction and the government information librarian at Andersen Library at the University of Wisconsin-Whitewater. In addition to serving as the library liaison for several departments, Barb has led or participated in multiple assessment projects across campus, including assessment of critical thinking, information literacy, and UW-Whitewater’s General Education program. Barb earned her MS degree from the University of Wisconsin-Madison.


Jolly Emrey is the chair and associate professor of Political Science at the University of Wisconsin-Whitewater, and Director of the Center for Political Science and Public Policy Research. Jolly’s research focuses on American politics, including law and courts, public policy, and state and local government. In addition, she has directed or participated in assessment projects at the department, college, and university levels, including projects to assess critical thinking. Jolly earned her Ph.D. from Emory University and an MA degree from California State University, Los Angeles.


5. Beyond the Basics: What (would-be) Assessment Leaders Need to Know
Joan Hawthorne, Catherine M. Wehlburg, and Jean Downs
9:00 AM – 12:00 PM, $60

“Beyond the Basics” is designed to help participants develop the skills and knowledge that will enable them to work productively in positions that include some responsibility for leadership in assessment. This is not designed for those new to the assessment field, but rather for those who have been involved in assessment and want to work toward leadership positions in the field. Such positions can be programmatic or institutional, and can range from serving on a committee charged with assessment oversight to spearheading assessment activities for a program or entire institution. Attendees will practice standard assessment techniques like curriculum mapping, assessment reviews, and designing effective assessment strategies, but they will move beyond “how-to-do” such activities into consideration of WHY such an activity or strategy might be useful in a particular situation. The focus is on developing skills that can enable collegial partnerships to support assessment aimed at improving student learning.

Participant Learning Outcomes. By the end of the workshop, participants will be able to

a. implement and evaluate assessment practices that are typically viewed as standard in the field.
b. evaluate assessment practices from a program or institutional cost-benefit perspective in order to determine the value of a practice in a specific context.
c. describe and use strategies for working collaboratively with faculty to support assessment that supports improved student learning.
d. articulate ways in which stakeholder expectations for accreditation and accountability intersect with assessment decisions.

Level: Intermediate

Target Audience. We assume the audience members

a. include faculty and staff who may want to move (or have recently done so) from “doing assessment” to “leading assessment.”
b. have specific assessment responsibilities, although those responsibilities may be as members of assessment committees.
c. have uneven levels of assessment skills (e.g., “knowing how” a department meets specific program accreditor expectations, but unaware that other kinds of assessment activities may be equally appropriate in another context).
d. include individuals who may “know the basics” but are less confident in evaluating options in order to help colleagues develop productive assessment strategies.

Activities will include small and large group discussions and activities.

Facilitators
Dr. Joan Hawthorne oversees assessment and institutional accreditation at the University of North Dakota. She also teaches for the Honors program and for graduate programs in higher education housed within UND’s Departments of Teaching & Learning and Educational Leadership. Her research is primarily in assessment, often in relationship to general education programs. She is also interested in teaching and learning change efforts, driven by assessment, that occur beyond the level of the individual course. She is co-facilitating, with a colleague from the teaching center, a revamp of the core curriculum within UND’s College of Business & Public Administration. A current research project, conducted in conjunction with colleagues from the teaching center and the Biology Department, investigates the impact of moving the six-course core curriculum of an undergraduate science program into large-scale (180-seat), active learning classrooms. Dr. Hawthorne is a frequent presenter at AALHE, AAC&U General Education & Assessment and the Higher Learning Commission. She serves as a board member for AALHE, and as an Academy Mentor and consultant for the HLC.

Dr. Catherine M. Wehlburg is the Associate Provost for Institutional Effectiveness at Texas Christian University. She has taught psychology and educational psychology courses for more than a decade, serving as department chair for some of that time before branching into faculty development and assessment. Dr. Wehlburg has worked with both the Higher Learning Commission of the North Central Association and the Commission on Colleges of the Southern Association of Colleges and Schools as an outside evaluator. In addition, she has served as editor of To Improve the Academy and is currently the Editor-in-Chief for the New Directions in Teaching and Learning series. Dr. Wehlburg regularly presents workshops on assessment, academic transformation, and the teaching/learning process. Her books include Promoting Integrated and Transformative Assessment: A Deeper Focus on Student Learning and Meaningful Course Revision: Enhancing Academic Engagement Using Student Learning Data. She earned her Ph.D. in educational psychology from the University of Florida. She is currently the President of AALHE and has served on the AALHE board for several years.

Jean Downs is an Instructional Designer and Assessment Specialist for Partner in Publishing, where she supports publishers in creating effective and personalized Open Educational Resources (OER) and textbooks for institutions of higher education. Prior to this position, she served as Dean of Institutional Effectiveness and Assessment at Del Mar College in Corpus Christi, Texas. She has coordinated academic and co-curricular learning outcomes assessment, program review, and assessment reporting for five years within the mission of the community college. Jean has experience with accreditation requirements and reporting through her positions at higher education institutions accredited by the Higher Learning Commission (HLC) and the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC). She is a PhD candidate in Education at Capella University, enrolled in the Instructional Design for Online Learning specialty. Within AALHE, she is an active member of the Member Services Committee and a co-editor for Emerging Dialogues.


6. Implementing Curriculum Review: From Designing the Process to Using the Findings
Jane Marie Souza
9:00 AM – 12:00 PM, $60

Periodic curriculum review is essential to maintaining a quality educational program. While faculty and administrators may readily agree with that, how to implement the review process may be much less evident. Questions abound: How do we schedule the review? How long should it take? How are duties assigned? How do we manage the process? What should we look at when reviewing individual courses? What evidence do we use to support our conclusions? And perhaps most importantly: How do we plan for use of our findings? The answers to these and other common questions will be explored in this pre-conference workshop. Participants in the workshop will be provided handouts, including a set of possible research questions, a sample evidence bank, and tools to align course-level assessments. They will then be tasked with using the tools to outline a process to fit their unique educational settings.

Participant Learning Outcomes. By the end of the workshop, participants will

a. outline a plan and timeline for a curriculum review process.
b. draft research questions to guide an effective curriculum review.
c. identify appropriate sources of evidence to address research questions.
d. outline a process to follow through on review findings.

Level: Intermediate

Target Audience. The facilitator assumes the audience

a. has some knowledge of program outcomes;
b. has been involved with or plans to be involved with a curriculum review process, but does not have a clear picture of how to accomplish it (or perhaps has had a negative experience with it); and
c. is in the position to work with faculty on reviewing course data.

Activities will include discussion; completing sample assignment worksheets, an assessment tracking map, and a sample action plan; and reviewing research questions and an evidence bank.

Facilitator
Jane Marie Souza, PhD serves as the Assistant Provost for Academic Administration/Chief Assessment Officer, University of Rochester. A board member of the Association for Assessment of Learning in Higher Education, she is an editor for the organization’s publication, Intersection. Dr. Souza has served on accreditation teams for multiple agencies including New England Association of Schools and Colleges, Middle States Commission on Higher Education, Accreditation Council for Pharmacy Education, and Council on Podiatric Medical Education, where she is also a member of the Council. Dr. Souza has served as an assessment consultant for institutions across the country, offering workshops on the use of technology in the classroom, mapping curricular outcomes, and meeting accreditation standards through effective assessment. She has presented at conferences including the Association for Institutional Research, Association for the Assessment of Learning in Higher Education, Assessment Institute in Indianapolis and Association for Medical Education in Europe. Dr. Souza was keynote speaker at the Assessment Institute at Indianapolis 2015 and the Drexel Assessment Conference 2016.


7. Effective Data Use: Ask the Right Questions, Get the Right Answers
Javarro Russell and Ross Markle
1:00 – 4:00 PM, $60

Seemingly every institution is interested in using data to improve student learning and student success. Gathering data for these purposes allows institutions to explore and perhaps demonstrate the effectiveness of their educational programs, activities, and support systems in fostering student learning, and ultimately increasing student success. While institutions generally have plenty of data sources, there are often challenges to turning those data into information, and that information into action. During this session, we will examine the common and important questions that institutions are asking about student success and student learning. We will explore various data requirements and methodologies for answering such questions. Lastly, we will work through articulating the relationship between student learning and student success. Attendees will come away with a toolbox of skills for using their data to explore, inform, and improve student learning and success within their institutions.

Attendees are encouraged to bring a computer and data sources to use as reference throughout the workshop.

Participant Learning Outcomes. By the end of the workshop, participants will

a. identify compelling questions about student success and student learning that encourage the closing of the assessment loop.
b. identify types of data required to answer specific questions about student learning and student success.
c. identify analyses for uncovering the performance of students on learning outcomes.
d. identify analyses for uncovering the key drivers of student success.

Level: Intermediate

Target Audience. We assume the audience

a. has worked with several different types of assessments of student learning.
b. has an understanding of Microsoft Excel (and perhaps some other statistical software, such as SPSS).
c. has opportunities at their institution to meet with faculty members regarding assessment activities.
d. conducts assessment at multiple levels (e.g., program and institutional).
e. has at least some interest in student learning and student success.

Activities include group discussion, identification of data sources, and creation of an analysis plan for the participant’s institution.

Facilitators
Javarro Russell, Ph.D. is a Senior Research and Assessment Director in the Global Education Division at Educational Testing Service in Princeton, NJ. Javarro obtained his doctorate in Assessment & Measurement from James Madison University. He has a background in consulting on measurement and assessment design issues in higher education. In his current role, he assists institutions in identifying solutions to assessing and measuring student learning outcomes on their campuses. Javarro also specializes in identifying effective ways of reporting assessment results to audiences with varying levels of expertise in assessment and measurement.

Ross Markle is a Senior Research and Assessment Director for the Higher Education Division at Educational Testing Service. In his role, he supports ETS’ thought leadership efforts in higher education by collaborating with operational and research areas, as well as the higher education community. Ross also works directly with colleges and universities to promote the effective use of assessments and data in student success efforts, focusing on the areas of holistic advising, course placement, institutional planning, predictive modeling, and supporting traditionally underserved populations. He has also worked in ETS’ Research and Development Division, researching the assessment of noncognitive and 21st century skills, student success, and student learning outcomes assessment in higher education. Prior to joining ETS’ Higher Education Division, Ross obtained his Ph.D. in Assessment & Measurement Psychology from James Madison University, and also served as the Director of Co-curricular Assessment and Research at Northern Kentucky University.


8. Evidence-Based Learning Outcomes: Transparent and Developmental Student Assessment through Core Competencies
Patricia L. Farrell-Cole and Julie Davis Turner
1:00 – 4:00 PM, $60

To define explicit expectations of complex learning and to motivate student achievement, our faculty developed Core Competencies centering on four domains: knowledge, research, communications, and ethical/professional practice. The competencies form an annual, formative review of student progress toward expected learning outcomes. Our experience with this powerful tool demonstrates that diverse disciplines and levels of students benefit from competencies customized to their programs. In this workshop, participants will assemble developmental competencies specific to their training program(s). Effective assessment will be a focus of the workshop, resulting in preliminary rubrics for each participant. Attendees who desire student achievement metrics in all types of applied, performance, or professional programs will apply our findings directly, and other attendees with program goals that include professional behaviors, higher-order skills, and research-based projects will find this workshop exciting and instructive. Additional benefits include application to student progression, program successes (or failures), and continuous improvement.

Participant Learning Outcomes. By the end of the workshop, participants will learn how to

a. identify the unwritten curriculum in each participant’s program/major;
b. design a set of Core Competencies based on each participant’s program learning outcomes;
c. apply a developmental progression (e.g., rubric-based assessment) to attain program learning outcomes; and
d. use transparent processes of formative assessment to support teaching and mentoring.

Level: Intermediate

Target Audience. We assume the audience members are

a. faculty who are part of their program’s assessment or program review team. These individuals may want to learn how other institutions are focusing on competencies or may want to improve their model. They may also serve on the program review team and want to figure out ways to obtain input on student learning through course-taking, professional development, assistantships, internships, etc. As faculty, they will want to know the resources and processes we used to develop the competencies, but they may or may not have much assessment experience outside of assessing student learning in their own classes.
b. administrators who oversee or participate on program, college or university review teams. They will have an understanding of assessment as it pertains to student learning outcomes, but they may be struggling with how to specifically go about authentic and formative assessment.
c. assessment/evaluation specialists who administer or lead assessment at the program, college or university level(s). These individuals are the content experts for assessing student learning, and they are looking for different ways to conduct formative student assessment that may be novel or outside their purview. In addition, they may be looking to have deeper conversations with others who are experts or who are like-minded.

Activities include think-pair-share, small and large group discussions, and the development of a set of core competencies using a design-thinking approach.

Facilitators
Patricia L. Farrell-Cole, Ph.D. Patricia is the evaluation specialist for the Van Andel Institute Graduate School (VAIGS). She assists or leads evaluation of courses, core competencies, program review, student and faculty satisfaction, alumni, and other education efforts. She holds a PhD in Higher, Adult and Lifelong Education from Michigan State University and an M.A. in Organizational and Adult Learning from the University of New Mexico. Patty worked as an organizational development specialist for the Intel Corporation, but has spent the past 16 years in academia, where her work has included academic advising, evaluating programs and organizations, leading statewide programs, policy research, and teaching undergraduate and graduate courses.

Julie Davis Turner, Ph.D. As Associate Dean at the Van Andel Institute Graduate School (VAIGS), Julie Davis Turner, PhD, focuses on curriculum innovation and educational development, as well as program evaluation and accreditation. Curricular innovation, continuous program review, and full-circle feedback combine to fuel improvement in the PhD program by identifying important “next steps” for VAIGS. Student success is a primary focus of her work at VAIGS, complementing curricular design and educational development among faculty. With fewer than 30 students, VAIGS delivers individualized instruction and professional development for each doctoral candidate with the mission to develop leaders in biomedical research. Julie’s experience spans 20 years of teaching in the public and private sectors, designing and evaluating novel STEM curricula, directing practice-based assessment initiatives, and developing programs that bring high content to a low-stress environment. Since arriving at VAIGS, she has designed, executed, and evaluated peer mentoring programs among faculty, graduate students, interns, and postdoctoral fellows to facilitate their scientific growth.



9. Meta-Assessment: Evaluating and Improving the Quality of Academic Program Assessment
Nick Curtis, Tom Waterbury, and Allison Ames
1:00 – 4:00 PM, $60

Assessment is increasingly practiced in higher education (Ewell, 2009; Kuh, 2009; Hart Research Associates, 2016). Less common, however, are high expectations for the quality of assessment work. By quality, we mean assessment that answers important questions, produces results that are trustworthy, and leads to logical interventions to improve programs. It follows that low-quality assessment impedes interpretation and the ability to use results. From this perspective, James Madison University developed a meta-assessment process to evaluate program-level assessment reports and provide specific feedback to academic programs. These reports are evaluated on a rubric consisting of 14 elements, each scored using a 4-point, behaviorally anchored scale. There is extensive reliability and validity evidence to support the inferences made from this process. Participants will be introduced to this rubric, which is perhaps the most comprehensive in higher education, and will learn to apply these skills to assessment reports at their own institutions.

This workshop will be a much more extensive and in-depth version of a webinar that two of the primary presenters facilitated for AALHE in October 2016.

Participant Learning Outcomes. By the end of the workshop, participants will

a. describe a general, six-step assessment model and the meta-assessment process;
b. navigate JMU’s meta-assessment rubric;
c. practice using JMU’s meta-assessment rubric to evaluate a real academic program assessment report; and
d. provide feedback to their home institutions about the value of meta-assessment and the logistics of implementing the process.

Level: Beginner

Target Audience. We assume the audience

a. values the role of assessment in higher education;
b. wishes to increase the quality of the assessment processes and information gathered from those processes;
c. is looking for ways to partner with faculty and involve them in assessment; and
d. has a working knowledge of how assessment is currently practiced at their own institution.

Activities will include small and large group discussions, applying the rubric, writing comments on assessment reports, and receiving feedback.

Facilitators
Nick Curtis serves as the lead assessment consultant to academic programs at James Madison University. In this role, he works to assist programs at JMU seeking to measure the impact of their program on students. Specifically, he works with programs to develop clear and measurable goals and objectives, to identify appropriate assessment instruments, to plan well-designed studies, to assist in statistically analyzing the data, to help programs communicate results, and, most importantly, to help programs apply results to inform meaningful change. He received his master’s and educational specialist degrees in school psychology and is currently pursuing a doctorate in Assessment and Measurement at JMU.

Tom Waterbury serves as an assessment consultant to academic programs at James Madison University. In this role, he works to assist programs at JMU seeking to measure the impact of their programs on students. Specifically, he works with programs to develop clear and measurable goals and objectives, to identify appropriate assessment instruments, to plan well-designed studies, to assist in statistically analyzing the data, to help programs communicate results, and, most importantly, to help programs apply results to inform meaningful change.  He received his master’s degree in Social Studies Education from the University of Virginia and is currently pursuing a doctorate in Assessment and Measurement at JMU.

Dr. Allison Ames is an assistant assessment specialist in the Center for Assessment and Research Studies and an Assistant Professor at James Madison University in the department of Graduate Psychology. Dr. Ames is also coordinator of the Quantitative Psychology Concentration within the Psychological Sciences Master’s program and the faculty supervisor of academic degree program assessment. Dr. Ames’ research focuses on improving student learning using assessment methods and results, as well as measurement methods in higher education.

