2018 Pre-Conference Workshops

Monday, June 4, 2018, before the opening conference plenary. Cost: $120 for full-day workshops; $60 for half-day workshops.

Full Day (9:00 am - 4:00 pm, lunch on your own)
1. Assessment Fundamentals: Optimizing Teaching and Learning Through Learning Outcomes Assessment
by Teresa Flateby, Allen Dupont, George G. Klemic

2. Evidence-based Assessment: Developing and Validating Instruments for Criterion-Referenced Measures
by Myrah R. Stockdale, Julie Davis Turner

Morning (9:00 am - 12:00 pm)
3. Assessment in Competency Based Education: An Introduction
by Laura M. Williams, Sean Gyll, Maren Toone

4. The Power of Formative Assessment, Feedback, and the Science of Learning
by Ronald Carriveau

5. Leadership of Assessment: Tips for Applying Leadership Theory to Assessment Leadership
by Matthew Fuller

6. Meta-Assessment: Building an Impactful Process for Your Campus
by Chris Coleman, Katie Boyd

Afternoon (1:00 pm - 4:00 pm)
7. Visualizing Assessment Data through Interactive Dashboards: Creating a Basic Report
by Frederick Burrack, Chris Urban

8. Tuning as Assessment: Leveraging Faculty Buy-in, Improving Student Learning
by Nancy Quam-Wickham, Norman Jones

9. Creating a Faculty-Centric Approach to Successful Assessment and Accreditation
by Penny Bamford, Valerie Landau, Christine Broz

10. An Agatha Christie Approach to Solving the Mystery of Assessment Practice
by Cynthia Howell, Barbara J. Keener

11. Improving Improvement: Engaging Students in the Assessment Process
by Nick Curtis, Julie McDevitt, Robin Anderson



1. Assessment Fundamentals: Optimizing Teaching and Learning Through Learning Outcomes Assessment
Teresa Flateby, Allen Dupont, George G. Klemic
9:00 AM – 4:00 PM, $120 (lunch on your own, 12:00-1:00)

Do you want to build a foundation for using assessment to optimize student learning or improve academic programs? Have you been asked to participate in assessing student learning outcomes (SLOs) in your department with no idea what to do? Have you been assessing SLOs under the direction of others, who are now rotating out of the assessment mix and leaving you to teach others about assessment?

This is the workshop for you.

We will increase your assessment comfort level and knowledge through discussion and activities designed to do the following: identify the purpose(s) of SLO assessment; explain types of assessment; present a typical assessment cycle to highlight the connection between teaching and learning; craft SLO statements; design and use a curriculum map; review types of direct and indirect evidence of learning; introduce types of tools for assessment; and interpret results and identify possible actions to improve teaching and learning.

Participant Learning Outcomes. By the end of the workshop, participants will be able to

a. state the purpose and describe the utility of student learning outcomes (SLO) assessment;
b. describe a generic SLO assessment cycle and apply that generic description to their assessment situation;
c. explain or translate SLO assessment jargon into disciplinary or non-disciplinary specific language;
d. select appropriate types of learning evidence and tools to evaluate learning;
e. interpret assessment results and list 1-2 possible actions to use the results for learning improvement;
f. synthesize appropriate and available external and internal information about SLO assessment into a coherent and practical plan that can succeed in their venue.

Level: Beginner

Target Audience. We assume the audience members

a. are faculty, administrators, or staff from an academic program or in academic affairs [note: our emphasis is on academic SLO assessment; while the concepts are similar to outcomes assessment in student affairs and co-curricular programs, all of our examples will be from academic programs].
b. have had limited experience with learning outcomes assessment and likely little experience leading an assessment project.
c. have been asked to play a key role or lead role in assessment for a program, college, or institution.
d. want to use assessment as an integral part of teaching and learning.
e. want to gain fundamental knowledge about conducting learning outcomes assessment in higher education for the purpose of student learning improvement.
f. want to be better prepared to attend the AALHE conference in terms of having fundamental knowledge about assessment’s purpose and process.

Facilitators
Terri Flateby is the Associate Vice President of Institutional Effectiveness at Georgia Southern University, a position she assumed after beginning as the Director of Academic Assessment. Prior to coming to Georgia Southern, she was the Director of Academic Assessment at the University of South Florida. Viewing her role as facilitating the understanding that assessment is a faculty-driven process, one focused on the interrelationships of teaching and learning and on the use of evidence to guide curricular and instructional changes, she champions efforts to ensure faculty are recognized and rewarded for engaging in assessment. Terri holds a Ph.D. in Educational Measurement, Research, and Evaluation from the University of South Florida and has presented and consulted on academic, administrative, and student affairs assessment for over twenty years at numerous institutions, both nationally and internationally. She has published on assessment-related topics, including the assessment of writing and thinking and the contributions of assessment to student learning. She serves on the Board of Directors of the Association for the Assessment of Learning in Higher Education.

Allen Dupont, PhD, is the Director of Institutional Effectiveness at the University of Tennessee Health Science Center. Prior to joining UTHSC, Allen was Director of Assessment for Undergraduate Academic Programs at North Carolina State University. He began his career as a faculty member at Young Harris College. Allen holds a PhD in Economics from Louisiana State University. He has presented and consulted on assessment, institutional effectiveness, and accreditation at numerous conferences and institutions, and he regularly serves as a reviewer and committee chair for SACSCOC. He holds a USPA B license (B-49616).


George G. Klemic, D.B.A., is a professor of Business Administration and Department Chair in the College of Business at Lewis University, in the Chicago suburbs. He has been at Lewis for ten years, having previously served as Associate Professor, Department Chair, and Dean of the School of Business and Management at Notre Dame de Namur University, in Belmont, California. George has served on assessment committees at both universities, at the department, college, and institutional levels. Part of that service included developing and delivering “boot camp”-like training on SLO assessment, coaching and mentoring new assessment recruits, and leading assessment projects. He has presented on SLO assessment at the IUPUI Assessment Institute, at AALHE, at meetings of WASC and the North Central Higher Learning Commission, and at a conference of the Association for Institutional Research. His current SLO interests involve the concurrent and coordinated use of SLO assessment and assessment of co-curricular activities to improve SLOs.


2. Evidence-based Assessment: Developing and Validating Instruments for Criterion-Referenced Measures
Myrah R. Stockdale, Julie Davis Turner
9:00 AM – 4:00 PM, $120 (lunch on your own, 12:00-1:00)

Assessment professionals consistently request more approachable, methodologically grounded professional development opportunities. Given this need, we believe that psychometric and evaluation specialists should partner to translate and demystify these fields and promote effective assessment. Our interactive, full-day session will employ assessment models and approaches that bridge the disciplines of psychometrics and evaluation. Psychometric literature is often mathematically dense and theoretically bound, while evaluation can be viewed as qualitative or judgment-focused. Fortunately, these fields complement each other when properly reassembled in applied assessment. This workshop will begin by discussing approaches to “standard setting” (e.g., cut scores for proficiency), then consider programmatic limitations, and finally investigate specific methodologies. The afternoon methodology discussion will illuminate the pros and cons of statistical, measurement, and psychometric models, applying mixed methods to a variety of evaluation projects. Participants are encouraged to bring institutional projects with methodological questions; alternatively, sample cases will be provided. Workshop outcomes include evaluation of data sets, standard setting, and initial validation approaches.

Participant Learning Outcomes. By the end of the workshop, participants will

a. Exhibit increased level of competence and confidence with methods and approaches to standard setting, criterion-based assessment, and decision-making, thereby becoming more versatile evaluators;
b. Identify areas for improvement within their own institutional environment in a safe, low-stakes environment, either by developing their own instrument or by practicing on an existing instrument with accompanying historical data and examples;
c. Walk away with actionable steps to justify validation decisions and with enhanced confidence to find resources to use this new methodology.

Level: Intermediate

Target Audience.

a. Assessment/evaluation specialists who administer or lead assessment at the program, college, or university level. These individuals may work with content experts in assessing student learning and may seek novel ways to conduct formative student assessment outside their purview. In addition, they may be looking to have deeper conversations with others who are experts or who are like-minded.
b. Faculty who are part of their program review team or promotion-and-tenure process. These individuals may want to learn how other institutions focus on student learning outcomes or may want to improve their model. They may also serve on the program review team and want to provide input on the assessment and evaluation of students and/or faculty. As faculty, they will want to know the resources and processes we used to develop the competencies, but they may or may not have much assessment experience outside of assessing student learning in their own classes.
c. Administrators who oversee or participate on program, college or university review teams. They will have an understanding of assessment as it pertains to student learning outcomes, but they may be struggling with how to specifically go about authentic and formative assessment.

Facilitator
As the Director of Assessment at Campbell University’s College of Pharmacy and Health Sciences (CPHS), Myrah R. Stockdale, MS (ABD), concentrates on developing and innovating pragmatic assessment processes that draw on psychometric, measurement, and evaluation methods. Myrah oversees 11 degree programs, reporting to 8 specialized accreditors. Psychometrics and program evaluation are Myrah’s passions; she is a PhD candidate in Educational Research Methods at the University of North Carolina at Greensboro, with an anticipated graduation date of August 2018. Since arriving at CPHS, she has redesigned the college’s assessment plan; developed and implemented an interdisciplinary integrative co-curricular model; trained faculty, students, and staff on methodological, teaching and learning, and cognitive processes; and built a culture of assessment and communication among the faculty.

As Associate Dean at the Van Andel Institute Graduate School (VAIGS), Julie Davis Turner, PhD, focuses on curriculum innovation and educational development, as well as program evaluation and accreditation. Curricular innovation, continuous program review, and full-circle feedback combine to fuel improvement in the VAIGS PhD program by identifying important “next steps.” Student success is a primary focus of her work at VAIGS, complementing curricular design and educational development among faculty. With fewer than 30 students, VAIGS delivers individualized instruction and professional development for each doctoral candidate, with the mission of developing leaders in biomedical research. Julie’s experience spans 20 years of teaching in the public and private sectors, designing and evaluating novel STEM curricula, directing practice-based assessment initiatives, and developing programs that bring high-content learning to a low-stress environment. Since arriving at VAIGS, she has designed, executed, and evaluated peer mentoring programs among faculty, graduate students, interns, and postdoctoral fellows to facilitate their scientific growth.


3. Assessment in Competency Based Education: An Introduction
Laura M. Williams, Sean Gyll, Maren Toone
9:00 AM – 12:00 PM, $60

Competency-based education (CBE) has become a buzzword in higher education, with more and more institutions and programs considering and adopting a competency-based approach. However, tried-and-true methods for academic assessment may not transfer as easily as they appear to on the surface. This session will focus on defining and measuring outcomes in competency-based education. We will begin with a discussion of what CBE is and how to write meaningful competencies. Next, we will address the design and development considerations of assessment for competency-based learning. Finally, we will provide an overview of quality measures for competency-based assessment, including a discussion of psychometric issues.

Participant Learning Outcomes. By the end of the workshop, participants will

a. Define competency-based education and its appropriate applications;
b. Be able to apply best practices in competency writing;
c. Identify appropriate assessment methods based on a set of competencies;
d. Determine the psychometric veracity of a competency-based assessment.

Level: Beginner

Target Audience. We assume audience members

a. Currently have or are considering a competency-based approach to their courses and/or programs
b. Are current assessment practitioners in the higher education space;
c. Have a working knowledge of direct assessment strategies;
d. May not be comfortable or familiar with psychometric principles.

Facilitators
Laura M. Williams is the Manager of Assessment Development for General Education at Western Governors University. In that role, she supervises a team of Assessment Program Managers and Assessment Developers and provides direction and strategy for assessing General Education and ensuring high-quality assessments. Prior to WGU, Laura worked in various areas of higher education, including student affairs and assessment. She earned a PhD in Assessment and Measurement from James Madison University, where her dissertation focused on examinee motivation in low-stakes testing environments and value-added models of measurement in higher education.

Sean Gyll is Senior Manager of Psychometrics at Western Governors University. His primary focus is on helping organizations to establish strategic assessment-related standards and practices, with an emphasis on the assessment value chain and its impact on learning outcomes. Sean received his Ph.D. in Educational Psychology – Cognition and Student Learning Program – from the University of Utah where he conducted research in the acquisition of cognitive skills, undetected errors in skilled cognitive performance, and priming processes in memory.

Maren Toone is the Manager of Assessment Development for the College of Health. She joined WGU in 2014 and has supported Academic Programs as an assessment developer and the manager of assessment improvement. In her current role, Maren manages the design, development, and support of WGU assessment programs. In her time at WGU, Maren has developed performance assessment and rubric strategies and implemented data-driven quality improvement initiatives for WGU’s portfolio of assessments. Prior to WGU, Maren was a project manager for an education delivery system and the Director of Student Support Services for an online vocational training institution.


4. The Power of Formative Assessment, Feedback, and the Science of Learning
Ronald Carriveau
9:00 AM – 12:00 PM, $60

Formative assessment using quizzes, tests, and exams has been shown to be one of the most powerful strategies for ensuring that students do well on summative assessments and that retention rates increase. This interactive workshop clarifies the concepts of formative assessment, summative assessment, and competency, and shows how outcome-based assessment that incorporates the science of learning serves as both a learning measure and a strategy for teaching and learning. A coding system that links test item responses to outcome statements and replaces grades with outcome attainment values will be demonstrated, including how to map course-level outcome attainment values to program and institutional goals.

Participants will be contacted before the workshop and encouraged to submit assignments via email.

Participant Learning Outcomes. By the end of the workshop, participants will

a. Understand the concepts of formative assessment, summative assessment, feedback and competency.
b. Know how to use assessment as a strategy for teaching and learning to ensure student success.
c. Make connections to the brain and learning in terms of formative assessment.
d. Know how to create a three-level model for developing and measuring outcome statements and reporting outcome attainment at all levels.
e. Be familiar with technology that supports formative assessment and a three-level model.

Level: Intermediate.

Target Audience. We assume audience members

a. Are faculty, specialists, and administrators who value the role of assessment in higher education.
b. Are faculty, specialists, and administrators who want to increase the quality and validity of their assessment process and the information gathered from it.
c. Are faculty, specialists, and administrators who have an understanding of assessment as it pertains to student learning outcomes but are struggling with validity and reporting issues.
d. Are assessment and evaluation specialists who administer or lead assessment at the program, college, or university level(s) and are open to innovation.

Facilitators
Ronald Carriveau earned a Ph.D. in Educational Psychology, Measurement and Methodology, from the University of Arizona. He has extensive experience in learning and assessment as a consultant, test publisher, director of testing and evaluation for a large metropolitan school district, state test director and deputy associate superintendent for standards and assessment at the Arizona Department of Education, and assessment development manager for Harcourt Assessment, Inc. He taught at the elementary, middle school, and university levels and held the university position of Associate Professor in Educational Psychology. Currently, Dr. Carriveau is a Teaching Excellence Consultant and the Outcomes and Assessment Specialist for the University of North Texas (UNT) Center for Learning Enhancement, Assessment, and Redesign (CLEAR), and for five years he was the assistant director for the university’s quality enhancement plan (QEP) for accreditation. He currently serves on the advisory board for the university’s new QEP program. He has consulted nationally and internationally, presented at national and international conferences (including ten workshops at the IUPUI Assessment Institute), and given keynote talks. He is a chapter contributor, the co-author of Next Generation Course Redesign (2010, Peter Lang), and the author of Connecting the Dots: Developing Student Learning Outcomes and Outcome Based Assessments (2nd Edition, 2016, Stylus Publishing). He is also the author of a reading assessment program and an outcomes and assessment tools website.


5. Leadership of Assessment: Tips for Applying Leadership Theory to Assessment Leadership
Matthew Fuller
9:00 AM – 12:00 PM, $60

Assessment officers are important leaders at institutions of higher education. Yet many assessment leaders say they have never been introduced to leadership theories that can guide how they manage and lead assessment units or institutions. Leadership is contextual, cultural, and at times challenging. This session will serve participants by introducing a number of leadership theories that can be used to respond to many of the typical challenges assessment leaders face. Participants will be asked to reflect on the contexts of their institution, articulate desired goals and outcomes for their institution and its culture of assessment, and implement responses to case studies in assessment leadership. Participants will find the session fun, energetic, restorative, and directly applicable to their roles as assessment leaders at their institutions.

Participant Learning Outcomes. By the end of the workshop, participants will be able to

a. Accurately articulate the content and tenets of at least two leadership theories that could apply to their assessment practice.
b. Articulate and reflect on the usefulness of employing leadership theory in their practice of assessment.
c. Apply leadership theory to the solution of common and complex assessment challenges.

Level: Intermediate

Target Audience. We assume the audience

a. Is comprised primarily of administrators responsible for leading assessment functions on their campus.
b. Has reflected on how they lead their organizations.
c. Has little knowledge about leadership theories that could apply to their practice.
d. Has experienced challenges from faculty and staff on their campus while leading institutional assessment efforts and is looking for guidance on how to lead through these challenges.

Facilitators
Matthew B. Fuller, Ph.D., is Associate Professor of Higher Education Leadership and Director of the SHSU Doctoral Program in Higher Education Leadership. His scholarly interests include higher education assessment, legal issues, history, rural student engagement, and financial aid policy. Dr. Fuller is Principal Investigator for the Survey of Assessment Culture, an international, annual survey of faculty, administrators, and student affairs staff members’ perspectives on institutional cultures of assessment. Dr. Fuller has served as a faculty member or administrator in residence life/student affairs, assessment, accreditation, faculty governance, academic colleges, and Provost offices at Texas A&M University, The University of Alaska-Southeast, Illinois State University, and Sam Houston State University. His recent scholarly pursuits focus on the development of a holistic approach to assessment and institutional research as it relates to leadership, legal matters, finance, and governance in higher education.


6. Meta-Assessment: Building an Impactful Process for Your Campus
Chris Coleman, Katie Boyd
9:00 AM – 12:00 PM, $60

Meta-assessment, or the process of evaluating the quality of program assessment, can be surprisingly beneficial (Fulcher & Good, 2013; Ory, 1993). In fact, it has recently been recognized by CHEA as an institutional best practice (James Madison University, 2015 Award). The approach allows an institution to 1) set clear expectations for assessment reporting, 2) monitor the pulse of assessment activities campus-wide, 3) identify areas for intervention, and 4) demonstrate—to internal and external audiences—improvement in assessment quality over time. However, building a meta-assessment process is challenging in that it involves many potential components and design decisions (Fulcher, Coleman, & Sundre, 2016). In this session, we will share perspectives on key decision factors from two “rival” institutions (Auburn and Alabama) that are three years into meta-assessment implementation. Workshop participants will be invited to create action plans for introducing meta-assessment on their campuses.

Participant Learning Outcomes. By the end of the workshop, participants will

a. Articulate the nature and benefits of a meta-assessment process.
b. Analyze key design factors in the decision-making process, weighing pros and cons as well as situational considerations for their own campus.
c. Draft a meta-assessment action plan and elevator pitch to bring back to their home institution.

Level: Intermediate

Target Audience. We assume each participant

a. Has a role in coordinating, directing, or reporting on assessment activities above the program level (e.g., institution-wide; or for a specific college/division within the institution);
b. Would like to see more systematic, impactful, and demonstrable assessment on their campus; and
c. Would like to see a higher level of engagement in assessment on their campus (a focus on improved student learning rather than mere bureaucratic reporting).

Facilitator
Dr. Chris Coleman is the Associate Director of Institutional Effectiveness at the University of Alabama (UA) and the lead architect of UA’s meta-assessment process, which fosters high-quality assessment across both educational programs and support/administrative units. He completed his doctorate in Assessment and Measurement at James Madison University, where his dissertation focused on the psychometric consequences of using negative wording and keying in survey items. A trained SACSCOC IE Reviewer, Dr. Coleman previously designed and managed a variety of assessment and evaluation projects at Babson College and the University of Georgia. Dr. Coleman’s publications include journal articles and book chapters on the assessment of student learning outcomes, the validity of standardized psychoeducational tests, and the diagnosis of reading and writing disorders in the adult population. He is also a reviewer for several journals, including Research & Practice in Assessment. In a previous century, he taught English in the US, the Czech Republic, and Greece.

Dr. Katie Boyd is the Associate Director of Academic Assessment at Auburn University and leads programming to support the assessment activities of all academic degree programs at Auburn. Her role focuses largely on improving the quality of programmatic assessment by implementing meta-assessment across the approximately 250 programs offered. Collaborative in nature, Auburn’s meta-assessment provides high-quality, consistent feedback to programs. She graduated from Virginia Tech with a PhD in Industrial/Organizational Psychology, where her research focused on measuring implicit leadership theories and the impact of subordinate relationships on leader behaviors. While in Virginia, she helped prepare academic leaders for their new leadership roles and studied the dynamic relationships between department chairs and faculty. Before becoming an Assessment Specialist at Auburn University, Katie worked as the Director of Analysis with Acelero, an early childhood education provider with a unique outcomes-focused approach to providing educational services to young children. Her research and career have largely focused on improving higher education operations, from academic leadership to her current focus on programmatic student learning.


7. Visualizing Assessment Data through Interactive Dashboards: Creating a Basic Report
Frederick Burrack, Chris Urban
1:00 – 4:00 PM, $60

This workshop will walk participants through transforming assessment scores into interactive dashboards that enable faculty, staff, and administrators to engage more deeply with direct assessment results. Data preparation, model creation, and report design will be demonstrated using provided sample data files. The processes learned will be transferable to a variety of visualization technology platforms. To fully engage with the session, participants will need a Windows laptop with the free Power BI Desktop client installed.

Participant Learning Outcomes. By the end of the workshop, participants will

a. Create interactive reports and dashboards from direct and indirect assessment data.
b. Learn several techniques for preparing assessment-related data for visualization.
c. Understand how data tools can be used to automate data extraction, transformation, analysis and reporting.
d. Experience the use of key functions of visualization software, which can be adapted to many uses within and outside assessment and to multiple technology platforms.

Level: Intermediate

Target Audience. We assume each participant

a. Has some experience with data manipulation and analysis in Excel or another similar tool.
b. Is in a position that interacts with direct or indirect assessment data on their campus.
c. Wants to reduce the data collection and reporting burden placed on faculty.
d. Has a Windows laptop with Excel installed that can run Power BI.

Facilitators
Frederick Burrack is Director of Assessment, Professor of Music Education, Distinguished Graduate Faculty, and Graduate Chair for Music at Kansas State University. He joined the Kansas State music faculty as a music education specialist in Fall 2005. Dr. Burrack taught instrumental music education at Ball State University from 2002 to 2005 and instrumental music in the Carroll Community School District in Carroll, Iowa, from 1982 to 2002. He currently oversees the national pilot of Model Cornerstone Assessments aligned with the new National Standards for Music Education. His research interests include student learning processes and assessment of learning, cross-disciplinary instruction, and instructional thought development in music teachers. He guides professional development seminars across the United States, has numerous publications in music education and assessment journals, and has presented many conference sessions nationally and internationally.

Chris Urban joined K-State’s Office of Assessment as Assistant Director in Spring 2013. He has used Power BI to facilitate assessment work since 2015. He served the two years prior in national service as an AmeriCorps*VISTA volunteer at Metropolitan Community College in Omaha, Nebraska. During his time as a VISTA, Chris worked to improve educational outcomes of military and veteran students by establishing a one-stop student services office. Chris has a B.S. in Economics and an M.A. in English-Cultural Studies, both from K-State. He is currently working toward a Ph.D. in Student Affairs in Higher Education.


8. Tuning as Assessment: Leveraging Faculty Buy-in, Improving Student Learning
Nancy Quam-Wickham, Norman Jones
1:00 – 4:00 PM, $60

This interactive, collaborative workshop will introduce the Tuning process to participants. Working in small groups, participants will collectively define an “intractable curricular problem” at an institution. Such problems might include curricular bloat, general education revision, program reorganization, and retention and graduation issues (including improving assignments). Participants will then use the Tuning process to develop solutions to these problems. The workshop will provide participants with knowledge of the elements of the process, its disciplinary and regional adherents in American higher education (including the state of Utah), and how to introduce Tuning on their own campuses. Because Tuning has been endorsed by disciplinary organizations, it is an ideal process for promoting faculty discussion of and participation in assessment and initiatives to improve student learning. This workshop utilizes some of the insights from design thinking methods to facilitate resolutions to difficult problems where faculty input and engagement is necessary to bring about meaningful institutional changes. Tuning also promotes the idea that assessment is a dynamic, relational practice leading to enhanced educational opportunities for students, not only the practice of documenting student learning.

Participant Learning Outcomes. By the end of the workshop, participants will learn how to

a. Describe the Tuning process and its value in faculty engagement;
b. Apply elements of the Tuning process to resolve institutional problems; and
c. Identify and explain how employing the Tuning process can improve student learning.

Level: Intermediate

Target Audience.

a. Assessment professionals looking to increase faculty engagement on their campuses but who have yet to identify a meaningful process for doing so
b. Faculty interested in assessment (or curricular reform) who may need ways to communicate the value of assessment to their colleagues, and who could benefit from introduction to the process
c. Administrators looking to bridge the divide between institutional effectiveness offices and both teaching faculty and faculty assessment committees

Facilitators
Norman L. Jones is professor of history at Utah State University. Author of eleven books and numerous articles on the Elizabethan era, he is a prize-winning teacher. He writes and consults on higher education issues, especially on General Education, the Degree Qualification Profile, and Tuning. A Senior Fellow of the AAC&U, he serves on many boards, including the Assessment Advisory Board of the National Institute for Learning Outcomes Assessment and the DQP/Tuning Advisory Committee for the Lumina Foundation. He is Chair of the College Board’s Advanced Placement Higher Education Advisory Committee. For many years he was Director of General Education and Curricular Integration at Utah State University. Before that, he chaired the History Department for fourteen years. He chairs the Utah Regents’ General Education Task Force, the body that oversees transfer, articulation, and assessment in the Utah System of Higher Education. In that role, he has organized and led the “What is an Educated Person?” Conference for twenty years, promoting discussions among Utah’s faculty and administrations about general education pedagogy and assessment. In 2013 he was a Fulbright Fellow in Hong Kong, aiding the transition from three-year to four-year curricula as the Hong Kong universities added general education to their curricula. He continues to teach history.

Nancy Quam-Wickham is a Senior Assessment Specialist at Washington State University, and Professor of History and Chair (Emerita, 2005-2015) at California State University, Long Beach. She has been active in the Tuning Initiative of the American Historical Association, a participant in the Measuring College Learning project of the SSRC, and is a NILOA/DQP Coach. Author of numerous articles, she is interested in working with faculty to improve student learning.

Back to top

9. Creating a Faculty-Centric Approach to Successful Assessment and Accreditation
Penny Bamford, Valerie Landau, Christine Broz
1:00 -4:00 PM, $60

Learn how to use five key foundations to create a faculty-centric culture of continuous improvement that sparks faculty curiosity to assess and improve. Learn how a few simple but effective incentives can motivate faculty to participate in the Scholarship of Teaching and Learning, experiment with new pedagogy, and engage actively in assessment. In this hands-on workshop you will experiment with the framework, tools, and policies that transformed our culture at “mach speed,” according to the WASC accreditation site visiting team.

Participant Learning Outcomes. By the end of the workshop, participants will

a. Explore innovative policies and tools to engage faculty in the practice of continuous improvement;
b. Discuss foundational faculty-centric frameworks;
c. Gain hands-on experience using the tools and applying them to their own institution; and
d. Complete a first draft of a template and toolkit to create a plan for cultural change within their organization.

Level: Intermediate

Target Audience.

a. Faculty and assessors who are passionate about student learning and improving the culture of teaching and learning in their institutions.
b. Those open to ideas and strategies for creating different models, and those seeking new skills, attitudes, and knowledge for moving forward with tested models.

Facilitators
Valerie Landau is Director of Assessment at Samuel Merritt University, where she designs tools and methodologies for continuous improvement in teaching and learning. From 2002 to 2008 she worked closely with Douglas Engelbart running an international Educational Networked Improvement Community, and she is co-author of the book “The Engelbart Hypothesis: Dialogs with Douglas Engelbart.” Based on her work with Engelbart, she created a learning analytics platform for assessment of educational effectiveness. The result: an engaged faculty committed to continuous improvement in teaching, learning, and assessment, and organizational change at “Mach speed,” according to accreditors. She spent her early years in Cuba and Nicaragua and was trained by the Brazilian educator Paulo Freire to be Regional Director of the National Literacy Campaign in Nicaragua. She is co-authoring a research project on assessment of medical education in Cuba and has led multiple high-level research delegations to Cuba. She designed award-winning educational games and videos, and in 2001 wrote the seminal book in online learning, “Developing an Effective Online Course.” She received an MA in Education and studied Human Development and Psychology with an emphasis in Technology in Education at the Harvard University Graduate School of Education.

Penny Bamford, PhD is Director of the Teaching and Learning Excellence Group at Samuel Merritt University.

Christine Broz is Senior Instructional Designer at Samuel Merritt University.

Back to top

10. An Agatha Christie Approach to Solving the Mystery of Assessment Practice
Cynthia Howell, Barbara J. Keener
1:00 - 4:00 PM, $60

This session will provide participants with an Agatha Christie analytical approach, as in Murder on the Orient Express, to solving the mystery of two of the most challenging stages of the assessment process: 1) designing specific and realistic assessment outcomes; 2) applying assessment results for continuous improvement, including revisions and enhancements to the assessment process itself. Participants will engage in reflection, analysis, and discussion of cases of assessment practices in higher education and will examine and apply actionable strategies for their own professional practice. Presenters will facilitate roundtable teams’ deliberation and lead case debriefing and analysis.

Participant Learning Outcomes. By the end of the workshop, participants will

a. Diagnose common problems with the development of learning outcome statements;
b. Identify effective language and structure for designing learning outcome statements;
c. Design effectively stated learning outcomes;
d. Design effective methods for interpreting assessment data results; and
e. Formulate strategies for utilizing assessment results for continuous improvement.

Level: Beginner

Target Audience.

a. Those who have some experience with assessment but would benefit from clarification of the steps;
b. Those in positions in higher education that require them to plan for and conduct assessment effectively; and
c. Those interested in strategies that will enable them to simplify and expedite the assessment process.

Facilitators
Cynthia Howell serves as core faculty for Leadership for Higher Education in the Graduate School of Education at Capella University, an inaugural recipient of the national Excellence in Assessment Designation. She teaches courses on assessment, education program evaluation, advising and retention, and others. She also serves as a subject matter expert (SME) for the development of courses in the program and serves on dissertation committees as chair or member. Prior to joining Capella in 2004, she taught at Black Hills State University and served as Dean of Academics at a technical college and a private career college. She holds a doctoral degree in Higher Education Leadership from Northern Arizona University. Her publications and presentations have focused on learning outcomes assessment, retention and resilience in adult learners, and effective teaching, mentoring, and learning in higher education. Her primary professional passion now is to facilitate the development of a culture of assessment in higher education.

Barbara J. Keener is an associate with the Institute of Higher Education, College of Education, University of Florida. Prior to this position, she served in faculty and administrative positions with universities, private liberal arts colleges, community/state colleges, higher education alliances, and organizations. She also conducts research and guides projects addressing higher education leadership, including assessment, across institutional departments and programs. She has a variety of studies and reports featured in higher education journals and related publications. She has experience as an outside evaluator for accreditation by the Southern Association of Colleges and Schools. In her previous experience as core faculty for Capella University, she was a recipient of ten consecutive Teaching and Learning Excellence Awards.

Dr. Keener is a frequent presenter at academic conferences and seminars, including the Council for Student Retention Data Exchange’s National Conference on Student Retention. She holds a doctoral degree in Higher Education Leadership from the University of Florida, where she served as a W.K. Kellogg Fellow for Community College Leadership. Previous professional experience includes workshops at the Noel-Levitz National Conference on Student Recruitment, Marketing, and Retention and the National Council for Student Development.

Back to top

11. Improving Improvement: Engaging Students in the Assessment Process
Nick Curtis, Julie McDevitt, Robin Anderson
1:00 - 4:00 PM, $60

Too often, institutions invest in the assessment of student learning yet see too little return in terms of learning improvement. While improvement of learning has become a greater focus of the assessment process, key stakeholders such as students do not have a seat at the table. The purpose of this workshop is to assist participants in engaging students in the assessment process to enhance learning improvement efforts. Participants will learn how to recruit students and engage them in a way that brings new insights into students’ educational experience and elevates their voice in the learning improvement process. Participants will develop strategies for partnering with students to reduce resistance to assessment among other key stakeholders (e.g., faculty and administrators).

Participant Learning Outcomes. By the end of the workshop, participants will

a. Draw on basic theory related to student engagement and its impact on both the student and the learning environment;
b. Discuss current examples of how students are being engaged at various institutions;
c. Identify the potential for student engagement at each step of the assessment cycle; and
d. Create opportunities for and recruit students into assessment partnerships at their own institution.

Level: Intermediate

Target Audience.

a. Assessment specialists who have a working knowledge of the assessment cycle and who want to increase the use of assessment results at their institution for student learning improvement.
b. Faculty who participate in the assessment process at their institution and who want to work with students on improving student learning.
c. Administrators and other higher education stakeholders who seek to improve student learning and wish to engage with students in order to ensure the student perspective is included in improvement initiatives.

Facilitators
Nick Curtis serves as the senior learning outcomes consultant in the Center for Assessment and Research Studies at James Madison University. In this role, he works with programs, both at JMU and other institutions, seeking to measure the impact of their educational programming on students. Specifically, he works with programs to develop clear and measurable learning objectives, to develop coherent program theories, to identify appropriate assessment instruments, to plan well-designed studies, to assist in statistically analyzing the data, to help programs communicate results, and, most importantly, to help programs apply results to inform meaningful change. His research areas include student partnership in higher education assessment, validity in higher education assessment, and student learning improvement/innovation in higher education.

Julie McDevitt coordinates academic program and general education assessment at Palo Alto College in San Antonio, TX, as the Coordinator of Measurement & Evaluation. Ms. McDevitt worked for 12 years at the secondary school level as a Spanish and ESL Teacher, Instructional Specialist, and Testing Coordinator, and spent one summer training new teachers for Teach For America. For five years she coordinated a program at The University of Texas at Austin for immigrant high school students from Mexico. Her professional interests include best practices in higher education assessment, faculty engagement with assessment, and including students in assessment discussions. Ms. McDevitt is currently a student in JMU’s Higher Education Specialist Online Graduate Certificate program.

Dr. Robin Anderson is a Professor and Academic Unit Head for the Department of Graduate Psychology at James Madison University, where she teaches Assessment Consultation, Research Methods, and Assessment and Public Policy. She also serves as Program Director for JMU’s Higher Education Specialist Online Graduate Certificate program. Her research areas include validity in higher education assessment and engineering educational research. In addition, Dr. Anderson is currently the Senior Associate Editor for the journal Research and Practice in Assessment (RPA). Previously, Dr. Anderson served as the Associate Director of JMU’s Center for Assessment and Research Studies and spent six years as an assessment professional within the Virginia Community College System.

Back to top
