
Transformative Learning Needed for Higher Education Assessment

August 30, 2017

David Kirk Dirlam[1]

            Learning can be either incremental or transformative. The former has been studied for a century and a half. It occurs gradually through practice and for the most part obeys “laws of learning” established in tens of thousands of articles. It has led to assessments based on rating scales, whether numeric or built from adjectives that form SWELL rubrics (Sequences Which Expand Little by Little). Transformative learning, on the other hand, was first carefully described by Jack Mezirow only a generation ago. Based on a 500-session study, some colleagues and I (see Dirlam, 2017) found that Mezirow’s (1991) 10 phases fit into four time periods: Disorientation, Examination, Enabling, and Performing. We called these the DEEP modes of commitment. The resulting transformation involves a deep shift in perspective leading to a more open, permeable, complex, sustainable, and better-justified meaning-perspective (cf. Taylor & Cranton, 2012).

Transformative Learning for Individuals

            To understand how transformative learning relates to higher education assessment, in general and AALHE in particular, we must start with how transformative learning in individuals relates to developmental rubrics. Then we can consider how it works in development beyond the person.

The Theory Behind Developmental Rubrics

            The basic idea of developmental rubrics is that there is a transformation between each of four modes of practice: beginning, exploring, sustaining, and inspiring. First, beginning modes are transformed into exploring modes. Beginners take just a few minutes to try an activity. To explore, learners need not just more of what they did but a whole new mode of practice. When children begin to draw their first person, they scribble; exploring drawings use stick people. Beginning collaborators are reticent; explorers assert themselves. Beginning writers tell about themselves; explorers correspond with a friend.

            After several months of exploring, some students begin to experiment with yet another whole new mode. This time, the goal is to devote a few years to getting good enough at the mode of practice to sustain it, especially in a professional or work context. Drawings look like folk art. Collaborators take on roles based on each other’s skills. Writers address small groups of known people.

            A decade later a few people work to make yet a third transformation. Now, the inspiring goal is to discover, innovate, or establish new interpretations that are broadly copied. Such changes are transformative rather than incremental.

            Transformative changes are due to three fundamental characteristics of modes of practice: growth rate, competitive strength, and resource level. Beginning practices do not grow and do not compete with other practices. Exploring modes of practice grow very fast, but also do not compete with more advanced modes. If learners fail to acquire the advanced modes, their exploring modes consume so many resources that they may abandon the practice altogether. Sustaining practices grow a little slower but are more competitive. Inspiring practices take a long time to establish, but once established, they are the most competitive of all. Once a person starts making discoveries, innovations, or new interpretations, it is so exciting that they do not want to revert even to sustaining work. The salient modes for higher education are beginning (the first day of an introductory course), exploring (lower-division or associate’s degree courses), sustaining (upper-level courses), and inspiring (graduate courses). This theory is developed in detail in Dirlam (2017). For developmental rubrics, each dimension has four modes and each transformation between modes requires the DEEP modes of commitment.
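These competitive dynamics can be illustrated with a toy simulation. The sketch below is a hypothetical construction, not the model published in Dirlam (2017): it pits the three growing modes against one another with replicator dynamics, where fitness depends on growth rate while resources are abundant and on competitive strength as they become scarce. All parameter values are invented for illustration.

```python
import math

# Hypothetical sketch, NOT the published model: three modes of practice
# compete under replicator dynamics. Early on, resources are plentiful, so
# growth rate dominates fitness; as the field matures, competitive strength
# dominates. All numbers below are illustrative assumptions.

modes = {
    # name: (growth_rate, competitive_strength)
    "exploring":  (0.9, 0.1),   # grows very fast, weakly competitive
    "sustaining": (0.4, 0.5),   # slower growth, more competitive
    "inspiring":  (0.1, 0.9),   # slow to establish, strongest once present
}

def simulate(steps, dt=0.1, tau=50.0):
    shares = {name: 1 / len(modes) for name in modes}
    for step in range(steps):
        t = step * dt
        m = t / (t + tau)  # "maturity": 0 = abundant resources, 1 = scarce
        fitness = {n: (1 - m) * r + m * c for n, (r, c) in modes.items()}
        mean = sum(fitness[n] * shares[n] for n in modes)
        # replicator update: modes with above-average fitness gain share
        shares = {n: shares[n] * math.exp(dt * (fitness[n] - mean))
                  for n in modes}
        total = sum(shares.values())
        shares = {n: x / total for n, x in shares.items()}
    return shares

early = simulate(100)    # exploring leads while resources are abundant
late = simulate(4000)    # inspiring eventually out-competes the rest
```

Run briefly, the fast-growing exploring mode dominates; run long enough, the strongly competitive inspiring mode takes over, mirroring the succession described above.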

Examples of Incremental and Developmental Rubrics

            Many people use AAC&U’s VALUE Rubrics, which have helped move academic assessment toward multidimensional thinking rather than leaving it interred in the simple-minded grade. But incremental rubrics miss the opportunity to stimulate transformative learning.

            Comparing a dimension from the VALUE Rubrics for writing to one created with transformative learning in mind reveals how assessment can address either incremental or transformative learning. The first example is from the AAC&U VALUE Rubrics and the second from a group of faculty involved in a writing across the curriculum program at Wilmington College. Those faculty had been trained in using cascading developmental interviews[2] to create rubrics.

            AAC&U VALUE Rubrics. The dimensions of the AAC&U VALUE Rubrics, such as the “Context of and Purpose for Writing” dimension below, are primarily incremental. Instructors could use them to encourage students to do more of something (e.g., pay more attention to the context and purpose), but they do not suggest how students might transform their practices.

            Context of and Purpose for Writing. This includes considerations of audience, purpose, and the circumstances surrounding the writing tasks:

·       Capstone 4. Demonstrates a thorough understanding of context, audience, and purpose that is responsive to the assigned task(s) and focuses all elements of the work.

·       Milestone 3. Demonstrates adequate consideration of context, audience, and purpose and a clear focus on the assigned task(s) (e.g., the task aligns with audience, purpose, and context).

·       Milestone 2. Demonstrates awareness of context, audience, purpose, and to the assigned task(s) (e.g., begins to show awareness of audience's perceptions and assumptions).

·       Benchmark 1. Demonstrates minimal attention to context, audience, purpose, and to the assigned task(s) (e.g., expectation of instructor or self as audience).

            Wilmington College Writing Across the Curriculum. Rubrics designed for transformation are developmental and contain ideas that instructors can use to motivate change. In the Wilmington College Writing Across the Curriculum dimension below, it’s easy to imagine an instructor telling a student that they have captured the topic, but that they might now start thinking about what they want to accomplish in each part of the paper. This would be a whole new practice for the student, not just more of a continuing one. It’s also easy to imagine a student thinking, “It never occurred to me that I have to tell the readers how every section relates to the purpose.”

            Goal-Oriented Organization. This concerns what the assignment accomplishes:

·       Beginning: Disconnected: States a topic but no particular goal. If a thesis or goal is evident, it may not connect to the body. Little to no logical progression of ideas or conclusion.

·       Exploring: Topic-Driven: Assignment is topic-driven rather than goal-driven, and the assignment goal is not present throughout the body.

·       Sustaining: Planned: Fulfills the objective through a logical presentation of evidence. Knows the function of each part of the paper in relation to the assignment goal.

·       Inspiring: Authoritative: Persuades the reader while remaining grounded in an objective discussion of evidence. Work is cohesive and offers new insight.

            To implement the new practice, the student would need to examine their writing practices by reflecting, assessing their own thinking, and talking with others. They would also have to plan how to connect each section of their writing to their overall purpose and rehearse it by doing it over and over. Experienced teachers intuitively know how to start their students on a new mode of practice with a dilemma, help them examine it, enable them to use it, and support their performance. The DEEP modes of commitment play over and over in each transformation between modes within every dimension of a field of expertise. The developmental rubrics, and even their one-word titles, help teachers plan more systematically and communicate more easily about the transformations.

Transformative Learning Beyond the Person

            The writing rubrics help to distinguish incremental from transformative learning, but the distinction goes far beyond writing. Over 300 faculty from over 50 disciplines found it easy to describe a half dozen or even a whole dozen dimensions of transformation, first from beginning to exploring, then to sustaining, and ultimately to inspiring modes of practice. Besides this educational use, there is another application of transformative learning that is as far-reaching and thought-provoking as its influence on student development: communities and organizations widely dispersed across time and space also transform.

Theory of Transformative Learning in Communities

            In 1999, Dirlam, Gamble, and Lloyd rated over 900 articles written from 1930 to 1992 and randomly selected from Child Development and Developmental Psychology. They found that the historical development of research practices followed exactly the same pattern as individual development: it, too, depended on growth rate, competitive strength, and resource level. Because of this remarkable similarity, we can expect transformative learning to apply to historical changes in communities of people across decades just as it applies to individuals over a few months or years.

Example of Transformative Learning in Higher Education Assessment

            Because the same dynamics work in individual development as in historical development, we can expect the transformative sequence to work there as well. Can we use the DEEP modes of commitment to understand how assessment should change? For example, we are still exploring how to use the literature on learning in higher education assessment. So a good question for this example becomes: how might we use the DEEP modes of commitment to transform assessment into a more sustainable mode of using the literature on learning?

            Disorienting dilemma. Higher education is falling behind. Industry after industry has entered the Age of Intelligence and is doing deep analyses of massive datasets. Right now, however, higher education has no way even to share stated program outcomes across multiple institutions. Instead, we rely on opinion leaders and cherry-picked articles, the same strategies that are used to undermine climate and conservation efforts. People have begun to argue that there has been no progress in assessment in the last 20 years. If so, our field has learned neither incrementally nor transformatively. As long as we rely on opinion leaders and cherry-picked articles, we can expect even worse political undermining than climate and conservation efforts have suffered.

            Examining. Accreditation agencies define standards statements without documenting the massive social science literature on learning and teaching. Programs often define outcomes without so much as an analysis of the journals in their fields. Beyond these weaknesses, there is currently no way even to access an unbiased collection of the outcomes for any field.

            Individuals and associations are proclaiming lists of a few handfuls of “high impact practices.” These are based on a study that included less than one-millionth of the possible course designs that would result from even a simple analysis. One such analysis used 5 options for each of 6 dimensions (location, instructor role, social context, preparation expected, resources required, and evaluation basis), with each option used for one of 5 durations (none, a day, a week, a month, or daily). That gives 25 option-duration choices per dimension, or nearly a quarter billion possible patterns across all six. That a few handfuls of practices should be proclaimed for all occasions reveals the absurdity of relying on the best-marketed practices.
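The arithmetic behind the “quarter billion” figure can be checked directly. The sketch below assumes the reading in which each of the 5 options in a dimension is paired with one of the 5 durations:

```python
# Count the possible course designs in the simple analysis described above,
# under the assumption that each dimension pairs one of 5 options with
# one of 5 durations.

options_per_dimension = 5
durations = 5
dimensions = 6  # location, instructor role, social context,
                # preparation, resources, evaluation basis

choices_per_dimension = options_per_dimension * durations  # 25
patterns = choices_per_dimension ** dimensions             # 25**6

# patterns == 244,140,625 — "nearly a quarter billion" course designs
```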

            Enabling. There are a few hopeful signs. Peggy Maki (2017) has published a book calling for real-time assessment. We have known for more than half a century that longer delays of feedback produce less learning. The typical “close-the-loop” cycle delays the feedback so much that no current student benefits from it; it becomes an autopsy of the learning it claims to assess. Just becoming aware of the need for real-time assessment is progress, but the progress is empty unless instructors actually assess their students in real time. Jack Mezirow’s wonderful analysis of transformative learning is also becoming better known, but that progress is likewise empty unless instructors use it to inspire individual students.

            In small classes, transformative use of developmental rubrics happens spontaneously as soon as faculty have used the rubrics often enough to remember them. For large classes, Rachel Yoho has developed a fascinating machine-learning approach to providing real-time assessment. She created a set of assignment-related rubrics, had faculty use them to assess student papers, and then gave the graded papers one at a time to “train” the computer. She found that the program learned to assess as reliably as the humans but could do so quickly enough for a large class to get feedback within minutes.
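Yoho’s actual system is not described in detail here, but the general train-one-paper-at-a-time approach can be sketched with a deliberately simple bag-of-words model. Everything below is invented for illustration: the training sentences, the class design, and the similarity measure are assumptions, not her implementation.

```python
from collections import Counter

# Hypothetical sketch of the general approach, NOT Yoho's system:
# faculty-graded papers are fed in one at a time to build a word profile
# per rubric level; new papers are assigned the best-matching level.

LEVELS = ["beginning", "exploring", "sustaining", "inspiring"]

class RubricModel:
    def __init__(self):
        self.profiles = {level: Counter() for level in LEVELS}

    def train(self, paper_text, level):
        """Add one faculty-graded paper to its rubric level's word profile."""
        self.profiles[level].update(paper_text.lower().split())

    def assess(self, paper_text):
        """Return the rubric level whose word profile best matches the paper."""
        words = Counter(paper_text.lower().split())
        def overlap(level):
            profile = self.profiles[level]
            total = sum(profile.values()) or 1
            # frequency-weighted word overlap, normalized by profile size
            return sum(profile[w] * n for w, n in words.items()) / total
        return max(LEVELS, key=overlap)

model = RubricModel()
# invented examples of faculty-graded training papers
model.train("states a topic but wanders with no goal", "beginning")
model.train("driven by the topic alone, goal missing from the body", "exploring")
model.train("logical presentation of evidence fulfills the stated goal", "sustaining")
model.train("persuades with evidence and offers genuinely new insight", "inspiring")

level = model.assess("the evidence is presented in a logical way to fulfill the goal")
```

A production system would use a far richer feature set and many more training papers, but the workflow is the same: human grading supplies the training signal, and the trained model returns feedback fast enough for real-time use in a large class.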

            Performing. Your AALHE Board has undertaken two initiatives designed to move higher education assessment into better use of the literature on learning. The first is the Knowledge Development Task Force, which reports to the President and will initially be chaired by Teresa Flateby and me. If you are an AALHE member and are interested in joining this Task Force, please contact either David Dirlam or Teresa Flateby.

            Its mission is to identify and facilitate ways to advance the development of a body of knowledge devoted to assessing and improving student learning in higher education. Strategies include creating a bibliography of knowledge-development sources relevant to AALHE, identifying key strategies from them, and creating one or more key databases. The members will work to identify advances in the last two decades, including content analysis of disciplinary journals in library databases. Such analysis would involve (1) scientific methods for establishing improvements in student learning, (2) the design of assessment procedures, (3) the interpretation of assessment practices and results, (4) academic leadership above assessment, (5) leadership of assessment research, and (6) identifying problems that could be solved in the next decade. Once problems are identified, it would be important to envision solutions, specify the resources needed for them, select solutions to propose to the AALHE Board, and facilitate the implementation of the selected solutions.

            The other initiative is for an AALHE Database of Learning Identifiers. If this gets the go-ahead, we will seek to build the sort of massive database that will lead to real understanding of the kinds of learning that programs aim to create across the U.S. and perhaps even beyond.


            Transformative learning is not just for individual students; organizations need it as well. Right now, higher education needs a transformation to more complex, open, permeable, sustainable, and better-justified approaches to understanding, assessing, and above all fostering learning. Use of the literature is one dimension; there are numerous others. AALHE and its Emerging Dialogues invite the recognition of dilemmas and the examination, enabling, and performance of new modes of assessment practice.



References

Dirlam, D. K. (2017). Teachers, Learners, Modes of Practice: Theory and Methodology for Identifying Knowledge Development. New York, NY: Routledge.

Dirlam, D. K., Gamble, K. L., & Lloyd, H. S. (1999). Modeling historical development: Fitting a competing practices system to coded archival data. Nonlinear Dynamics, Psychology, and Life Sciences, 3, 93-111.

Maki, P. (2017). Real-Time Student Assessment: Meeting the Imperative for Improved Time to Degree, Closing the Opportunity Gap, and Assuring Student Competencies for 21st-Century Needs. Sterling, VA: Stylus.

Mezirow, J. (1991). Transformative Dimensions of Adult Learning. San Francisco, CA: Jossey-Bass.

Taylor, E. W., & Cranton, P. (2012). The Handbook of Transformative Learning. San Francisco, CA: Jossey-Bass.

[1] I am grateful to Jane Souza for an insightful discussion of an early draft of this Emerging Dialogues contribution.

[2] The cascading process begins with a group meeting where an experienced developmental interviewer conducts one interview and a group member does another. The group finishes the interviews in pairs on their own; they combine the results into one set of rubrics; and after sufficient use they meet to refine definitions.
