Transformative Learning Using AI-Mediated Pedagogy

We’ve all experienced a situation in which we are scrolling through the socials, talking to a colleague, or watching a show, when we realize something is not sitting right. Whether it is a comment, an image, or a sound, a rational (logos) contradiction to our existing beliefs (pathos) puts us off balance. There is an uncomfortable feeling that something doesn’t quite make sense, sometimes even to the point of questioning what we have always believed. Psychologists call this phenomenon cognitive dissonance: the psychological discomfort a person feels when holding two or more conflicting beliefs, or when beliefs conflict with behaviors. In the age of AI-enhanced education, teachers and learners are finding themselves in cognitively disorienting dilemmas, a disconnect between their existing assumptions and the transformed ways they must view an AI-mediated world.
What happens in traditional human-to-human learning when moments of cognitive dissonance do not originate from a person? What if the source of the dissonance is a machine, a chatbot designed to mimic human responses? How does the learner resolve the perceived disconnect between human and algorithmic interactions? Jack Mezirow (1991), a leading adult learning theorist, posited that learning must involve not only acquiring new knowledge but also transforming a learner’s existing way of knowing, or frame of reference. For cognitive transformation to happen, deep-seated assumptions, beliefs, and core values must be recontextualized within an experience or situation that directly conflicts with an existing worldview. Mezirow characterized these moments as “disorienting dilemmas,” cognitive clashes that are necessary for deeper transformation to occur. In other words, transformation is an alteration of a person’s accepted frame of reference.
Mezirow’s work can be seen as an extension of Leon Festinger’s (1957) classic theory of cognitive dissonance. Festinger noted that humans crave consistency; when there is a contradiction or mismatch, the result is psychological discomfort and internal mental tension. Typically, we associate these mismatches with human interactions, as when a breakdown in register between teacher and learner disrupts communication, resulting in conversational collapse and cognitive dislocation. As AI becomes increasingly woven into traditional human learning spaces, learners will experience similar dislocations as adaptive platforms challenge their reasoning with instant counterarguments and contradictions. Large language models (LLMs) that support AI tutoring programs are exceptionally adept at generating cognitive dissonance because their algorithms can create divergent perspectives on demand to challenge conventional thinking. From this perspective, AI teaching modalities could be considered a source of productive discomfort.
Of course, as with the implementation of any disruptive technology, there are both potential opportunities and risks to consider. Cognitive dissonance can be uncomfortable, but it is necessary to compel learners to look beyond simplistic assumptions.
If used ethically and responsibly, AI can advance a Mezirow-style cognitive transformation, deepening a learner’s understanding of meaning-making in digital modalities. If careful preparation and intentionality are absent from instructional planning and learning outcomes, however, discomfort becomes outright rejection, leading to what Vygotsky (1978) termed “microgenetic regression.” A proper balance is needed to ensure that AI creates prudent dissonance, so that students are cognitively challenged through contradiction while still emerging from AI interactions with deeper, transformational understanding. Because of AI’s unusual immediacy in generating responses, learners may also mistake a quick answer for an accurate one. Without proper scaffolding, this can lead to premature acceptance, which neutralizes dissonance, stagnates reflection, and inhibits cognitive growth. While AI-mediated learning provides an entry point for disorienting dilemmas, true transformation still requires critical reflection and rational discourse, which remain within the purview of human dialogic interactions.
What we know from the early implementation of AI into conventional human-to-human learning spaces is that it has the capacity to be a true provocateur of cognitive dissonance and disorienting dilemmas. AI-mediated education thus has the potential to bifurcate along two paths: (1) surface, superficial resolution, or (2) deep, long-term transformation. It is imperative that intentional pedagogic ecologies frame instruction and scaffold learning so that AI functions simultaneously as a mirror (reflecting preconceived ideals) and a dissonant provocateur (destabilizing those same ideals). The locus of these ecologies is in ensuring that human facilitation remains at the center of AI learning spaces. As more educational entities and stakeholders cede instructional practices to AI, the educator’s task becomes that of a human-machine collaborator. Their responsibility lies not in shielding learners from the dissonance but in ensuring that cognitive discomfort is used productively. While the dissonance is initially unsettling, with intentional scaffolding, true transformational learning can occur.

Rebecca Blankenship
Rebecca J. Blankenship is an award-winning educator and researcher with over 25 years of teaching experience. Her research examines the ecologies of meanings as a systems-based, hermeneutic approach to ethics in AI and gen-AI teaching and learning modalities. She is currently an Associate Professor in the College of Education at Florida Agricultural and Mechanical University.