“Embracing an untested tool like AI without the proper ethical questions and more meaningful, iterative reflection will open Pandora’s box,” explains award-winning educator and researcher Rebecca J. Blankenship.

AI, and genAI specifically, is being precipitously accepted as the technical panacea for all of our educational quandaries. After all, that which is within our technical capability and potential is not necessarily within our ethical boundaries. The machine's potential must be tempered by the humanology of our ethical principles.
As educators and educational professionals, we are constantly inundated with the latest trends in teaching methods and learning modalities. Each new technique or emerging technology arrives with the familiar promise that this approach or that tool will revolutionize instruction and strengthen student learning outcomes. School districts and educator preparation programs, eager to be seen as advanced, competitive, and progressive, often forgo measured, rational vetting in favor of rapid adoption and immediate implementation. This hasty adoption is heuristic: the judgment of whether integrating AI into traditional teaching and learning practices will yield long-term, positive benefits rests primarily on what the technology can do in the immediate. Thus the question emerges: what evidence do we have that the heuristic acceptance of emerging technologies like AI can adversely affect teaching effectiveness and learning gains?
We do not have to look far into the recent past for examples of the negative effects of hastily and heuristically implementing new technologies in traditional human-based teaching and learning spaces. The most recent case in point is the cautionary tale of smartphones. When mobile phones surged in popularity in the late 1990s and early 2000s, they were initially a novelty used to make calls on the go, send short messages, and play games. Educators at first treated their appearance in classrooms as a nuisance, managing it as a disciplinary issue rather than as a potential learning tool. The development of smartphones in the early 2000s decisively changed how mobile phones were used to augment or supplement conventional learning tools.
Rather than being perceived as an annoyance, phones became platforms: tech companies quickly introduced all types of apps and interactive tools that teachers and students could use to complete a range of tasks, and schools embraced them with much enthusiasm. While there were certainly early projections of the transformational promise smartphones could bring to conventional learning modalities, those promises were tempered as we slowly came to the realization that deeper learning had devolved into cognitive laziness. There was such a rush to use the most up-to-date phones and apps that there was little deliberation, no intentional guardrails, and no purposeful frameworks put in place. The heuristic euphoria of using the shiny new apps on the latest iPhone completely overshadowed the needed value-driven (ethos) structures of accountability, human autonomy, and transparency.
The lack of pause, questioning, and reflection plunged teachers and students into a sort of technical inertia that created cognitive, emotional, and social disequilibrium, leaving them unsure of the long-term implications of their heuristic naïveté. Thus, we seem to be left in a perceptive stasis, undecided about how to responsibly move forward with new technologies while still uncertain about the lingering effects of impulsively adopting smartphones.
We now find ourselves in a similar situation with the precipitous acquiescence that AI, specifically genAI, is the technical panacea for all of our educational quandaries, and that somehow relinquishing partial or complete human agency to an automaton is the next logical advancement in teaching and learning. We must ask ourselves whether we have truly learned our proverbial lesson from the swift descent into the smartphone abyss. After all, questions about the long-term cognitive, emotional, and social effects remain active and ongoing subjects of cultural, educational, psychological, and sociological research, to name just a few fields.
Embracing an untested tool like AI without the proper ethical questions and more meaningful, iterative reflection will open another Pandora's box. Being innovative does not absolve us of our responsibility to question the efficacy of AI in teaching and learning spaces before we precipitately embrace it; otherwise, we invite unintended consequences in the name of educational and technological advancement. Without sufficient inquiry, we are destined once again to mistake innovative convenience for consequence-free evolvement.
We understand that AI in educational contexts is here to stay. The question, then, is not whether we can integrate it into traditional human-to-human teaching and learning experiences and modalities, but whether we can do so ethically, intentionally, and wisely, while still maintaining human worth through technical innovation.
About Rebecca J. Blankenship: Rebecca is an award-winning educator and researcher with over 25 years of teaching experience. Her current research examines the ecologies of meanings as a systems-based, hermeneutic approach to ethics in AI and genAI teaching and learning modalities. She is currently an Associate Professor in the College of Education at Florida Agricultural and Mechanical University.