I was recently invited to participate in a panel discussion on artificial intelligence in education (AIED) at the Training and Technology Summit organized by EducationScape. In preparing for that conversation, I took some time to ponder my recent reading on AI. What follows are my musings on both opportunities and risks. We are still at the beginning of this major period of transformation in the world of education and beyond, and it seems clear that AI offers the possibility of wide-ranging, substantive, positive impact on learning and teaching. At the same time, we would do well to anticipate potential drawbacks for, as Roy Amara noted, “We tend to overestimate the effect of technology in the short run, and underestimate the effect in the long run” (cited in Fadel et al., 2024, p. 34).
In terms of the student experience, there is the potential to personalize learning and to provide adaptive instruction, tailored feedback, and virtual tutoring, all of which may allow students to progress more effectively on their personal learning journey (Cope et al., 2020; Shelton & Lanier, 2024; Sidorkin, 2024). For teachers, AI tools may reduce the time needed for administrative tasks, thus freeing them for more creative thinking about their practice and for pastoral care.
For society at large, AI enables us to
All of this is possible if we keep our focus on prioritizing inclusion, equity, and social justice in developing AIED. However, these potential benefits require a complete re-evaluation of our approach to learning, teaching, and assessment, as well as a rethinking of what we believe the purpose of education to be.
A number of things seem to be getting in the way of progress, such as inadequate teacher preparation. As Doug Newton wisely points out, we would be wrong to think that with AI “teachers will need less training, less expertise, and less knowledge. Instead they need more preparation with deeper educational knowledge and more skills - but not necessarily the same knowledge and skills as before” (Newton, 2022, p. 114).
Moreover, our assessment systems remain stubbornly traditional and are becoming increasingly irrelevant in an AI world. Meanwhile, we see contradictory policies in many schools: AI tools may be allowed for teachers, for example, but restricted for students. This creates confusion and often reflects a misplaced focus on “cheating” and “plagiarism” when we probably need to be thinking very differently about how to redefine concepts such as authorship, collaboration, and creativity in a poststructural and posthuman world (Peters et al., 2023).
There's also a disconnect between AI developers and educators. Not always, but in some cases, the companies creating these tools lack a background in education, pedagogy, and the science of learning, which may result in products that don't align with classroom realities or priorities for learning and that can drain already tightly stretched financial resources for some schools (Buckingham Shum & Luckin, 2019; Eynon & Young, 2020).
Finally, I'm deeply concerned about issues of equity. The digital divide threatens to create a two-tiered system with AI-enhanced learning for some students and traditional approaches for others.
So, despite the potential for positive transformation, there are any number of additional risks, such as:
That is a pretty terrifying list, and it could likely include other concerns. And yet if, in contrast, educational leaders approach AI too hesitantly, there are some very different risks. As noted above, we might end up creating an AI literacy gap that leaves both teachers and students ill-equipped to navigate increasingly AI-integrated societies. Without a clear understanding of AI capabilities and limitations, students may graduate unprepared for the complex technological landscape they'll encounter beyond formal education. This isn't merely about workforce preparation. In fact, I absolutely resist an instrumentalist view of education as preparing children first and foremost for the world of work. What I am referring to is enabling informed citizenship in a technologically mediated world (Shelton & Lanier, 2024).
Rather than layering new technologies over existing practices, that is to say, using powerful AI tools simply to reinforce traditional pedagogical approaches, we could reimagine what learning and teaching might become by moving forward on the long called-for development of transdisciplinary, competency-based approaches and mastery learning.
So, what do we need to do to engage positively with AI in education? Teacher preparation has to be our first priority. We need educators who aren't just AI users but discerning consumers who can select and implement appropriate tools for specific purposes. This preparation should be guided by frameworks developed by educators themselves, not industry. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has created excellent foundations that we can build upon.
I'm particularly drawn to Daniel Schiff's distinction between education for AI and AI for education (Schiff, 2022). We should develop tools serving holistic educational aims rather than narrowly preparing students for the workforce. This requires partnerships ensuring equitable access, meaningful student input, and ongoing evaluation of impact. The goal should be harnessing innovation while preserving education's deeper purpose of developing well-rounded humans who are ready for thoughtful, critical engagement with a complex world.
Rigorous and methodologically sound research should guide us in moving forward, and this can take place on several levels, starting with small-scale classroom and school-based research. A couple of months ago, The Bridge at Wellington College distributed a survey to teachers. Although the sample size is small, it does reflect a mixed population of both primary and secondary school teachers from state and independent schools in the United Kingdom as well as internationally. The survey addressed questions about teachers’ longevity in the profession, the frequency and nature of their use of AI and, most importantly, their views on data security, bias, and ethical issues related to AI in education.
Many of the concerns they expressed are, in my opinion, justified and not in the least surprising. Quite a few teachers expressed concerns about academic integrity and plagiarism, which, again, I believe indicates a need to reconsider what concepts like authorship, creativity, and collaboration even mean in an AI world.
Teachers worry about deskilling and dependency (for both students and themselves) and a potential diminishing of the creative thinking and independence of mind we should be fostering. Some expressed concern about equity, data security, and environmental impacts.
Finally, teachers want guidance on how to:
Such concerns reflect thoughtful engagement with AI's implications: not resistance to change, but legitimate questions about navigating this transition responsibly.
In order to move forward purposefully and ethically we need to engage in collaborative research at multiple levels. In classrooms, we should support teachers as practitioner-researchers evaluating specific AI tools against clear learning objectives. At the institutional level, we need frameworks evaluating impact on teaching effectiveness and learning outcomes. Finally, strategic research partnerships with universities are essential for investigating cognitive impacts, pedagogy, equity implications, and cultural shifts among other issues. Such a multi-tiered approach to research requires transparency and knowledge-sharing across institutions. By gathering evidence systematically at these different levels, we can make decisions that leverage AI's potential while staying true to core educational values. The alternative, adoption without evidence, risks turning our schools into spaces for undisciplined trial and error rather than thoughtful learning environments.
What might a great classroom look like in 2030? One often hears educators trying to project into the future. It may very well be that the physical space of the learning environment will change, and it’s probably unwise to suggest what new hardware we might find there: virtual reality (VR) headsets or robot classroom assistants, perhaps. But this is worthy of a longer discussion in and of itself.
Personally, I would prefer to think about pedagogical approaches and, most importantly, a reimagining of the curriculum. Much of my thinking comes from UNESCO’s Reimagining our Futures Together: A New Social Contract for Education (2021) and the UNESCO IBE Competency Framework (2017), with its focus on the global competencies of lifelong learning, self-agency, interactively using diverse tools and resources, interacting with others, interacting with the world, multi-literateness (including digital, cultural, financial, health, and media literacies), and transdisciplinarity.
Just as today, a great classroom in 2030 should be defined not by the technology one finds in it, but by how that technology supports human-centered, equitable learning. It is evident that AI has a prominent place in the future of education, but only in order to support and facilitate a deeper and much-needed transformation toward systems of education that are fully inclusive, responsive, future-facing (not future-proof), and profoundly human. It will ideally be a space where collaboration, creativity, curiosity, and critical reflection are central to the learning process and where AI tools are used to enhance learning, to facilitate feedback, and to support collaborative inquiry. Assessment will be authentic, formative, continuous, and personalized, making use of AI tools to track mastery and to support students along individual learning pathways (Cope et al., 2020). At the same time, we will have preserved the teacher's role in making ethical, holistic judgments.
Many of us hope that by 2030 the curriculum will have evolved such that, instead of traditional direct instruction, content delivery, and summative assessment, we will see competency-based, transdisciplinary learning grounded in real-world challenges and systems thinking. The classroom of 2030 will foster what UNESCO calls “learning to become,” thinking about “how knowledge and learning can shape the future of humanity and the planet.” Students will develop critical AI literacy: not just how to use AI tools, but how to question, critique, and ethically engage with them (Peters et al., 2023). They will practice multi-literateness, communicating across platforms, disciplines, and cultures. The focus will be on agency.
Teachers will be empowered to use AI with discernment, to reduce administrative load, yes, but more importantly, to design and guide rich learning experiences (Newton, 2022). They will be supported by ongoing professional development that is grounded in educational values and collaborative inquiry, not tech hype. Teachers will be recognized for what they have always been: knowledge producers and key stakeholders in transforming education. Collaboration among practitioners, engagement in research, contribution to the development of new pedagogical practices, and critical reflexivity will be intrinsic to teachers’ work.
In the 2030 classroom I envision, AI will be used to support linguistic and cultural diversity, enabling students to learn in ways that honor and draw from their identities and experiences. This will be a space that privileges the human elements of education, one in which AI will be used to enhance that which makes education profoundly human: connection, creativity, critical thinking, and empathy. We will have succeeded when AI is used to help us reimagine what education can become when equity, purpose, and imagination are at the center.
[1] Freiberg’s (2024) reference to Bernard Stiegler’s use of the term ‘proletarianization,’ the loss of knowledge and skills through automation, is pertinent here.
References
Buckingham Shum, S. J., & Luckin, R. (2019). Learning analytics and AI: Politics, pedagogy and practices. British Journal of Educational Technology, 50, 2785–2793. https://doi.org/10.1111/bjet.12880.
Cope, B., Kalantzis, M., & Searsmith, D. (2020). Artificial intelligence for education: Knowledge and its assessment in AI-enabled learning ecologies. Educational Philosophy and Theory, 53(12), 1229–1245. https://doi.org/10.1080/00131857.2020.1728732.
Delgado, R. (2023, July 11). The risk of losing unique voices: What is the impact of AI on writing? Forbes. https://www.forbes.com/councils/forbesbusinesscouncil/2023/07/11/the-risk-of-losing-unique-voices-what-is-the-impact-of-ai-on-writing.
Eynon, R., & Young, E. (2020). Methodology, Legend, and Rhetoric: The Constructions of AI by Academia, Industry, and Policy Groups for Lifelong Learning. Science, Technology, & Human Values, 46(1), 166-191. https://doi.org/10.1177/0162243920906475 (Original work published 2021).
Fadel, C., Black, A., Taylor, R., Slesinski, J., & Dunn, K. (2024). Education for the Age of AI. Center for Curriculum Redesign.
Freiberg, C. (2024). Generative AI and the necessity of an existential crisis for the liberal arts. Educational Philosophy and Theory, 56(14), 1428–1438. https://doi.org/10.1080/00131857.2024.2409744.
Gillani, N., Eynon, R., Chiabaut, C., & Finkel, K. (2023). Unpacking the "Black Box" of AI in Education. Educational Technology & Society, 26(1), 99+. https://doi.org/10.30191/ETS.202301_26(1).0008.
Gourlay, L. (2024). Generative AIs, more-than-human authorship, and Husserl's phenomenological 'horizons'. Proceedings of the Fourteenth International Conference on Networked Learning 2024.
Holmes, W., Porayska-Pomsta, K., Holstein, K., et al. (2022). Ethics of AI in Education: Towards a Community-Wide Framework. International Journal of Artificial Intelligence in Education, 32, 504–526. https://doi.org/10.1007/s40593-021-00239-1.
International Commission on the Futures of Education. (2021). Reimagining our futures together: A new social contract for education. UNESCO.
Kong, S. C., Korte, S. M., Burton, S., Keskitalo, P., Turunen, T., Smith, D., … Beaton, M. C. (2024). Artificial Intelligence (AI) literacy – an argument for AI literacy in education. Innovations in Education and Teaching International, 1–7. https://doi.org/10.1080/14703297.2024.2332744.
Marope, M., Griffin, P., & Gallagher, C. (2017). Future competences and the future of curriculum: A global reference for curricula transformation. UNESCO-IBE.
Milberg, T. (2024). The Future of Learning: How AI is revolutionizing education 4.0. World Economic Forum. https://www.weforum.org/stories/2024/04/future-learning-ai-revolutionizing-education-4-0/.
Mulvihill, T. M., & Martin, L. E. (2024). Voices in Education: Artificial Intelligence (AI) and Teacher Education: What Key Points Do Teacher Educators and Policy Makers Need to Consider Related to AI? The Teacher Educator, 59(3), 279–281. https://doi.org/10.1080/08878730.2024.2353441.
Newton, D. P. (2022). Living with Digital Teachers: AI, the classroom, and the future. ICIE.
Perrotta, C., & Selwyn, N. (2019). Deep learning goes to school: toward a relational understanding of AI in education. Learning, Media and Technology, 45(3), 251–269. https://doi.org/10.1080/17439884.2020.1686017.
Peters, M. A., Jackson, L., Papastephanou, M., Jandric, P., Lazaroiu, G., Evers, C. W., Fuller, S. (2023). AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses. Educational Philosophy and Theory, 56(9), 828–862. https://doi.org/10.1080/00131857.2023.2213437.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending various regulations and directives. Official Journal of the European Union, L 1689, 12 July 2024.
Reiss, M.J. (2021) ‘The use of AI in education: Practicalities and ethical considerations’. London Review of Education, 19 (1), 5, 1–14. https://doi.org/10.14324/LRE.19.1.05.
Schiff, D. (2022). Education for AI, not AI for Education: The Role of Education and Ethics in National AI Policy Strategies. International Journal of Artificial Intelligence in Education, 32, 527–563. https://doi.org/10.1007/s40593-021-00270-2.
Shaping the Future of Learning: The Role of AI in Education 4.0. Insight report. (2024). World Economic Forum.
Shelton, K., & Lanier, D. (2024). The Promises and Perils of AI in Education. Lanier Learning.
Sidorkin, A. M. (2024). Artificial intelligence: Why is it our problem? Educational Philosophy and Theory, 1–6. https://doi.org/10.1080/00131857.2024.2348810.
Stolz, S. A., Winterburn, A. L., & Palmer, E. (2024). Is learning with ChatGPT really learning? Educational Philosophy and Theory, 56(12), 1253–1264. https://doi.org/10.1080/00131857.2024.2376641.
Yang, S. J. H., Ogata, H., Matsui, T., & Chen, N. S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence, 2, 100008. https://doi.org/10.1016/j.caeai.2021.100008.
Karen Taylor serves as Head of Educational Research for The Bridge at Wellington College and Associate Professor in Practice at Durham University’s School of Education. Prior to arriving in the United Kingdom, Karen was Director of Education and the Institute of Learning and Teaching at the International School of Geneva. Over the course of her career, she has taught or worked in a range of well-respected institutions of higher learning and secondary education in the United States and in Europe. Her research interests include eighteenth-century French pedagogical writings, global citizenship education, intercultural learning, inclusion, and plurilingual education.