“Can I ask a question outside the simulation? Would Rawls be down with us using AI [artificial intelligence] to work on this?”
And just like that, one of my most curious students derailed the next 30 minutes of class—right in the middle of a summative assessment. What was meant to be a focused deliberation turned into a sprawling, heated discussion on political philosophy, ethical use of technology, and worries about the future, all sparked by a precocious 10th grader.
We were deep into a Structured Academic Controversy (SAC) during the culminating, student-designed simulation: a fictional United Nations (UN) summit set in 2030, amid an imagined third term of an isolationist administration. Students had assumed disciplinary roles—geographer, economist, psychologist, business analyst, political scientist, and historian—and were grappling with a question that was hypothetical but all too real for a group of international students in Ethiopia: how do we protect the Sustainable Development Goals (SDGs) in a world pulling back from global cooperation?
In that moment, this student looked pointedly at me, then at her classmates. They glanced down at their AI-generated notes, quietly buffering. Part of me groaned inside—an angel on one shoulder warned that this was a collaborative summative on the verge of going completely off-road depending on how I played it. The devil on the other shoulder nudged me—this was a chance to embrace the messy flow of an inquiry-driven classroom. I gave in and asked the class, “So, what do we think? Would Rawls have approved?” And away we went.
This spontaneous line of inquiry, and the months of work leading up to it, framed the final unit in our Individuals & Societies course. Throughout the year, students had been walking with four philosophers as thought partners: Rawls, Locke, Aristotle, and Rousseau. I hadn’t introduced these figures as historical voices to memorize. We used them as ethical lenses—ways to interrogate power, equity, and agency. The original intent was to use their philosophical contributions to help students frame rationales for action related to the UN SDGs. But as the year progressed, they began turning to those same tools to confront the ethical dimensions of living and learning alongside the emerging challenges and opportunities of AI.
Keeping the Human in the Loop
From the outset, the unit was built around a human-centered approach to generative AI—treating students not just as users, but as the ethical and intellectual anchors of the process. To support this, we introduced the “PPP” framework—Persona, Problem, Parameters—adapted from UnconstrainEd, which helped students structure their interactions with large language models. As AI became integrated into our classroom workflow, the ethical necessity of this approach became increasingly clear.
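To make that structure concrete, a PPP-style prompt (my own illustrative example, not one drawn verbatim from UnconstrainEd’s materials) might read: “You are a development economist advising a UN delegation (Persona). I need to understand whether a carbon tax can survive in a protectionist trade environment (Problem). Explain it at a 10th-grade reading level, in under 300 words, and flag any claims I should verify against primary sources (Parameters).” The point of naming the persona, the problem, and the parameters up front is to put the student’s intent on the record before the model generates a single word.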
I introduced the metaphor of the “monkey in the middle” to explain our role in working with AI—not just accepting its output, but staying alert, responsive, and deliberate in how we engage with it. I asked if they remembered the playground game. They nodded. “Now imagine,” I said, “AI is the ball. It’s being tossed back and forth above you while the temptation to slack off and procrastinate until the last second, letting AI do the work and the thinking for you, keeps the ball out of your hands. Your job is to keep your eyes on the ball, time your lunge, grab hold, and maintain possession.”
That’s when a hand shot up and a voice interrupted: the same litigator who would eventually derail an entire summative with a single question. “But if we’re the monkey,” she asked, “doesn’t that mean we’re not in control?” I teacher-panicked. Why was she trying to pull the rug out from under my clever metaphor? “That’s why you have to win the game,” I instinctively bluffed. The class chuckled and, to my relief, nodded, accepting the explanation. So I rolled with it. And the image has stuck.
The power of that image—the monkey in the middle—is that it captures the mindset I hope to instill. It’s not enough to know how to prompt an AI tool or critique its output. What matters is cultivating the disposition to remain present, curious, and responsible within the decision loop. The metaphor helped students picture themselves at the heart of the action, not on the sidelines. And that mindset was essential to understanding their role in shaping, not just consuming, the flow of information, ideas, and arguments in collaboration with a machine.
Why the sense of urgency in the metaphor? AI and education thinker Eric Hudson argues that we shouldn’t be teaching AI skills just because students might need them someday; we should be engaging with AI because of what’s happening now. Students are already using these tools to draft papers, find shortcuts, and explore ideas. While job forecasts for 2030 are certainly relevant (World Economic Forum, 2025), our responsibility is to equip them with the mindset and practices they need to navigate AI ethically today. The classroom is already changing. The loop is already in motion. The task before us is to guide students in how to inhabit that loop with conviction, courage, and compassion.
Here’s what our learning revealed when we put that mindset into action:
1. The Monkey Is the Ethical Anchor
In the AI-Human Decision Loop, the most important step wasn’t the prompting; it was the reckoning: confronting the ethical weight of their choices. During the SAC, students weren’t just arguing their positions as political scientists or geographers. They paused mid-discussion to question the ethical foundations of their arguments. One student, acting as a business analyst, stopped to ask if it was appropriate to use AI-generated data visualizations when other team members couldn’t verify them. Another, from the political science team, questioned whether their AI-generated summary of LGBTQ+ rights frameworks glossed over regional and cultural complexities. Had it stripped away nuance and emotional depth?
Rawls’s principles gave them a framework. His idea of the “veil of ignorance,” that justice emerges when we reason as if we didn’t know our own status or advantage, helped students think beyond themselves. In these discussions, AI wasn’t just a tool. It became a mirror. If a classmate didn’t know how to prompt as well, was it fair to rely heavily on AI-generated writing? Were they contributing to a system that helped or hindered equity?
2. Fluency Over Fire and Forget
In our class, AI use was always optional. Some students were skeptical at first; they didn’t want a robot doing their work for them or stealing their ability to think. But as they saw the intentionality and meaningfulness of the work we were doing, and realized it wasn’t like the casual, unstructured “fire and forget” prompting they had witnessed among their peers, they began to see AI not as a threat to their thinking but as a way to strengthen it.
Fluency developed out of integrated practice. The PPP protocol gave them structure, and throughout the unit, each product developed with the help of AI also had a corresponding “AI-proof” component. Students used AI to help draft cover letters and resumes as part of their applications to serve as experts at the fictional summit, but then sat for one-on-one interviews with me. They crafted position papers and briefing presentations using AI-supported research and outlining tools, then presented their ideas in disciplinary groups without notes and participated in small group discussions. Their disciplinary peers, the people most likely to know whether they understood the material, held them accountable.
Finally, the whole class came together to deliberate on the central question: how do we continue to support the Sustainable Development Goals in a world retreating from multilateral cooperation? Each step of this process helped clarify the role of AI. It was a support for real thinking, not a shortcut around it.
Interestingly, the students who most readily embraced AI weren’t necessarily those who excelled at traditional academic tasks. They were often the entrepreneurial thinkers—students who preferred tinkering, modeling, and pitching over composing essays or conducting source analysis. Their results weren’t always polished, but their process was intentional. They didn’t use AI to bypass tasks; they used it to test, adjust, and refine their ideas with instinctive precision. For students who often felt sidelined in conventional assessments, this approach showcased their strengths. They felt engaged, capable, and empowered in ways that traditional tasks rarely allowed.
3. Aristotle’s Vision: AI’s True Potential Lies Beyond Equal Access. It’s in Designing Your Own Flourishing
Control of the decision loop allowed for multiple means of engagement and expression, empowering students at all levels to become the universal designers of their own learning. With growing fluency in AI, they gained agency to shape learning experiences in ways that made content more accessible and meaningful. Some used AI to understand rubrics in their home languages—French, Korean, Amharic—ensuring clarity and confidence. Others simulated practice interviews to build fluency in disciplinary vocabulary and professional discourse.
Students also used AI to level texts, moving beyond a mindset of “read this and give me answers” toward deeper processing of complex academic language. They realized that during the AI-proof elements of the summative cycle, answers untested through meaningful refinement wouldn’t hold up; they needed to actually understand the material. Requesting leveled versions of academic texts became a more effective strategy than simply memorizing and regurgitating AI-generated summaries. Peers called out unprocessed thinking as quickly as I could, making the learning authentic and accountable; when one student included a generic sports stock image in a presentation, a classmate asked, “Why you got basketball players for SDG 16, bro?” Most powerfully, they used AI to game out their arguments: “What are likely responses to this point from another expert during the summit?” and “What are some ways I can respond? Let’s play that out.”
In these moments, students began to frame their decisions through Aristotle’s notion of proportional equality, something we had studied as giving each what they need to flourish. This wasn’t a serendipitous realization; it was an awareness they consciously built of how Aristotle’s ideas applied to the work they were engaged in. They recognized that when they had the power to create their own proportional equality, they could achieve what Aristotle described as eudaimonia. Universal Design for Learning, they discovered, wasn’t about automated regurgitation. It was about ethical, empowered competence: the ability to identify and address obstacles to learning, whether speedbumps or roadblocks, in ways that best suited them.
4. The Loop Is Where Meaning Lives
One of my favorite reflections from a student at the end of the unit simply stated, “Researching with AI is harder than it looks.” Students began to realize that when they used AI without intentional design and meaningful grounding, it amounted to little more than hollow sentences that sounded smart but couldn’t stand up to authentic application.
Within the decision loops of our SDG Summit unit, AI use became discourse. Students didn’t just accept AI-generated summaries of global policy papers or SDG data; they tested them against academic articles, disciplinary frameworks, and real-world examples. The historians found that AI supercharged their ability to trace lines of continuity and causation, connecting the efficacy of institutions like the League of Nations, the UN, and NATO from World War I to the present day. Political scientists and psychologists approached their research from different angles, but many centered their work on gender equity and LGBTQ+ rights. Using AI allowed them to identify these intersections more quickly and build interdisciplinary strategies for addressing them.
They weren’t just demonstrating understanding; they were shaping it, drawing on their disciplines and the philosophical lenses we had studied. The decision loop helped them test, challenge, and refine AI outputs until their ideas could stand up to questions grounded in both disciplinary knowledge and ethical frameworks. Their classmates were the experts in the room, with AI as a thinking partner throughout. In the end, it wasn’t about efficiency. It was about building knowledge they could own, defend, and apply with purpose.
5. Locke’s Argument: Education Is Preparation for Liberty
The unit culminated in a whole-class summit negotiation. The guiding question we had started with anchored the discussion: How can we collaborate in innovative ways to harness diverse perspectives to ensure international sustainability and interconnection during the third term of an isolationist administration?
Students came prepared with annotated position papers, AI-assisted outlines, and highlighted excerpts from academic articles and UN reports. But in the deliberation itself, none of that could speak for them. The historians traced how the failures and successes of the League of Nations, UN, and NATO could inform present diplomatic strategies. Political scientists argued that without addressing gender equity and LGBTQ+ rights, any sustainability goals would lack legitimacy and public trust. Economists challenged whether carbon taxes could remain viable in a protectionist trade environment. Psychologists and business analysts debated how shifts in global leadership would impact community resilience and private sector adaptation. Throughout, I took notes, tracking contributions that met our assessment standards.
Across disciplines, students realized that every step of the unit had been preparation for this moment, and that they had been engaged in a second, meta-level deliberation all along: a deliberation with AI itself. Each decision loop had been an opportunity to build a kind of human-machine consensus—prompting, evaluating, refining, and ultimately deciding what was worth carrying forward. Drafting cover letters and resumes with AI hadn’t secured them disciplinary roles; they still had to defend their expertise in live interviews. Using AI to build policy briefs and simulate counterarguments helped them anticipate challenges, but the deliberation itself was unscripted, leaving them open to wrecking-ball inquiries about whether Rawls would have approved of their work. Their classmates, experts in both disciplinary content and ethical reasoning, pushed back with questions no AI could anticipate.
By the end of the summit, I didn’t need to make the connection to Locke for them. They were making it themselves. In our post-deliberation debriefing, students said liberty wasn’t about having AI as an option to use or ignore. Real liberty in working alongside a machine came from reasoning with it, reflecting critically on its outputs, and making decisions they could stand behind. AI hadn’t done the thinking for them; it had made their thinking sharper.
One More Loop: Teaching with AI, Deliberately
The decision loop isn’t just a student exercise in judgment. It’s a space for teacher deliberation, too. To borrow Eric Hudson’s insight again: the urgency of teaching AI isn’t about skills students will need someday; it’s about how AI is already transforming how they think, write, and learn today. And we are just as engaged in this shift as they are. We must also adapt: creating new models, responding with agility to developments we can’t yet predict, and staying as deep in the practice as our students, investing ourselves in a sea change in which we are already implicated.
Writing this article was its own deliberation. I prompted AI, engaged its outputs as a thought partner, and gamed out arguments in dialogue with it until they felt true to the story I wanted to tell (all within TIE’s Artificial Intelligence Article Guidelines). At each step, I practiced the same democratic process I ask of my students: reasoning with the machine, refining its recommendations, and deciding what served my purpose and values. Beyond structural and grammatical suggestions, I needed to take complete creative control at every point, because AI couldn’t capture nuances like our philosophical framing or the cheekiness of that one student who will undoubtedly be the primary challenge in my International Baccalaureate Theory of Knowledge class next year. In that sense, teaching with AI is itself a loop of deliberation: a collaborative, human-machine conversation where meaning and decisions are co-constructed.
When we teach students to be the monkey in the middle, we’re inviting them into that same space of ethical accountability. We’re asking them to build conviction, courage, and compassion—to hold the ball, win the game, and keep their humanity at the center of everything they create.
Key Takeaways for Educators
Keep students at the center. Use metaphors, routines, and protocols to consistently frame AI as a tool that demands human attention, decision-making, and agency.
Ground AI use in disciplinary ethical frameworks. Integrate lenses from philosophy or social theory to ensure students use AI with fairness, justice, and purposeful conviction.
Pair AI-supported tasks with “AI-proof” assessments. Design final products that require live reasoning, discussion, or application beyond what AI alone can generate.
Emphasize design for flourishing. Help students use AI to design learning pathways that meet their needs—clarifying texts, simulating arguments, or translating rubrics—while holding them accountable for synthesis and the final product.
Model your own decision loops. Show students how you use AI as a thinking partner, not a shortcut, to reinforce accountability and transparency in ethical AI integration.
References
Hudson, E. (2025, April 27). AI and the teaching of writing. Substack. https://open.substack.com/pub/erichudson/p/ai-and-the-teaching-of-writing?utm_campaign=post&utm_medium=web
World Economic Forum. (2025, January). Future of Jobs Report 2025: 78 million new job opportunities by 2030 but urgent upskilling needed to prepare workforces. https://www.weforum.org/press/2025/01/future-of-jobs-report-2025-78-million-new-job-opportunities-by-2030-but-urgent-upskilling-needed-to-prepare-workforces/
UnconstrainEd. (n.d.). Retrieved June 25, 2025, from https://unconstrained.co/
Bill Tolley is an international educator and curriculum leader who serves as Head of Individuals & Societies, Theory of Knowledge coordinator, and Personal Project coordinator at the International Community School of Addis Ababa. He helps students and teachers navigate ethical AI integration, interdisciplinary inquiry, and human-centered learning across all his roles.