
ARTIFICIAL INTELLIGENCE

Promoting Responsible Generative AI Use in Schools

By Megel Barker
09-Apr-25

The Growing Challenge of Artificial Intelligence (AI) in Schools

A few months ago, a middle school teacher walked into my office, frustrated. She had just received an essay from a student—one that was suspiciously well-written. The vocabulary was advanced, the structure was flawless, and the ideas were nuanced in ways that did not match the student’s previous work. The teacher suspected AI involvement. When she confronted the student, he insisted that he had written it himself, with only “a little help” from an AI tool to fix some grammar.

Situations like these are becoming more common in classrooms worldwide. Teachers, students, and parents find themselves in an ongoing tug-of-war over what constitutes acceptable AI use. Educators worry about originality and learning integrity, students see AI as just another tool in their arsenal, and parents—many of whom use AI themselves—want their children to be prepared for the future.

We are at a crossroads. The way we structure AI use in schools today will shape not only academic policies but also the ethical decision-making of an entire generation of learners.

The question is: How do we provide clarity without stifling innovation?

A Lack of Clear Boundaries

One of the biggest challenges with Generative AI in education is that traditional plagiarism policies don’t quite apply. Unlike copying from a website or another student’s work, AI-generated content doesn’t have a clear source—it’s a synthesis of countless inputs. This makes originality difficult to assess.

At the same time, AI tools like ChatGPT, Grammarly, and QuillBot are already part of how students learn. Some use them to clarify ideas, improve their writing, or generate creative inspiration. Others rely on them more heavily, sometimes to the point of letting AI do the thinking for them.

This raises some critical questions:

  • How do we define AI-assisted work versus AI-generated work?
  • What role should AI play in developing student skills?
  • Where do we draw the line between learning and dependence?

Educators and parents alike need a simple, effective way to classify AI use—one that provides structure without being overly restrictive.

The AI Assessment Scale: A Case for Simplified Guidelines

At a recent Independent Schools Association (ISA) workshop led by Tim Cook, I was introduced to the AI Assessment Scale (AIAS)—a five-tier framework designed to help schools determine AI’s role in assessments. The model offers a structured approach that could be valuable for schools looking to explore AI integration in various contexts.

While the AIAS is a solid framework for some educational environments, I found that some middle school students, particularly those new to AI, benefitted from simplified boundaries to help them make ethical choices without feeling overwhelmed. This insight led me to develop the AI Indicator—a system for Generative AI use in schools that provides a clear, straightforward set of guidelines tailored to these students’ needs.

The AI Indicator: A Simple, Actionable Approach

Instead of a complicated multi-tiered model, the AI Indicator categorizes AI use into three clear levels. Initially, I framed the AI Indicator using a traffic light system, drawing on its intuitive association with rules and decision-making. The idea was to create a clear and familiar way for students, teachers, and parents to understand AI use policies at a glance. However, as I refined the framework, I realized that the traffic light metaphor carried unintended implications—red often signals an absolute stop, while yellow suggests hesitation or caution. Since the goal of this system is not to restrict AI use but rather to clarify its role in learning, I transitioned to using colored circles instead.

This shift removes any subconscious “stop-go” messaging and presents AI use as a structured continuum rather than a rigid set of restrictions. The result is a more neutral, flexible, and visually clean approach that integrates seamlessly into school policies, lesson plans, and student-facing materials while reinforcing the idea that AI is a tool to be used intentionally, not feared.

Additionally, the shift allowed for a more meaningful interpretation of the colors themselves. Red remains a clear symbol of tasks that require personal effort and original thinking, while Yellow serves as a middle ground, where AI can be used but with accountability. However, the most exciting transformation was in Blue, which now represents “blue-sky thinking”—a term that embodies creativity, open exploration, and limitless possibilities. In this category, AI is not just permitted but encouraged as a tool to enhance innovation, experimentation, and deeper learning. Instead of signaling “Go,” as a green light would, blue suggests expansion and possibility, reinforcing that AI, when used intentionally, can be a catalyst for new ways of thinking, learning, and creating.

Red – No AI Use Permitted

  • Tasks that require original thought and personal effort.
  • Examples: Handwritten essays, personal reflections, in-class assessments, critical thinking exercises.
  • Why? These tasks develop fundamental skills that should not be influenced by AI.

Example Task: Personal Reflection Essay on a Life-Changing Experience

  • Scenario: Students write a handwritten personal reflection about an event that shaped their character.
  • Why It’s Red: This assignment assesses authentic personal expression, original thought, and writing skills developed without AI assistance.
  • Expectation: Students must craft their essays independently, without AI-generated content or AI-assisted grammar suggestions.
  • Teacher Guidance: “This assignment is a reflection of your personal experiences and growth. AI cannot capture your unique perspective, so this is a Red task—completed without AI assistance.”

Yellow – AI Use Allowed with Citation and Explanation

  • AI can assist in idea generation, research, or minor refinements, but students must document how they used it.
  • Examples: Research papers, brainstorming sessions, peer reviews.
  • Why? This category helps students use AI responsibly without letting it replace their effort.

Example Task: History Research Paper on the Causes of World War I

  • Scenario: Students must write a structured research paper on the key factors leading to World War I, analyzing primary and secondary sources.
  • Why It’s Yellow: AI can be used for idea organization, summarizing complex sources, and grammar refinement, but students must clearly document how they used AI.
  • Expectation: Students must submit a reflection statement explaining how AI supported their research and writing, including specific prompts used and whether AI-generated content was edited or adapted.
  • Teacher Guidance: “You may use AI tools to summarize sources or refine grammar, but you must explain how you used AI in a short paragraph at the end of your essay.”

Blue – No Restrictions on AI Use

  • AI is fully integrated to enhance creativity, skill-building, or exploration.
  • Examples: AI-assisted coding, creative writing with AI input, multimedia projects.
  • Why? Some tasks benefit from AI use without diminishing the student’s role in learning.

Example Task: Creative Writing Project Reimagining a Classic Fairytale

  • Scenario: Students must rewrite a classic fairytale (e.g., Cinderella or The Three Little Pigs) with a modern twist, using AI tools to generate creative ideas, rewrite dialogue, or explore different storytelling styles.
  • Why It’s Blue: The focus is on creativity, experimentation, and AI’s role in enhancing storytelling rather than replacing critical thinking.
  • Expectation: Students are encouraged to use AI freely to brainstorm settings, character development, or alternative endings.
  • Teacher Guidance: “AI can be used as a tool to enhance creativity—experiment with dialogue, descriptions, or alternative perspectives using AI-generated suggestions.”

Benefits for Teachers and Parents

Implementing the AI Indicator doesn’t just help students—it provides much-needed clarity and support for teachers and parents as well.

For Teachers:

  • Eliminates the guessing game: Teachers no longer have to debate whether AI use is appropriate for each assignment. The indicator system clearly defines what’s allowed.
  • Reduces academic misconduct disputes: When students know the AI boundaries, there are fewer cases of “Did they use AI too much?” confusion.
  • Supports differentiated instruction: AI-labeled tasks help teachers guide students based on their individual needs, reinforcing skill-building without stifling exploration.
  • Encourages meaningful AI discussions: Instead of focusing on Generative AI as a threat, teachers can shift the conversation toward responsible AI use and critical thinking.

For Parents:

  • Clarifies expectations at home: Many parents help their children with schoolwork but don’t know where to draw the line with Generative AI. The AI Indicator provides clear guidance.
  • Builds digital literacy: With a structured AI system, parents can have more informed conversations with their children about technology’s positive role in learning.
  • Reduces stress and confusion: No more wondering whether AI-assisted homework will be considered “cheating.” The colored indicators set the boundaries upfront.

Implementing the AI Indicator in Schools

For schools looking to adopt this approach, here are a few steps to consider:

  1. Clearly Label Assignments: Each task should specify whether AI use falls under Red, Yellow, or Blue (one way to encode these labels is sketched after this list).
  2. Teach AI Ethics: Students should understand why these categories exist and how AI fits into academic integrity.
  3. Monitor and Reflect: Teachers should provide feedback on AI use and adjust guidelines based on real-world classroom experiences.
  4. Engage Parents: Parents need to be part of this conversation so they can reinforce ethical AI use at home.
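
These steps work on paper, but the labels can also live wherever assignments already live, such as a learning management system or digital gradebook. The sketch below is purely illustrative; the class and field names are my own invention, not part of any particular platform. It shows one way the three indicator levels could be encoded so that every task carries its level and the accompanying guidance:

```python
from dataclasses import dataclass
from enum import Enum


class AIIndicator(Enum):
    """The three AI Indicator levels described in this article."""
    RED = "No AI use permitted"
    YELLOW = "AI use allowed with citation and explanation"
    BLUE = "No restrictions on AI use"


@dataclass
class Assignment:
    """A task tagged with its AI Indicator level and teacher guidance."""
    title: str
    indicator: AIIndicator
    guidance: str


# Hypothetical entries based on the example tasks above
assignments = [
    Assignment(
        title="Personal Reflection Essay on a Life-Changing Experience",
        indicator=AIIndicator.RED,
        guidance="Complete independently; no AI-generated content or AI grammar help.",
    ),
    Assignment(
        title="History Research Paper on the Causes of World War I",
        indicator=AIIndicator.YELLOW,
        guidance="AI may summarize sources or refine grammar; document your prompts.",
    ),
    Assignment(
        title="Reimagining a Classic Fairytale",
        indicator=AIIndicator.BLUE,
        guidance="Use AI freely to brainstorm settings, dialogue, and endings.",
    ),
]

# Print each task with its indicator so expectations are visible up front
for task in assignments:
    print(f"[{task.indicator.name}] {task.title}: {task.indicator.value}")
```

However a school records it, the design goal is the same: the indicator travels with the assignment, so students, teachers, and parents all see the same expectation before any work begins.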

Next Steps: A Call for Collaboration

The AI Indicator is an evolving framework—one that needs real-world testing and refinement. That is why I am inviting educators, school leaders, and researchers to participate in a multi-week action research study to explore how this model impacts student learning and academic integrity.

Key questions we will explore include:

  • Do students and teachers find the AI Indicator easy to understand and apply?
  • Does it improve transparency and reduce confusion around AI expectations?
  • How does it shape students’ approach to generative AI use in their learning?

Participating schools will collect feedback through surveys, focus groups, and teacher reflections. The goal is to refine this model based on real classroom experiences, ensuring it is both practical and adaptable across different school settings.

If you are interested in joining this initiative, let us connect—email me at [email protected].

Final Thoughts

The rise of AI in education is a defining moment. Schools can either scramble to keep pace with rapid change or step forward to lead the conversation. The AI Indicator is not just a framework; it is a commitment to clarity, integrity, and innovation in how we teach and learn.

Now, I turn the conversation over to you:

  • How is your school currently handling AI use?
  • What challenges have you faced in setting AI policies?
  • Would this structured system help clarify expectations?

This is our chance to move beyond uncertainty and shape a future where AI enhances learning, rather than undermines it. We cannot afford to wait for policies to catch up—we must be the ones to build them. Let’s create a system that empowers students to use AI wisely, supports teachers in navigating new challenges, and gives parents confidence in their children’s education.

Rather than pitting academic integrity against artificial intelligence—AI vs AI—let us harness the transformative power of principled leadership to create exponential change. This is our moment to redefine the conversation, moving from conflict to collaboration, from restriction to responsibility.

Not AI vs AI, but AI × AI—where artificial intelligence and academic integrity work together to multiply opportunities for learning.

This article, authored by Dr Megel R. Barker, was developed with the support of AI to refine structure and enhance clarity, demonstrating the very principles of responsible AI use that it advocates.



Megel Barker is a school leader with over 30 years of experience in independent, international, and national curriculum schools. He is currently Head of Middle School at TASIS England.