

Education, Ethics, and Cheating in the Age of ChatGPT

By Peter Merrick

In November 2022, OpenAI released ChatGPT on the world with little guidance about what exactly it could do, what problems it could solve, how it should be used, or by whom. Its uptake has been phenomenal. Usage peaks during term time and drops off in the holidays, from which we can surmise that a great many users are students (Carr, 2023). Teachers are enthusiastic too, at least when it comes to lightening their administrative workload, though less certain about how to get the most from it. A teacher, for instance, might use ChatGPT legitimately to generate a lesson plan, yet somehow there is a suspicion that when students use it, it is in some respect “cheating” (Staake, 2023; Roose, 2023).

The reality is that if “everyone” is using ChatGPT, it is because it is a very useful tool. What is needed, then, are guidelines on how it should be used properly. We are specifically interested here in its use by students for homework and coursework, what this means for assessment, and the question of ethical versus unethical use (i.e., cheating).

With respect to the impact on assessment, the International Baccalaureate (IB) has published an insight into its emerging process and attitude toward ChatGPT, which it acknowledges “will become part of everyday life” (Glanville, 2023).

The IB states that using ChatGPT is the same as drawing on a conventional source (like a website or a book), and that including anything it provides while passing it off as a student’s original work is unethical. In its guidelines, it falls to the teacher to police proper referencing when vouching for the authenticity of coursework. Reasonable as this sounds, it misses the fact that while ChatGPT may at first appear to be a resource like any other that may be referenced, it is not an author in the sense we conventionally understand. No human being has ever before written the exact words in the exact order that a ChatGPT response produces. It is not necessary to go into how the technology works; suffice it to say that it is not an author but a probabilistic assembler of words. A reference citation therefore becomes problematic. Most likely ChatGPT has been used throughout the written piece, and what needs to be declared is not a reference but an acknowledgement that it has been used.

However, if “everyone” is using it, what purpose does the acknowledgement serve? The IB’s advice goes on to state that ChatGPT answers simple questions very well, and that it is in solving more complex problems creatively that students will excel. Further, it states that “approved” use amounts to some combination of clever prompt refinement and vigilance about inherent bias. Because this falls far short of describing how to integrate ChatGPT into a school, where a repeatable process can be defined, shared, evaluated, and improved over time, I would like to suggest a more nuanced perspective on how it can be used in schools.

Ethics in an AI world

ChatGPT itself has no concept of cheating, which invites us, as a thought experiment, to adopt the same position for a moment and consider how our attitude might change if we abandoned the idea that ChatGPT use has any ethical dimension at all. Appeals to ethics may fall on deaf ears: if education is viewed as a game and fellow students are seen to be “cheating” to win, people may pay lip service to the message and then behave in whatever manner represents the path of least resistance (Weinberg, 2023). Instead, we take it as a given that students will use ChatGPT and that they will do so either effectively or ineffectively. Effective use demonstrates intellectual engagement with the material at hand; ineffective use is simply submitting the first answer the bot gives.

The Educational Journey

Think for a moment of the student journey through education as being like a character’s “arc” in a film. What makes a film interesting is how we see the characters develop over time, what they learn, and how they grow. In other words, it’s the journey not the destination. How does this relate to using ChatGPT?

To get something meaningful out of the technology, the student must engage in a conversation known as a thread: a series of questions and responses. The entire thread, not simply the final answer, can now be included in the student’s submission. Until now, assessment has been based solely on the final essay submission, which has two components:

  1. the logic of the argument
  2. the quality of the writing

Assessing a student’s work solely on the quality of the writing is no longer defensible in the age of ChatGPT, because one thing it certainly does well is write. However, there are tasks at which it fails, either completely or subtly (Mollick, 2023). It cannot spot a logical fallacy or “think” tangentially. This is what students are invited to do, and it is surely what the IB means when it says students will solve problems creatively (Weinberg, 2023).

If we can’t rely on assessing the quality of the writing, we are left with assessing the logic of the argument. Given that ChatGPT makes mistakes, and that no one can predict where or when, catching these errors is an important skill. Building an argument on ChatGPT’s “hallucinations” undermines the quality of the work. Cross-referencing “facts” becomes vital, and so we begin to see how assessment will change: the objective of the task becomes not only to build the argument but to build it on the basis of verification.

Therefore, we can imagine a world where we replace stylistic assessment with an evaluation of the work undertaken to arrive at the final submission. This can be termed “show your work,” a concept familiar from mathematics that can now be applied across the humanities. The best thing about it is that the mechanism allowing teachers to do it is built right into the tool.

The makers of ChatGPT could not have made it any easier to assess the learning journey, because a thread exists as a raw artefact that can be shared through the simple inclusion of a hyperlink. Threads cannot be altered, tampered with, or forged. It is difficult to imagine a more ideal learning artefact. Yet there is a problem: much as we might wish otherwise, the learning journey captured in a thread is by rights a messy affair. The best of them are filled with tangents, as learners disappear down rabbit holes of curiosity only to catch themselves and return to the main question under consideration.

A teacher might not want to assess a raw thread, so in addition to the invitation to read the whole learning journey, a curated (edited) thread is included as part of the student submission. Here students can omit enquiries that led them down blind alleys or trim responses that are overly verbose. The ability to reflect on and edit a thread is itself a skill worthy of assessment. The collation of the whole submission used to arrive at the final essay can therefore replace the assessment of writing style.


What has been illustrated is that there is an alternative to adopting a defensive stance against AI for fear of students cheating. Instead, we can reframe the challenge by simply removing the ethical question from the equation. Students are going to use ChatGPT; our task as educators is to model the difference between using it well and using it ineffectively. The best possible outcome is for students to produce work that demonstrates critical thinking and logical reasoning. Students need to know that ChatGPT is not to be deferred to as the font of all knowledge. It is there as a starting point for intellectual enquiry. It is there to be challenged, which turns out not to be a battle between person and machine but rather a challenge against ourselves, with the technology acting as a sounding board that motivates us to go deeper.

A note: this article was not written with ChatGPT. I asked ChatGPT to try to improve it but did not accept any of its suggestions.


References

Carr, David F. “ChatGPT Starts to Bounce Back in US as School Year Resumes.” September 7, 2023.

Glanville, Matt. “Artificial intelligence in IB assessment and education: a crisis or an opportunity?” February 27, 2023.

Mollick, Ethan. “Centaurs and Cyborgs on the Jagged Frontier.” September 16, 2023.

Roose, Kevin. “Don’t Ban ChatGPT in Schools. Teach With It.” January 12, 2023.

Staake, Jill. “20 Ways Teachers Can Use ChatGPT To Make Their Lives Easier.” March 13, 2023.

Weinberg, Justin. “‘Am I the unethical one?’ A Philosophy Professor & His Cheating Students.” May 25, 2023.


Peter Merrick holds a doctorate in computer science and is a certified secondary teacher in the United Kingdom. With a rich background, he has lectured on ethics and artificial intelligence at the University of East Anglia. Peter brings over a decade of experience as a business analyst, having worked with government agencies and private industry giants such as the revenue service, the Department of Work and Pensions, Vodafone, and Telefonica. Upon relocating to Berlin, he dedicated his expertise to teaching English literature and sociology at various international schools, including the bilingual Nelson Mandela School. Peter's diverse journey combines academia, technology, and education, making him a valuable contributor in multiple fields including organizational psychology.

LinkedIn: peter-merrick-ph-d
