Students banned from using ChatGPT to cheat—now professors are the problem

Artificial intelligence was once the bogeyman of student plagiarism—now it’s the lecturers who are quietly wielding it. As bans tighten for pupils, many educators are offloading lesson prep and feedback to generative tools, and no one told the class.

When Teachers Automate in Secret

It began with a clear edict: learners mustn't touch ChatGPT or any AI helper, lest they compromise their academic integrity. Yet behind the scenes, a growing number of professors have turned to AI to draft lecture notes, assemble quizzes and even generate personalised feedback. A few have been upfront about it: Harvard's David Malan, for instance, deployed a bespoke chatbot for his computer science course. Others, squeezed for time, admit to relying on unnamed generative platforms without reviewing the output in detail or informing their students. The result? A double standard that leaves many young people feeling the rules only apply one way.

A Growing Sense Of Unease

Students are quick to notice when a slide deck or assignment brief feels overly polished, with repetitive vocabulary or odd images pasted in without context. Northeastern undergraduate Ella Stapleton stumbled across a raw ChatGPT prompt in her course materials and promptly filed a complaint, demanding her tuition fees be reconsidered. On sites like Rate My Professors, suspected AI-generated content has sparked a slew of bad reviews, with learners criticising cookie-cutter resources and a lack of genuine engagement. The perception of hypocrisy is hard to shake when pupils are punished for using AI while instructors quietly pass off algorithm-generated work as their own.

Towards An Ethical Framework

Recognising that blanket bans are futile, some universities are crafting new policies built on transparency. At UC Berkeley, any AI-created component must be clearly labelled and undergo human verification before reaching students. Across the UK and Europe, similar guidelines are taking shape, acknowledging that AI will play a role in education provided its use is declared. A Tyton Partners survey found that nearly one in three lecturers now uses AI regularly, yet barely any disclose it to their classes. That gap between practice and policy only amplifies tensions, underlining the need for a shared code of conduct.

Moving Forward: Building Shared Responsibility

As Ohio University's Paul Shovlin argues, the value of AI lies not in the tool itself but in how it is woven into the teaching process. Some forward-thinking academics now host workshops where staff and students explore AI's strengths and pitfalls together, learning to craft prompts effectively and to critique machine outputs. This collaborative approach transforms AI from a hidden crutch into a genuine pedagogical tool, freeing teachers to focus on the human touch: the discussion, mentorship and critical analysis that machines simply can't replicate.

By insisting on open dialogue and mutual guidelines, institutions can rebuild the trust that’s essential to learning. If teachers and students declare their AI usage from the outset, we might finally close the hypocrisy gap and turn what was once a secret liability into a shared asset.