You know the AI panic’s gone too far when universities start using AI to accuse humans of using AI.
At the Australian Catholic University (ACU), thousands of students just lived through an episode of Black Mirror: Academic Edition. Nearly 6,000 students were flagged for “AI cheating” last year, and the university used an AI tool to make those accusations.
Yep, a machine was basically calling people out for doing exactly what it was doing.
The chaos started when Turnitin, the good old plagiarism detector, added an “AI detection” feature in 2023. Turnitin itself warned that the feature “may not always be accurate.”

ACU used it anyway.
Take Madeleine, a 22-year-old nursing student: she suddenly got an email accusing her of misconduct in the middle of job applications. Her grades were withheld, her reputation tanked, and it took six months to clear her name.
“It was already a stressful enough time of my life,” she told the ABC.
“I didn’t know what to do. Do I go back and study or just give up?”
After months of backlash, ACU quietly scrapped the tool in March. Their deputy vice-chancellor later admitted investigations “weren’t always as timely as they should have been.”
Understatement of the year, Professor.
Universities wanted AI to keep students honest. Instead, it exposed the system’s own hypocrisy: policing creativity with something that doesn’t even understand it.
Look, the problem isn’t AI itself. It’s people using AI without understanding what it actually is.
Universities love shiny tech buzzwords. So when Turnitin said, “Hey, we’ve got an AI detector!”, they jumped on it like it was a moral compass instead of what it really is: most likely a probability meter with anxiety issues.
These detectors don’t “know” anything; they just guess based on patterns. They see a word like “thus” or a clean sentence structure and go, “Yep, this smells like robot.”
It’s like accusing someone of cheating because they spell too well.
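For the curious, here’s a toy sketch in Python of that kind of reasoning. To be clear: this is my own invented heuristic, not Turnitin’s actual (proprietary) model, and every word list and threshold is made up for illustration. The point is the shape of the logic: count surface tells, punish even-looking sentences, spit out one confident-sounding number.

```python
import re
import statistics

# Hypothetical list of "robot words". Real detectors use fancier features,
# but the vibe is the same: formal prose raises the score.
FORMAL_TELLS = {"thus", "moreover", "furthermore", "additionally", "delve"}

def ai_probability(text: str) -> float:
    """Return a made-up 'AI likelihood' score between 0.0 and 1.0."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0

    # Heuristic 1: "formal" connectives smell like a robot.
    tell_rate = sum(w in FORMAL_TELLS for w in words) / len(words)

    # Heuristic 2: suspiciously even sentence lengths (low "burstiness").
    lengths = [len(s.split()) for s in sentences]
    spread = statistics.pstdev(lengths) / (statistics.mean(lengths) or 1)
    uniformity = max(0.0, 1.0 - spread)  # 1.0 = eerily uniform prose

    # Blend two hunches into one confident-sounding verdict.
    return min(1.0, 0.5 * uniformity + 50 * tell_rate)

essay = ("Thus, the evidence suggests a clear pattern. Moreover, the data "
         "supports this conclusion. Furthermore, the findings are robust.")
print(f"'AI probability': {ai_probability(essay):.0%}")  # prints 100%
```

Feed it a tidy paragraph full of “thus” and “moreover” and it screams robot. Feed it messy human rambling and it shrugs. That’s the whole trick, and it’s exactly why careful, formal writers keep getting caught in the net.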
The hypocrisy here is off the charts. They’re using AI tools built on data scraped from the internet without consent to accuse students of using AI built on data scraped from the internet without consent. That’s a snake eating its own syllabus.
Sure, some students are gaming the system with ChatGPT and Gemini, editing outputs and blending prompts, but that’s the challenge of education now: adapt, don’t panic.
Teachers don’t need to be coders; they just need to understand the limitations of these tools.
THE WHY: IGNORANCE
This one’s a 2.5-star story because the second you let a hallucinating machine decide who’s guilty, you’re not teaching integrity; you’re automating paranoia.
“Look, most of us want a normal life without any drama, but life in this world is always strange and uncertain.
I don’t need your email. I don’t want to bug you with a billion notifications. All I ask is this: if you felt something here, if this made you think, laugh, or even shake your head in disbelief, just bookmark ‘Averagebeing.com’ and come back tomorrow.
That’s it. No strings. Just you, me, and this stupid world.”