410: Professor Uses Trojan Horse to Catch AI Use, And Why He Bans AI
Plus, a now-vanished story from the University of Oregon. Plus, schools using AI to scan college admissions essays. Plus, an ask.
Subscribe below to join 5,025 (+15) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free. Although, patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year, suggested. You can also support The Cheat Sheet by giving through Patreon.
Professor Catches Students With Trojan Horse, Shares Why He Bans AI
Huffington Post has an essay from Will Teague, an Assistant Professor of History at Angelo State University in Texas.
It’s a compelling must-read for all educators, in my opinion.
To Start, One Error
The headline is:
I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.
They were not shocking. But that is only a small part of the article, which, again, I highly recommend reading. I'll review some of its details here, but to clear it off early: Teague gets only one thing wrong — kind of. He writes about AI detection:
Plagiarism detectors have and do work well enough for what I might call “classical cheating,” but they are notoriously bad at detecting AI-generated work. Even a program like Grammarly, which is ostensibly intended only to clean up one’s own work, will set off alarms.
Maybe this is just unclear writing, but it's also possible he does not understand that plagiarism detectors do not catch AI-created text because it's not plagiarism, or that Grammarly is itself an AI generator. It writes for you. It should be flagged by an AI detector. It's AI. Citing "even a program like Grammarly" and talking about plagiarism detection in this context lead me to believe he's very, very behind the times on the technology.
But I’m casting all that aside for the meat of his essay.
The Trojan Horse
Setting up his Trojan horse experiment, he writes:
Since the spring semester of 2023, it has been apparent that an ever-increasing number of students are submitting AI-generated work. I am no stranger to students trying to cut corners by copying and pasting from Wikipedia, but the introduction of generative AI has enabled them to cheat in startling new ways, and many students have fully embraced it.
Yup.
To be more “deliberate” about AI use in his classes, our professor created an assignment with a Trojan horse — text a human reader won't notice but an AI will ingest. Usually it's something such as white text on a white background: invisible to us, but a computer reads it just fine, and it rides along when the assignment is copied and pasted into a chatbot. Some teachers use the trick to tell the AI to use the word “banana” or “Argentina” in the answer, or some such thing.
In this case, Teague assigned writing about a book about an 1800 slave rebellion. His Trojan horse asked the AI to examine the book from a Marxist perspective. And:
I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.
Correction: at least 39% of the submissions were AI-generated. He only caught the students who fell for the Trojan horse or who confessed when given the chance, and the 14 confessions alone tell us some students used AI without getting horsed. Any student who ran the output through a paraphrasing “humanizer,” or who noticed the out-of-place Marxist analysis the AI had added and cut it, isn't in the 39%.
Also, 39% doesn’t count the other kinds of cheating, such as just farming the assignment out to Chegg, or employing an essay writer, or any of a hundred other ways to cheat that assignment.
Nonetheless, is a 39% rate of obvious cheating a shocking number? Not to me. Not at all. But our professor says:
The percentage was surprising and deflating.
This too is important:
I received several emails and spoke with a few students who came to my office and were genuinely apologetic. I had a few that tried to fight me on the accusations, too, assuming I flagged them as AI for “well written sentences.” But the Trojan horse did not lie.
Students, and others, have been sold a pile of bologna that AI detection flags sentences that are too well written. And that students can just insist they wrote something and pin their defense on how AI detection is faulty. But whatever, in this case, there’s evidence.
I know some in education find this Trojan horse approach to be unethical, a form of gotcha. It will not surprise you that I find no issue with it whatsoever. I’m not worried about catching students who are cheating. If you do the work of learning, you won’t have to worry about it.
And I think most people feel this way. At my presentation a few weeks ago at OLC, someone asked whether these Trojan horse things worked. I said they did, but some schools consider the tactic unethical. I was asked why. I’m not sure I had a good answer.
In any case, back on track. This great essay really has two parts. One is the Trojan horse and I’ll end that part with our professor asking:
Am I a professor or an academic policeman?
And deciding:
But for my students, I decided to not punish them. All I know how to do is teach, so that’s what I did.
For those he caught or who confessed, he assigned another piece of writing. At least one student used AI to complete it. I suspect several just got smarter about not being caught the second time around.
I’ll also argue here that if you catch cheating and no sanctions follow, you are indeed teaching. Although the lesson is not a good one.
On AI Use
The second part of this essay is, I think, more important. But that's probably because the story of students cheating with AI, protesting the outcome, and not being held accountable is trite and depressing. I've run dry on words about it. In this second act, Professor Teague gets to why AI in education is, in his view, not a good idea.
Because it’s more pedagogy, practice, and philosophy, I’m going to quote less of it here. But please, again, go read it. I find it important and at the core of why cheating — and cheating with AI — is so problematic.
Teague cites the advice he and other teachers are being given not to ban AI, but to teach students how to use it:
There’s a lot of talk about how educators have to train students to use AI as a tool and help them integrate it into their work.
But he does not agree and writes:
Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective.” Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.
I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, “I thought it sounded smart.”
How do I assign students an AI-generated essay for assessment if they don’t have the basic knowledge to parse said essay? I can’t and I won’t.
Teague says he cannot force his students to learn. Or, I’d argue, care. But that he:
won’t be complicit in exposing them to even more AI in my classroom.
Strong stuff.
As I try to wrap this up, this is important too:
I have no doubt that many students are actively making the decision to cheat. But I also do not doubt that, because of inconsistent policies and AI euphoria, some were telling the truth when they told me they didn’t realize they were cheating. Regardless of their awareness or lack thereof, each one of my students made the decision to skip one of the many challenges of earning a degree — assuming they are only here to buy it (a very different cultural conversation we need to have). They also chose to actively avoid learning because it’s boring and hard.
Right. I mean, I’m less sure that students were really confused as to whether submitting an AI-generated essay for a grade was cheating. But that’s not the point. Cheating in all forms — including AI — is a choice “to actively avoid learning because it’s boring and hard.”
And that is a really serious problem.
There’s more in the essay, but I will end with this because I think it’s a heartbeat point. Teague writes:
a handful [of students] said something I found quite sad: “I just wanted to write the best essay I could.” Those students in question, who at least tried to provide some of their own thoughts before mixing them with the generated result, had already written the best essay they could. And I guess that’s why I hate AI in the classroom as much as I do.
Students are afraid to fail, and AI presents itself as a savior. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.
Further comment from me would only dilute his point.
Colleges Increasingly Using AI in Admissions, Including Reading Essays
Fortune, via AP (subscription required), has a story about colleges increasingly using AI to speed up and save money on their admissions processes, including scanning and scoring admissions essays and personal statements.
To claim a little credit, I wrote about this in 2021, in a piece published in USA Today. Four years ago, I was creeped out by the idea of schools outsourcing such important, personal decisions to a machine. I still am. But I'm not going to get into that too much here, because The Cheat Sheet is about academic fraud and integrity.
Setting aside whether AI can be trusted with such important decisions, this development, the spreading use of AI in college admissions, relates rather directly to academic cheating. My confident guess is that schools that use AI in this way are inviting student misconduct, creating more space for it.
I think that’s true because the more space there is between student and educator, and between student and academic institution, the more likely cheating becomes.
Personal connections, in other words, limit cheating because the misconduct has added moral, personal weight. It’s not anonymous and nameless. Cheaters aren’t cheating a quiz or course, they’re cheating a teacher or school they know and (hopefully) respect. Consequently, the more that schools treat students like data points to be efficiently processed instead of as individual people, the harder it is for those people to morally connect to the institution.
Shorter: if the school is going to treat students like something to be handled with maximum efficiency and limited human engagement, students will be likely to approach the school the same way. The school is setting the stage from the first hello.
Additionally, when it’s known that schools are outsourcing the review of student writing — not entirely, but at least partly, according to the article — it’s hard to really hammer students for not putting effort into their writing and other scholastic work.
Shorter: if you can’t be bothered to read my writing, I can’t be bothered to write it.
When school leaders brag about how much faster and more efficient using AI is, it will be difficult to understand why their students should not share and emulate those priorities.
Hard for me to disagree, to be honest.
University of Oregon Student Paper Publishes, Then Pulls Story with Cheating Advice
Yesterday, a friend and reader sent over this link to a story at The Daily Emerald, the student paper at the University of Oregon. Today, the story is gone. And I think — I hope — this is good news.
The URL for the story makes it really, really clear what it was about:
https://dailyemerald.com/175584/promotedposts/smart-ai-tricks-for-college-papers-that-beat-turnitin/
We’ve seen cheating providers slip their services into student papers before, selling fraud as student-friendly life hacks. It’s very little surprise that another one made its way to, and into, The Daily Emerald.
What I hope happened is that someone saw it, recognized that it was unworthy of actual journalism and would probably entice students to take serious risks, and pulled it.
I did ask the editor at The Daily Emerald about it. If they respond, I will let you know.
But if they did see it and yank it — good for them. Seriously. Good stuff.
Your Support
This is Issue 410 of The Cheat Sheet, which means I’ve been writing it for years now, since 2021.
The past two years, around back-to-school time, I’ve made one or two public asks for financial support. This year, I didn’t. I skipped it because I hate doing it. But now, as the year winds down, I am asking again.
Writing and running The Cheat Sheet does take some money, although not very much. Paying for a newspaper or website subscription here and there. No big deal. But what it does cost is time — lots of time. Two full-size issues a week take just about a full day's time, six to eight hours. And increasingly, the opportunity costs are significant.
So, if you can become a paying subscriber, thank you. It’s a terrible business model, as I remind you that there’s no difference between being a paid subscriber and a free one — none at all. The only benefit is the warm feeling you may get knowing you’re helping out, making this work.
Finally, since this is about our work, I want to thank our three volunteer editors and proofreaders. They read and edit every single issue, twice a week, for free. I am absolutely serious that I could not do this without them. No chance. They need Starbucks gift cards or something. I appreciate them.
Thank you, as always, for reading, sharing, and supporting The Cheat Sheet, and what we are trying to do. I appreciate you too.