BBC Must-Read on Student Cheating With AI
Plus, tickets open for ICAI Conference in March. Plus, a book worth a look.
Issue 320
Subscribe below to join 4,176 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 41 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
BBC Covers AI Cheating - a Must Read
The BBC had an article recently on a student who “massively regret[s]” using AI to cheat in college, and, as you may have come to expect, I have thoughts.
First, good for the BBC for covering academic misconduct. It’s still pretty rare and when a top, credible outlet does it, it’s worth a nod.
The article starts by setting up that a university student “felt incredibly stressed and just under enormous pressure” and:
In her desperation - and also suffering with Covid while facing the deadlines - she said she turned to artificial intelligence (AI) to help her write one of the essays.
It’s minor, but I do not love that. I do not love that because it’s designed to create empathy for the transgressor — the person who attempted to commit academic fraud.
I am not heartless. I do understand pressure and desperation, and I also know that feeling helpless does move some students to seek and use shortcuts they otherwise may not. That happens.
But I by no means believe it to be the most significant or most common motivator for misconduct. For many, my sense is that the desperation and stress are rationalizations. So, putting it out there that cheating happens when students come under stress may be true, but it also enables a key internal, self-described justification for cheating.
It’s why cheating providers always market their services as stress reduction and seek to commiserate with students about how awful things are in their schools and classes. In other words — whether they are or not — getting students to believe they are under unique and oppressive stress is really helpful in getting them to make bad choices.
I also understand, however, that asking the BBC to find a student who will admit to cheating with AI simply because they did not want to do the work — that’s probably harder. Even students who admit to cheating will likely have a plausible rationalization. And stress is a good one.
Anyway, the BBC piece goes on:
Hannah, not her real name, is now warning others about the potential consequences of using generative AI to cheat at university.
She faced an academic misconduct panel, who have the power to expel students found to be cheating.
But there are several other servings of meat to this pretty unbelievable story, including:
Hannah's misuse of AI was discovered when her lecturer routinely scanned her essay using detection software.
You don’t say.
Wait, you mean an AI detector actually caught someone using AI to cheat? Impossible. If you judged by news coverage and social media disgust, AI detectors are only supposed to make false accusations and ruin students’ lives.
Then, from the student, we get:
Hannah added: “I think in my head initially I thought, 'just deny it, don’t say anything', but then I saw on the screen an AI percentage and it was quite high and I just lasted three minutes before I broke down and said I had used AI to help me finish the essay."
So, she used AI to cheat. When caught, she was going to deny it. But the AI detection score was “quite high.” So, she confessed.
It’s a really, really good thing that some educators and some schools refuse to use AI detectors. Best to keep those eyes closed real tight.
But here’s the kicker. The BBC reports that:
She was cleared as a panel ruled there wasn’t enough evidence against her, despite her having admitted using AI.
I’m sorry, what? She confessed to cheating and the school decided there was not enough evidence? So, no sanction whatsoever other than perhaps the fear of possible sanction.
A week or so ago, Bloomberg dragged us through 1,500 words on the horrors of “false accusations,” in which the student who said she was falsely accused got — wait for it — a warning (see Issue 318). Now, the BBC has a student who admits to using AI to cheat, and she gets — wait for it — apparently nothing.
But I’m not even done here. Immediately after informing readers that there “wasn’t enough evidence” to take action, we get:
Hannah said she thinks it was a slap on the wrist designed to serve as a warning to other students.
A warning? What is the warning? Don’t use AI to cheat because if you do we will do nothing whatsoever?
That’s going to work. Well done.
Finally, the BBC treats us all to this gem of a quote and a top nominee for 2024 Academic Integrity Quote of the Year. In a section of the article titled “Embrace It,” is this:
Some universities ban the use of AI unless specifically authorised, while others allow AI to be used to identify errors in grammar or vocabulary, or permit generative AI content within assessments as long as it is fully cited and referenced.
At a bar on the outskirts of Canterbury students here know the limits, and say they only use AI as an aid, like they might a search engine.
A student called Taylor told us: "You’ve got to embrace it. You can ask it questions and it helps you out."
First, “they only use AI as an aid.” Sure.
But the real catch is Taylor who, in a bar mind you, says we have to embrace AI because “it helps you out.” Ah. Good. Well that’s that then. Thanks, Taylor.
I wonder if Taylor thinks we should embrace steroids because, you know, you can take them and they help you out. Or if they think we should embrace auto theft because you can just take a cool car and, you know, that can help you out.
Amazing.
The story has other great bits too, including where someone actually says that UK Universities:
all have codes of conduct that include severe penalties for students found to be submitting work that is not their own
The codes may indeed have penalties. They may even be severe. But they are also quite literally useless.
Tickets and Registration Open for ICAI Annual Conference — March 6-9, Chicago
Tickets and registration are now open for the annual conference of the International Center for Academic Integrity, March 6 to 9, in Chicago.
Programming and events for the conference promise to be well worthwhile, though I am deeply worried about their choice of keynote speaker. That guy could be a real problem.
It’s me. I am the speaker. And I apologize already.
A Book That’s Worth a Look
A new book is on its way to the marketplace and maybe, with any luck, to your bookshelf too. It’s called:
The Opposite of Cheating: Teaching for Integrity in the Age of AI
The book, due out in March but available to order now, is by Tricia Bertram Gallant and David Rettinger.
I know both authors and have found them to be very thoughtful and highly reasonable on integrity issues. As such, I look forward to reading what they have to say. You may want to too.
As an academic myself, I think some students who use AI are lazy, but many students are neurodivergent and face real difficulties with the traditional essays, write-ups, and papers required in many courses. Further, I believe that faculty who design assignments that can be done by AI are lazier than the students using the tools. It shows that they are not updating themselves and their course structures, or that their universities are too old-school to let them make those changes. Disciplinary action is never a solution if you are an uninspiring teacher with no real ideas who sticks with age-old practices. Faculty with zero creativity who are obsessed with their students using AI tools should, I believe, rely on old-fashioned in-person examinations. Embrace generative AI in everything, from curriculum design to evaluation. Trust me, you will see a tremendous increase in student motivation, classroom interaction, and overall learning, along with an improvement in faculty ratings.

I teach organizational behavior, and in my class I encourage my students to use ChatGPT to assist them with class participation. I have noticed improved participation, even from students who didn’t feel confident earlier; the interactions have more depth, and the students genuinely learn the topic by the end of class. For a different course, I asked my students to write a paper on one of three topics, using any resource they liked provided they declared and cited it appropriately. There were broadly three kinds of students. The first used three or four prompts along the lines of “write me an essay on this” and submitted the result; they didn’t score well. The second took the help of AI to define the problem and contextualize the essay in a particular domain, then searched the literature on their own, read and summarized the papers, drew insights from them, and finally wrote a first draft themselves, which an AI tool later polished; this category got the highest marks. A third category worked hand in hand with AI tools to generate and polish the essay; they got average marks. All I am saying is that instead of fighting an inevitable change, we need to embrace and adapt to it.