(370) Newspaper: Almost 7,000 UK Students "Caught" Using Generative AI to Cheat
Plus, cases of cheating on Australia's high school competency exams double. Plus, you just can't make this stuff up - a perfect gaffe from the University of London.
Issue 370
Subscribe below to join 4,732 (+15) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, though patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
The Guardian Covers Continuing, Escalating AI-Fueled Cheating in UK Colleges and Universities
We’ve read this before — colleges and universities in the UK are seeing surges in cheating incidents using ChatGPT and other AI-powered bots.
In July of 2023, a newspaper reported that hundreds of students had been formally brought to inquiry over suspected unauthorized use of AI technology to misrepresent their academic work (see Issue 222). Now, with this new reporting by The Guardian, the number is “nearly 7,000.”
From the paper:
Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg
A fun note: in Issue 222, about those hundreds of cheating cases involving AI, I wrote:
The full number is absolutely higher. That, as one may say, is not an iceberg. That is the tip of the iceberg.
Well, that was fun for me at least.
From the story:
The Guardian contacted 155 universities under the Freedom of Information Act requesting figures for proven cases of academic misconduct, plagiarism and AI misconduct in the last five years. Of these, 131 provided some data – though not every university had records for each year or category of misconduct.
From which, the paper found:
A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.
Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.
Some people are going to read that and say that 5 in 1,000 is not a significant problem. Or that even 7 in 1,000 is insignificant. For all the good it will do, let me preemptively say that such a position is ill-informed and dishonest.
That’s because the roughly 7,000 cases are nowhere near the full number of cases or incidents of cheating. For one, the records on which The Guardian bases its story are incomplete and inconsistent. Two, very few incidents of misconduct are discovered at all. Three, few of those that are discovered go through a formal inquiry and consequence process. And four, these are numbers of “proven cases” of misconduct, which represent only a fraction of the formal cases.
In other words, when you count 7,000 stars in the night sky, it’s crucial to remember that you’re not seeing the whole sky, let alone what lies beyond your vision.
It’s interesting that The Guardian also tracked formal cases of plagiarism, which the paper says have decreased significantly since the widespread arrival of ChatGPT and related word-pickers:
In 2019-20, before the widespread availability of generative AI, plagiarism accounted for nearly two-thirds of all academic misconduct. During the pandemic, plagiarism intensified as many assessments moved online. But as AI tools have become more sophisticated and accessible, the nature of cheating has changed.
The survey found that confirmed cases of traditional plagiarism fell from 19 per 1,000 students to 15.2 in 2023-24 and are expected to fall again to about 8.5 per 1,000, according to early figures from this academic year.
I take that with a grain of salt because many educators consider using AI to do student work to be plagiarism, so I’m not sure the breakdown by case type tells us too much, except perhaps when a case is recorded specifically as AI misuse. That classification muddle is more or less confirmed by the fact that, according to the paper:
More than 27% of responding universities did not yet record AI misuse as a separate category of misconduct in 2023-24, suggesting the sector is still getting to grips with the issue.
Since there’s zero chance that 27% of universities simply had no AI-powered cheating, there’s a gap somewhere. Some of it, as mentioned, is probably a classification issue. And some schools just do not want to find AI cheating, disabling AI detection, for example, or buying the hype that AI is inevitable and somehow a progressive education strategy, convincing themselves that turning in AI-drafted answers for college credit is perfectly acceptable.
I’m just saying again that the reported 7,000 figure is nowhere near the actual scale of AI cheating. It’s like trying to estimate the number of fish in the ocean by counting the fish in one random fisherman’s net on a given Thursday.
To which The Guardian rightly reports:
Many more cases of AI cheating may be going undetected. A survey by the Higher Education Policy Institute in February found 88% of students used AI for assessments. Last year, researchers at the University of Reading tested their own assessment systems and were able to submit AI-generated work without being detected 94% of the time.
We covered the survey in Issue 345 and the detection test in Issue 325. For the record, “without being detected” refers to human detection only; the study’s assessment regime did not use AI detection technology.
The Guardian also rightly covers how easy it is for students to try to wash their cheating with so-called humanizers — more AI bots that rewrite the original AI text with the often directly advertised promise of avoiding detection. For getting away with cheating, in other words:
Students who wish to cheat undetected using generative AI have plenty of online material to draw from: the Guardian found dozens of videos on TikTok advertising AI paraphrasing and essay writing tools to students. These tools help students bypass common university AI detectors by “humanising” text generated by ChatGPT.
Dr Thomas Lancaster, an academic integrity researcher at Imperial College London, said: “When used well and by a student who knows how to edit the output, AI misuse is very hard to prove. My hope is that students are still learning through this process.”
I note here that we’re now pinning our assessment validity on hope — “My hope is that students are still learning.” As a wise person once said to me, hope is not a strategy. That’s not Dr Lancaster’s fault; I think he’s right. We are at the hope stage.
The coverage also includes:
Technology companies appear to be targeting students as a key demographic for AI tools. Google offers university students a free upgrade of its Gemini tool for 15 months, and OpenAI offers discounts to college students in the US and Canada.
I’ve been saying this for some time now. AI text providers know way more about who is using their services, and how their services are being used, than we do. If they are “targeting students,” it’s not out of generosity.
So, yeah. We have a massive cheating problem, and it’s only getting worse. But don’t worry, we can hope things will be fine.
Cheating on Australia’s High School Competency Exam Doubles
The Sydney Morning Herald has a story (subscription required) which finds that:
The number of students caught cheating in the HSC has doubled in the past five years, driven by a spike in breaches detected in take-home assessments and plagiarism offences.
Principals say schools are ramping up in-class work and handwritten tasks in a bid to combat risks posed by the rapid rise in generative artificial intelligence and to protect the integrity of the HSC.
Data from the NSW Education Standards Authority (NESA) reveals almost 300 schools registered 1302 malpractice offences in school-based assessments last year, with the bulk detected in take-home written tasks.
The HSC, the Google machine told me, is the Higher School Certificate, the credential awarded at the end of secondary school in New South Wales, Australia.
I love this story because it conveniently allows me to cut and paste my comment from directly above:
So, yeah. We have a massive cheating problem, and it’s only getting worse. But don’t worry, we can hope things will be fine.
Nonetheless, I’d like to direct your attention to the 1,300 malpractice offenses in one year, and to the fact that “the bulk” were in take-home assessments, which ought to be no surprise at all. We are rapidly approaching the point at which integrity and validity part company with remote assessment. Frankly, I think that point has already come and gone.
To make the point, from the coverage:
The 1302 assessment breaches detected last year involved 1149 students, almost doubling from 595 students in 2019, and well-outstripping the small increase in enrolled students. Take-home written tasks accounted for 841 offences, rising from 431 in the same period.
I’ll save you the math — 841 of 1,302 is 64.6%. About two-thirds of the integrity breaches were from take-home written tests.
Here’s a great chart from the SMH:
It makes this case well, I think. At the same time, I note that all four of these lines are moving upward. Take-home, remote assessment is clearly the center of gravity, but it’s not alone.
And there’s also this:
Plagiarism was the most common cheating offence, accounting for 796 breaches. NESA does not collect data on whether students were found cheating using ChatGPT or other generative AI.
Since NESA is the New South Wales standards authority, these 1,300 cases appear to come from that one Australian state alone. If so, we can say there were at least 1,300 “offences” on the HSC last year.
Also from the paper:
Education analysts say that despite the uptick in breaches, many cases remain undetected as schools struggle to keep pace with generative AI and the risks that it can allow students to complete work without demonstrating genuine skill or knowledge.
Many cases do indeed remain undetected. Most, I would say.
Continuing:
Catholic Schools NSW chief executive Dallas McInerney said the intersection of AI and assessment integrity was “arguably the biggest issue confronting education right now”, with universities at greatest risk due to migrating so much learning online.
A paper published last month by Catholic Schools NSW said rising use of AI meant the share of HSC take-home assessments should fall until “the AI threat to assessment integrity can be satisfactorily contained”.
Nothing I can type would add value to that.
Same with this:
Australian Tutoring Association president Mohan Dhall said malpractice as a result of using AI is probably “endemic and vastly undetected”. Teachers can feel unsupported in dealing with the issue, he said, and in discussing potential breaches with parents. “When there is an issue, it is often treated in-house.”
Although — “endemic and vastly undetected” feels spot on.
Finally, on those numbers again — the 1,302:
There were 297 schools that registered a malpractice breach for take-home tasks, and about 600 schools did not report an offence.
Seriously — LOL. Come on. That’s not credible. And I say again, it’s just magic how you can never, ever find cheating when you don’t want to find it.
Phil Dawson, whom the paper describes as a “Deakin University cheating detection expert,” is quoted:
“It’s take-home unsupervised work that’s really vulnerable [to malpractice],” he said. “I’d also be most concerned about schools reporting zero cases.”
I mean, yes. The take-home, remote, unsupervised work is really vulnerable. I’d say it’s vulnerable to the point of being useless as a measure of learning.
And I too am “most concerned” about the schools reporting no cases of misconduct. As I said, it’s just not credible. And it means that whatever the actual breach numbers are, they ain’t 1,300.
Funny, Sad, Embarrassing, and Perfectly Poetic
Those are my best efforts to describe what’s going on with a recent LinkedIn post about a publication from the Centre for Online and Distance Education at the University of London.
I don’t like writing things for which social media is the source. But I’m making the exception here because this is — chef’s kiss — perfect. And because, as I mentioned above, I am increasingly convinced that educational validity and online or remote assessment are incompatible.
So, here is the post, from Dr Mark A. Bassett, whom I do not think I know. His LinkedIn profile says he is at Charles Sturt University and is:
Associate Professor | Director, Academic Quality, Standards & Integrity | Academic Lead (Artificial Intelligence)
His post reads:
The Summary of Recommendations page in the Managing Academic Integrity in Online Assessment report from the Centre for Online and Distance Education, University of London is an AI slop disaster. Typos aside, it's clear that no one even bothered to read this before publishing.
The post includes images from the University of London publication showing at least 12 typos and nonsense words in what can’t be more than 150 words of text, in an image clearly generated by ChatGPT. Here are his images:
I mean, come on.
If you’re going to publish a 53-page document on “Managing Academic Integrity in Online Assessment,” you simply cannot do that. What a joke. And it is a perfect joke because it underscores just how seriously some online education providers take academic integrity, which is to say not at all. Despite the 53 pages.
For fun, I put the first quarter of the publication into an AI detector that I trust and, bingo — AI was detected in 38 of 53 segments with a 99.9% confidence level. I have not read the full 53 pages, though I am going to have to now. Thanks, Dr Bassett.
Anyway, I do thank him. For actually reading these reports, and for calling people out on their garbage.
You just can’t make this stuff up.