Cases of Cheating With AI at Some UK Universities Are Up 15x
Plus, Chegg lays off even more staff. Plus, a great video on AI creating personal and reflective writing.
Issue 323
Subscribe below to join 4,196 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 42 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
Times Higher Ed: AI Cheating Cases “Soar” in UK
ChatGPT, please use what is publicly known about cheating in college, and cheating with generative AI specifically, to write the least surprising headline possible.
No need. Times Higher Ed (THE), which continues to be the hands-down leader in covering academic integrity, has it already:
Student AI cheating cases soar at UK universities
I tease because — come on. That’s like reading a headline that water is wet. But that is not THE’s fault. Personally, I am glad for the coverage. In fact, please, THE, keep telling us what we should know. Keep putting it in our faces. Make someone do something — anything — about it.
As it’s so predictable as to be nearly unremarkable, I don’t want to grind down every splinter of the story. Yes, cheating with AI continues to increase, largely unabated in any meaningful way as well-meaning voices shout about “embracing AI” and how it’s not about cheating anymore — whatever that means.
Anyway, the subheader of the THE story is:
Figures reveal dramatic rise in AI-related misconduct at Russell Group universities, with further questions raised by sector’s ‘patchy record-keeping’ and inconsistent approach to detection
The Russell Group, for Yankees and others who may be unfamiliar, is a group of 24 UK universities that Americans may think of as full research universities — flagships, Ivy League, football program schools.
With that, here is how THE starts their story:
Academic misconduct offences involving generative artificial intelligence (AI) have soared at many leading UK universities with some institutions recording up to a fifteenfold increase in suspected cases of cheating.
While I do hope that 15x grabs your imagination, I am not surprised. Not at all.
Though before we get into the numbers, please keep in mind two very important things — these are cases of likely misconduct that are reported to the administration or adjudication bodies. Most cases of academic misconduct are either undetected or unreported. Credible estimates are that just 1% of cheating is routed to formal processing. So, 100 reported cases of cheating does not mean 100 incidents of cheating; it’s just whatever small percentage was caught in the skimmer.
Also, it’s possible, if not outright probable, that a university with a large number of formal cheating cases has less cheating than a school with low or no case numbers. That’s because tactics to find cheating, and likely consequences for misconduct, reduce cheating. There’s no exact metric to that, though the research is quite strong. Which means, to rephrase, a school with a thousand cases of misconduct probably has less cheating per student or per course than a school with 30 cases.
More importantly, a relatively high number of cases is evidence of a school’s effort to address the problem, which is good. Praise is due. It’s the schools with absurdly low case counts that I worry about.
With that, THE reports:
New figures obtained by Times Higher Education indicate that suspected cases of students illicitly using ChatGPT and other AI-assisted technologies in assessments skyrocketed in the past academic year while the number of penalties – from written warnings and grade reductions to the refusal of credits and failure of entire modules – has also increased dramatically.
At the University of Sheffield, there were 92 cases of suspected AI-related misconduct in 2023-24, for which 79 students were issued penalties, compared with just six suspected cases and six penalties in 2022-23, the year in which ChatGPT was launched. At Queen Mary University of London, there were 89 suspected cases of AI cheating in 2023-24 – all of which led to penalties – compared with 10 suspected cases and nine penalties in the prior 12 months.
At the University of Glasgow there were 130 suspected cases of AI cheating in 2023-24 – with 78 penalties imposed so far and further investigations pending, compared with 36 suspected cases and 26 penalties in 2022-23.
I have not even an educated guess as to whether 92 or 130 cases is the right number of cases at these schools. No idea. But I do know that this, as reported by THE, is just not within the universe of comprehension:
The London School of Economics said it had recorded 20 suspected cases of AI-related misconduct in 2023-24, and did not yet have data for penalties, compared with fewer than five suspected cases in 2022-23. Meanwhile, Queen’s University Belfast said there were “zero cases of suspected misconduct involving generative AI reported by university staff in both 2022-23 and 2023-24”.
Zero? Really?
All that tells me is that Queen’s University Belfast has a major, major problem. More than likely, the London School of Economics as well.
And what’s the one thing that may be worse than no cases at all? Having no idea what is happening in your school:
Other institutions, such as the University of Southampton, said they did not record cases of suspected misconduct, and where misconduct was proven it did not identify specific cases involving AI. The universities of Birmingham and Exeter, as well as Imperial College London, took similar approaches, while the universities of Cardiff and Warwick said misconduct cases were handled at departmental or school level so it would be too onerous to collate the data centrally.
Remember what you read a bit ago about most cases of misconduct not leading to formal action? There you have it. Even when cheating is found — which is rare — many schools still deal with it kind of willy-nilly, leaving it to teachers or to departments and keeping themselves ignorant about what’s happening with their own students in their own schools.
Just imagine this idea in a war context — sending out bombers every day and not keeping any record of how many are coming back. Is bombing working? No idea. Can we adjust anything to make it more effective? No idea. Don’t ask us. Counting is hard.
Impossible to fix a problem you don’t know you have. Which I increasingly suspect is the goal.
THE interviewed Thomas Lancaster, “an academic integrity expert,” who said:
“I am concerned where universities have no records of cases at all. That does not mean there are no academic integrity breaches,” he added.
True. And, me too.
By the way, the quotation marks around Lancaster’s description as “an academic integrity expert” are by design, since he’s inexplicably and unjustifiably decided to join up with Chegg, which is probably the world’s largest industrial cheating provider (see Issue 298). Just a coincidence then that he also told THE that “defining and detecting” AI misuse was “difficult.” Sure.
THE needs to find a different interview subject or at least tell readers that he has joined Chegg in a formal, perhaps even compensated, role.
THE also includes Michael Veale, “associate professor in digital rights and regulation at UCL,” whom the paper both paraphrases and quotes:
[Veale] said it was understandable there was not a consistent approach given the difficulty in calling out AI offences.
“If everything did go centrally to be resolved, and processes were overly centralised and homogenised, you’d also probably find it’d be even harder to report academic misconduct and have it dealt with. For example, it’s very hard to find colleagues with the time to sit on panels or adjudicate on complex cases, particularly when they may need area expertise to judge appropriately,” said Dr Veale.
With respect, I disagree. Not that it’s hard to find people to sit on panels; I am sure it is.
I disagree that this is an acceptable answer. First, we’re talking about integrity and academic quality. Tell me, what’s more deserving of someone’s time? Second, the “we can’t find anyone” excuse is just nuts. Compare that to someone saying, “we can’t find anyone who’s willing to be a judge in these complex criminal cases, so we’re going to let the police hand out justice — or not — as they desire. And because we’re doing that, we’re not going to know anything about criminal activity or the administration of justice because, you know, people be busy.”
Here’s an idea — if integrity is as important as schools always say it is, pay people. Hire staff for these panels or to handle these cases. There are people with expertise in this area. People can learn it. But no, we’re going to go with the “it’s hard to find people” thing.
I’m also not sure why having a “centralised and homogenised” process would make it “even harder to report academic misconduct and have it dealt with.” That’s just confusing.
Maybe I’m wrong, but I thought there was general agreement that justice, to be just, needed to be both blind and consistent. And, ideally, informed by impartial policy and temperament. Being opposed to centralizing — or even tracking — misconduct cases works against all of those requirements. It is a bewildering position.
Finally, THE interviews:
Des Fitzgerald, professor of medical humanities and social sciences at University College Cork, who has spoken out about the growing use of generative AI by students
Fitzgerald says, in part:
“The reality is that this is a whole different scale of problem than what typical plagiarism policies or procedures were meant to deal with – imagining you’re going to solve this via a plagiarism route, rather than a whole-scale rethinking of assessment, and certainly without a total rethinking of the usual take-home essay, is misguided,” said Professor Fitzgerald.
Oh, “rethinking of assessment.” That banal and useless suggestion — got it. For every one of these I read, I get two professors telling me that they really wish people who say this would shut up.
Let me try this a nicer way. In any class with more than five students, there is no practical form of assessment that comes coated in anti-cheating resin. None. Rethink it all you want.
That may not have been nicer.
Yes, cheating is probably up. Cheating with generative AI is definitely up. Some schools are in active response, others don’t seem to care. At least they do not seem to care enough to do much about it. Same old story.
Chegg Lays Off Even More Staff as Inc Magazine Says “Prospects for Survival to Grow Increasingly Dim”
Oh, Chegg.
What’s that German word, schadenfreude? I have that.
News is out this week that Chegg, facing more financial losses, will reduce its workforce again, this time by a chunky 21% — some 441 people. In June, Chegg cut 319 people.
From Inc Magazine’s coverage:
The latest sign of struggles at Chegg, came this week when the company announced third-quarter losses of $212.6 million.
Chegg lost more than $200 million in 90 days and is axing more than one in every five employees. Fun fact: the word decimation comes from the Roman legion practice of killing one in every ten men. What Chegg announced is worse.
I hope you shorted. In February 2021 — in the fever of massive online learning — Chegg stock was selling for more than $113 a share. Yesterday, it was trading at $1.76.
Inc writes:
its prospects for survival to grow increasingly dim.
Here’s to hope.
More on the numbers, via Inc:
[the loss] was fueled by a decrease of nearly half a million subscribers, who now total 3.8 million. At its high mark in 2022, nearly 5.3 million people were paying around $20 per month for access to Chegg’s premium study services. But continued declines in its business since then have produced three straight years of declining revenues at the Santa Clara, California-based firm, according to news site SFGate, with 2024 losses to date totaling $830 million. Chegg’s 2021 market cap of over $14.5 billion slid to under $160 million earlier this week.
Again, hope you shorted.
Also, this from Inc is as close to an accurate description of what Chegg does as you will ever find in mainstream media:
AI applications now offer similar services to students at little or no charge, and with far lower overhead. Chegg paid a small army of people to research topics, write replies to questions frequently posed in classes or on tests, and respond to specific subscriber queries—all retrievable from its database. Now increasingly sophisticated automated chatbots produce the same answers cheaply and effectively in real time.
There it is — Chegg sold “replies to questions frequently posed in classes or on tests.” They sold — still sell — answers to assessment questions, including tests. But now, as Inc points out, that junk is free.
Related, give me my favorite category, unintentional admissions, for $400, Ken. From Inc:
“The speed and scale of Google’s AIO rollout, and student adoption of generative AI products, have negatively impacted our industry and our business,” [Chegg’s CEO] said.
Let me see if I understand: Google is giving away answers to just about anything now, and that’s hurting your business, which was — what again? (see Issue 279).
Inc says Chegg is now trying to be a generative AI company — giving students answers to questions without human intervention, or with limited intervention. Those answers will be cheaper and faster, of course. But, Inc reports:
even if it can, the company would likely then have to grapple with accusations that educators have leveled at both its previous services and the tech giants’ more recent AI answer applications: that those tools frequently allow many students to cheat, even plagiarize, rather than learn as they’re supposed to.
Smushy — but credit to Inc for at least mentioning the cheating. Most don’t.
Inc does not mention that Chegg is also being hounded by legal challenges over intellectual property and investor fraud (see Issue 55 or Issue 280), which I feel is important.
Bottom line, times ain’t great over at Chegg. Can’t say I am sad about it.
Important Video on AI and Reflective Writing Assignments
Benjamin Miller, a professor at the University of Sydney, has a very helpful and thoughtful 14-minute video on generative AI, and reflective and analytical writing assignments.
It’s sponsored or done in partnership with TEQSA, Australia’s regulatory authority tasked with, among other things, using Australia’s anti-cheating laws to regulate academic misconduct and the companies that sell cheating services. The United States, by stark contrast, has no federal entity to do anything whatsoever about cheating and cheating companies and no law that would allow action anyway.
Based on this video alone, this is a great service from TEQSA. If you’re an educator who assigns any kind of writing — but especially reflective or analytical writing — I suggest giving 15 minutes to the video.
In it, Miller says that some teachers are assigning reflective writing, asking students to share personal experiences or details in their writing, as a way to deter the use of generative AI. But, he says:
Some educators suggest that setting reflective writing tasks lowers the incidents of students using generative AI. The concern here is that when a teacher says the use of AI is low, what they are really saying is that use of AI I can detect is low.
And, more pressingly:
Reality is that freely available tools used with high quality prompts produce reflective writing that is indistinguishable from reflective writing created before gen AI was widely available.
Yup.
In the video, Miller discusses just one popular and available tool that can be used to produce reflective writing with personal details. That site also offers “writing camouflage” to help users avoid AI detection.
Which I say again: if AI detectors do not work — as some incorrectly claim — why is there any need for camouflage?
Anyway, Miller also shares that some educators are asking students to analyze and comment on AI-written material or images, assuming that AI cannot effectively assess or critique its own output. Miller says:
If these tasks are supposed to lower the use of generative AI, or if they’re designed with the assumption that generative AI cannot reflect on its own outputs, then teachers might be disappointed.
Miller shows several examples of AI analyzing and writing about bias or other shortcomings in material it produced.
He also says:
The quality of evaluation, like the quality of reflective writing, depends on the quality of the prompt the user can create. But there are numerous generative AI wrappers that can make up for any shortfall in a user’s ability to prompt a large language model effectively.
Again, yup.
Miller also stresses the real unfairness of catching mostly the students with weaker AI skills or fewer resources, those unable to buy AI wrappers or camouflage services. He says those with better skills or more money:
Arguably do less work and less learning and sail through without detection
True again.
Though I’d argue that the solution is not letting everyone escape notice and potential sanction; it’s getting better at catching those who are more skilled or better able to afford the tools that enable or conceal the cheating. It’s one thing to take a shortcut and not do the work of learning. It’s another level entirely to take that shortcut and wipe down your fingerprints. Neither is great, but one shows knowledge of misconduct and intent to deceive.
Miller also advises that educators downgrade the value of assignments’ finished products and instead focus on assessing the process of creation. In other words, grade the work and effort more, the final product less. Seems reasonable to me.
Near the end, Miller says that AI’s capability to mimic reflective and analytical writing is:
Not the end of reflective thinking and it’s not the end of the essay or product focused assessment, but it should be the end of thinking we can design assessments that AI cannot complete
Say it again. There is no cheat-proof assignment or, for the most part, any cheat-proof assessment. We should stop telling teachers to “rethink” or “redesign” those things. It is neither helpful nor effective.