(350) Scottish Universities Catch 1,000+ Students Cheating With AI, Up 700%
Plus, we need to talk about OpenAI again. And nursing students caught cheating in Israel.
Issue 350
Subscribe below to join 4,526 (+14) other smart people who get “The Cheat Sheet.” New issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year.
More Than 1,000 Students Caught Cheating With AI in Scotland
The Scotsman has the story (subscription required) of college students cheating with AI.
The first paragraph is pretty hard to misread:
Scottish universities caught their students using artificial intelligence (AI) to cheat in assessments in more than 1,000 academic misconduct cases last year as concerns about the technology grow.
Here are the second and third graphs:
Figures released by institutions under Freedom of Information (FOI) laws show a 700 per cent increase, from 131 in 2022/23 to 1,051 in 2023/24, although many universities have only recently started recording such cases.
Scottish Conservative education spokesman Miles Briggs, who obtained the data, said the trend was “hugely worrying”, and that it was likely to be the “tip of the iceberg”.
Some of the jump is very likely the “only recently started recording” thing. But still, 700%. “Tip of the iceberg.”
Continuing:
Abertay University introduced “unacceptable AI use” as a misconduct category in 2023 and last year dealt with 351 cases, with the complaint being upheld in 342 of them.
That’s 97%. And those AI cases represented a third of all misconduct cases at the school, the paper says.
Then:
The next highest number was at Stirling University, which also added a specific AI category to its procedures ahead of 2023/24, and has since investigated 213 incidents, issuing a penalty in 200 of them.
94%.
Robert Gordon University in Aberdeen dealt with 116 incidents in 2023/24, with sanctions imposed for all of them, including “failure and retention of all remaining re-assessment opportunities” in 63 of them.
There were 113 cases upheld at Glasgow Caledonian University last year, following 137 investigations into AI misuse offences.
100%, and 82%. Get it together, Glasgow Caledonian.
Kidding.
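If you want to check the arithmetic yourself, here is a quick sketch. The case counts come straight from the article; only the shorthand school labels are mine:

```python
# Quick check of the upheld-rate percentages quoted above,
# using the case counts reported in The Scotsman.
cases = {
    "Abertay": (342, 351),              # (upheld, investigated)
    "Stirling": (200, 213),
    "Robert Gordon": (116, 116),
    "Glasgow Caledonian": (113, 137),
}

for school, (upheld, investigated) in cases.items():
    print(f"{school}: {upheld / investigated:.0%} upheld")

# And the headline figure: 131 cases in 2022/23 to 1,051 in 2023/24.
print(f"Year-over-year increase: {(1051 - 131) / 131:.0%}")
```

Run it and you get 97%, 94%, 100%, 82%, and a 702% rise, which is where the 700% headline comes from.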
The paper has other school-based examples, but I’m not going to go over each one.
Briggs, the Conservative education spokesman, also said:
If Scottish universities are seen as vulnerable to students using AI to replace thinking or hard work of their own, it will be hugely damaging for the sector’s reputation internationally.
This is a fun one. Now, do America.
Oh, my mistake. You can’t. In America, we have no idea how much AI misuse is happening.
There’s also this:
a survey of 1,000 undergraduates at British universities found there had been an “explosive increase” in the use of genAI in the past 12 months, with 88 per cent saying they used tools such as ChatGPT for their assessments, up from 53 per cent last year.
Naturally, the piece also has people who would prefer to talk about “a rethink on assessment,” instead of addressing the problem — as if the type of assessment will limit the misuse of AI.
We Need to Talk About OpenAI, Again
OpenAI, the maker of ChatGPT, is, in my view, a cheating provider. I believe their pattern of conduct more than justifies such a designation (see Issue 308, or Issue 241, or Issue 349).
But here, I want to highlight the company’s relationship with students, and its relationship with the truth.
Core Users
I’m not sure it’s getting quite enough attention that, for all the PR hype about generative AI transforming work, ChatGPT’s core users are students.
Here is a LinkedIn post from OpenAI’s “ChatGPT for Education” account. It says:
Students are the #1 demographic using ChatGPT in the US. Learning and school-related work are their top use cases.
Note the phrase “school-related work” as one of the top uses of ChatGPT among the company’s “#1 demographic.” The company is outright telling us who their users are and what they’re doing: students, doing academic work.
Here is the graphic that OpenAI shared in that LinkedIn post:
That’s a chart of “AI tool use for 18-24 year old students.” Students. The top 17 uses are all academic, including “essay drafting” and “exam answers.” The top answer is “starting papers/projects,” a term so vague it could mean just about anything.
More on this chart to come.
It’s no wonder whatsoever that OpenAI has been so squirrely — hostile, even — about academic integrity. If using AI in academic work is forbidden, detected, and/or sanctioned, OpenAI is in some serious trouble with their core users. Again, those core users are students doing “school-related work.” That’s directly from OpenAI.
But it’s not just that LinkedIn post I want to share, revelatory as I think it is.
The Report
The post is based on a report OpenAI issued — a report so marvelously flawed and manipulative that it’s perfect.
To start, and because I cannot restrain myself, there’s the image on the cover. If you give it a quick glance, it looks normal. But spend any time on it at all, and it falls apart. The image is obviously fake, AI-generated, no doubt. There’s the text/image gibberish on the building and the mismatched shadows. The supposed students don’t have shadows at all. The street lamps are in bizarre places, and they’re on in the middle of the day.
OpenAI did not use an actual photo of an actual school; they made one up.
I stop on the cover image for a minute because it’s a bullseye-perfect symbol for the report itself: at a quick glance, it feels right. But spend more than a minute actually reading it and, oh boy. The image is a proxy for the report. The report is a proxy for OpenAI.
My opinion.
As for the report itself, it says, among a few things we’re about to explore:
More than any other use case, more than any other kind of user, college-aged young adults in the US are embracing ChatGPT, and they’re doing so to learn.
Obviously, OpenAI cannot know whether learning is taking place. There’s no evidence even offered. In fact, if students are “embracing ChatGPT” for their school work, they probably are not learning. But again we see, more than any other kind of user, college-aged young adults are using ChatGPT.
There’s also this:
Over one-third of 18- to 24-year-olds in the US use ChatGPT, and among these users, over one-quarter of their messages are about learning, tutoring, and school work, according to OpenAI user data.
Here, I call a quick timeout.
We keep reading about generative AI adoption rates, even in education, of 60%, 70%, with the implication that use will eventually, inexorably, reach 100%. So, resistance is futile.
Yet OpenAI says that among 18 to 24-year-olds, their top demo, use is just over one in three.
Something is off.
Also, we again get “learning, tutoring, and school work.” Since that use is largely unsupervised, we have to speculate about what portion of it is academic inquiry and what portion is simple answer grabbing or effort outsourcing. I have my guess.
As for that graphic up top, the report breaks it down a little further:
According to our survey of 1,200 students aged 18-24, AI tools are used for starting papers/projects (49%), summarizing long texts (48%), brainstorming creative projects (45%), exploring topics (44%), revising writing (44%).
And that sounds cool. Maybe. I’m not sure where the line is between having ChatGPT start a paper and having ChatGPT finish it. I doubt many students tell the bot to just start a paper but not finish it.
But the thing to know about these numbers, buried in the report, is that these uses are:
the way students self-label their use
Oh. In other words, “starting papers” means whatever that one student thinks it means. Ah. So, a full first draft? Maybe. A draft and two revisions? Maybe. Accordingly, an eye-roll at that 49% figure — at all the figures — is probably justified because OpenAI did not define any of this.
But wait. Also buried in the report, far away from the results of the survey, is this:
The sample included both AI users and non-users but excluded “AI rejectors” – defined as non-users with little to no interest in adopting AI within the next 12 months.
Oh. They didn’t mention that. OpenAI simply did not survey those who do not plan to use their services. Which means that, when OpenAI writes this:
Students are using ChatGPT for their education
Or this:
Our survey shows that while three in four higher ed students want AI training, only one in four universities and colleges provide it.
It is — misleading. Let’s call it misleading. Referring to “higher ed students” and “students” while only asking some students, that’s misleading. Cough.
But, in the Workplace
But if their curious sample and their evidence-free assertions about learning don’t sell you, OpenAI will be happy to tell you that AI is the essential workplace skill, and you’d better get with the program. For example:
Beneath these nationwide rates of college-aged user adoption, however, lie challenges that America needs to address in order to foster a healthy economy and ensure future economic competitiveness. As employers increasingly prefer candidates with AI skills, state-by-state differences in student AI adoption creates potential gaps in future workforce productivity and economic development. With many employers prioritizing workers with AI knowledge, states with low rates of AI adoption risk falling behind.
Oh, no. We cannot risk falling behind. It’s a challenge that America needs to address. Level up, everyone. Buy more AI.
Later, the report says:
Over 70% of business leaders say they would hire a less experienced candidate with AI skills over a more experienced one without, according to a recent survey.
Wow. That is compelling.
The problem is that the survey they link does not say that. It does not say anything like that. I left the link in so you can go check. If I missed it, tell me and I’ll correct it and apologize.
There is a survey that says that — the 70% thing — though I cannot tell who the survey population was or how the questions were asked. But that is not the one linked to by OpenAI in their report. The report from their Vice President of Education, mind you.
Either way, if students are OpenAI’s top users, and that figure is in the 30-something percent range, the argument for universal and inevitable AI use in the workplace seems a little flat. Seems you cannot have both. At least not simultaneously.
But don’t worry, if you happen not to notice any of that, OpenAI has a suggestion.
And I bet you already know what it is — use more AI, buy more AI:
Government and higher ed leaders can play a key role in driving student awareness to ChatGPT’s free products, as well as subsidizing equitable access to the latest models.
Yes — teachers need to drive awareness. And buy the expensive stuff, in the name of equity.
OpenAI wants to sell product. That is their job. They’ll tell you that students are learning, or that AI is an essential job skill. Just don’t look too long or too close.
A Dozen Students Caught Cheating Nursing Exam in Israel
Local news coverage from Israel says that a medical center caught 12 students cheating on a nursing exam and has suspended them from their study program.
Personally, I find the article passively racist, but nonetheless, the academic integrity part says:
Twelve nursing students were caught cheating during an exam at Barzilai Medical Center using wireless earpieces connected to their mobile phones, which transmitted answers in real time.
The students, all first-year participants in an academic track for professional retraining as registered nurses, were discovered at the end of the test. Following the revelation, they were suspended from their studies.
The medical center became suspicious when test scores of otherwise mediocre students increased substantially.
Also, nursing. Future or would-be nurses, cheating on exams. Fantastic.