345: 88% of Students Say They Use Generative AI on Academic Work
Plus, AI may have already killed online, asynchronous classes. Plus, Harvard has a job. Plus, class notes.
Issue 345
Subscribe below to join 4,357 (+11) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 47 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month or $80 a year, and corporate or institutional subscriptions are $240 a year. Thank you!
Survey: 88% of Students Say They Use Generative AI on Academic Assignments
I’m starting this article with an apology — if you’re going to the ICAI conference this week, you’re going to hear about this again.
Last week, HEPI, the Higher Education Policy Institute, based in the UK, released its 2025 survey of student use of AI. It’s an update to their 2024 survey. The sample size is more than 1,000 students, and the survey was taken in December 2024.
It’s both jaw-dropping and rage-inducing.
Foremost, use of generative AI on academic assignments by college students, at least in the UK, is essentially universal:
we find that the student use of AI has surged in the last year, with almost all students (92%) now using AI in some form, up from 66% in 2024, and some 88% having used GenAI for assessments, up from 53% in 2024. The main uses of GenAI are explaining concepts, summarising articles and suggesting research ideas, but a significant number of students – 18% – have included AI-generated text directly in their work.
Surged. Ninety-two percent use AI and 88% say they use AI “for assessments.” One in six tell the survey-takers that they “have included AI-generated text directly in their work.”
Knowing, as we do, that self-reported surveys undercount the rate of cheating by as much as 2.5x or 3x (see Issue 276), the 88% is likely closer to 100%, and the 18% may be more than half.
With this data, the foreword to the survey completely spins off the planet as Janice Kay, Director of something called Higher Futures, writes:
It is a positive sign overall: many students have learned more about using tools effectively and ethically and there is little evidence here that AI tools are being misused to cheat and play the system.
Where is the data that students “learned more about using tools effectively and ethically?” Even taking the 18% who admit to using AI text directly in assignments at face value — that’s still nearly one in five students. “Little evidence,” she says.
On the very next page, the survey report links to a news article and says succinctly:
Meanwhile, AI-related academic misconduct cases have soared.
Ah, yeah Janice. F-ing soared.
The report continues:
The largest increase has been in the use of AI to summarise articles and this is now the second most popular use of GenAI, up from a third in 2024. One-quarter of students use AI-generated text to help them draft assessments and nearly a fifth of students (18%) use AI-generated and edited text in their assessments. Overall, some 88% of students have used generative AI to help in some way with their assessments.
Nearly half (48%) of students said they used AI to summarize articles, i.e., to not do the reading. A quarter said they used AI to “help them draft assessments.” Although HEPI’s categories here leave much to be desired:
Use in assessment after editing
Use in assessment after editing with AI
Use in assessment without editing
I’m not sure what those mean. I’m sure students don’t either.
By the way, 39% of students said they used AI for:
Enhancing and editing your writing e.g. Grammarly
Four in ten.
I’m not saying that using Grammarly and similar tools to “enhance” your writing is unethical. But when writing is intended to assess mastery or competency, I do know that getting Grammarly to “enhance” your work is not necessarily ethical. If it’s not your work, but you say it is, and you accept the grade for it, I think many people would consider that unethical.
I’m one of those people.
Anyway — 39% of students admit to doing that.
The survey also asks for reasons students may be encouraged to use AI “for your studies.” The top two answers are:
To save me time (51%)
To improve the quality of my work (50%)
Why not? Steroids can improve your performance without all that wasted time training.
The number seven answer for why students may want to use AI in their studies:
I learn more if I use AI than if I don’t (20%).
The survey also asks why students may be less likely to use AI in their studies. The top answer:
Being accused of cheating by my institution (53%)
I find it very insightful that the survey does not ask about being caught cheating, only about being accused of cheating. Like it’s inconceivable that someone may consider using AI on an assessment to be actual cheating. The accusation is the thing. I did not do anything wrong, just summarizing, structuring, and enhancing. But I am worried about being accused of cheating.
I will also quickly note that the top reason students may not use generative AI is the potential for discovery and consequence. Imagine that.
I mean, whatever. You can dig through the data, should you be so inclined.
But here is the headline, which I irresponsibly buried — HEPI is very biased in favor of AI, and significantly disinclined to take academic misconduct seriously. The two are related. Despite this, HEPI found that 88% of students admit to using generative AI on their assessments, where 88% is almost certainly 100%.
HEPI, remember, partners with Chegg (see Issue 335). Then there’s the nonsense up top from Janice. There is also this, from page eight of this new survey report:
We pointed out in the 2024 Survey that the tools available to detect AI use in assessments are unreliable and frequently generate false positives
Also on page ten:
AI detectors have struggled with ‘false positives’, labelling text as AI-generated when it was in fact written by a human
Usually, I say that this kind of thing is untrue. But here, I’ll go further. This is intentional misinformation. HEPI surely knows this is not true. Frequently? Come on.
We also have this gem:
At least for the time being, institutions appear to find concerns about cheating more pressing than the need to support students to develop AI skills.
Let me just ask, for the sake of argument: if schools put cheating concerns behind teaching any skill, how could we ever know whether students actually learned it? Also note, “concerns about cheating.” Not cheating, concerns about it. As in, those are just your feelings — get past them. Let’s talk about AI skills!
Boy do I hate that.
But I’m not done. There’s also absolute clown-level buffoonery:
Institutions have maintained a good record on protecting the integrity of assessments, with 80% agreeing their institution has a clear AI policy and 76% saying their institution would spot the use of AI in assessed work – both increases from the 2024 Survey
A clear policy equals a good record on integrity? Oh, come on. And — to repeat — 88% of students told you that they are using generative AI on their assignments. How is that “a good record on protecting integrity?”
Moreover, the 76% figure is hard to unpack, since it implies that the ability to detect AI work is related to integrity. It is, but HEPI clearly thinks that is a flawed approach, suggesting instead that:
institutions should not adopt a mainly punitive approach; instead, their AI policies should reflect that AI use by students is inevitable and often beneficial.
HEPI suggests again that:
Institutions should adopt a nuanced policy which reflects the fact that student use of AI is inevitable and often beneficial.
Inevitable and often beneficial decoded: stop fighting. Stop having your concerns.
Overall, HEPI is whatever is two steps past divorced from reality. By design, I think. But whatever.
Eighty-eight percent of UK students admit to using generative AI on their assessments. One quarter say they use AI to help draft assessments.
Other than that, how was the play, Mrs. Lincoln?
Some Troubling Foresight About Online, Asynchronous Classes and Misconduct
David Wiley, the Chief Academic Officer of Lumen Learning, has a blog post that is worth a moment of your attention.
It’s about the not-too-far-off realities of technology cheating in online, asynchronous classes.
Honestly, I think this reality is already here. But even if I am wrong, I’m not too wrong. It’s not if, but when, as Wiley writes:
All the technology necessary for an “AI student agent” to autonomously complete a fully asynchronous online course already exists today. I’m not talking about an “unsophisticated” kind of cheating where a student uses ChatGPT to write their history essay. I’m talking about an LLM opening the student’s web browser, logging into Canvas, navigating through the course, checking the course calendar, reading, replying to, and making posts in discussion forums, completing and submitting written assignments, taking quizzes, and doing literally everything fully autonomously – without any intervention from the learner whatsoever.
Putting these pieces together to build an AI student agent will require some technical sophistication. But in terms of overall difficulty, it feels like the kind of thing that could be done by a team of two during a weekend AI Hackathon.
The italicizing is original to the article.
Because this is true, schools and course providers are going to have to get way, way more serious about securing these courses and the programs and degrees they underpin. Human intervention is going to be necessary. The problem is that human intervention costs money, and these kinds of courses were invented, and are offered, to scale — where scale is shorthand for being profitable.
However these chips fall, it’s important to start digesting that the era of unsupervised, online, autopilot, self-paced learning may be drawing down. And it is possible that I am missing it, but I do not sense that anyone is preparing for that reality.
European Conference on Ethics and Integrity in Academia, in June, in Sweden
Details have been released for the next ECEIA conference:
We are delighted to invite you to the European Conference on Ethics and Integrity in Academia ECEIA 2025, co-organised by the Centre of Research Ethics & Bioethics at Uppsala University in Sweden as the 11th annual conference of the European Network for Academic Integrity (ENAI). This esteemed event will take place from 16th to 19th June 2025 in Uppsala, Sweden.
Sorry, these are details for the esteemed ECEIA conference. My bad.
Although, in fairness, Sweden in mid-June does seem like a pretty amazing deal.
Harvard Seeks Associate Dean for Academic Integrity and Student Conduct
Harvard has posted a job opening for an Associate Dean of Academic Integrity and Student Conduct.
From the posting:
The successful candidate will be a senior resource at Harvard for those seeking guidance on academic integrity and disciplinary issues. The Associate Dean will demonstrate and apply knowledge of research on academic integrity and student development, and be able to manage a system for adjudicating complex and sensitive community standard cases. The successful candidate will work with administrators and faculty to shape College policy on academic integrity and student discipline and maintain a community committed to academic integrity and honor.
Fine, I’ll do it. Stop asking. It’s getting embarrassing.
Anyway, it seems like a dream job, to be honest.
Also, I’m not sure if this is a new position, but good for Harvard for making academic integrity someone’s job. I understand that, for many schools, a position such as this may not be a priority when under real financial pressures. But for those schools that can afford it, there’s no reason not to do this. Your budgets are your priorities.
Class Notes
There won’t be an issue of The Cheat Sheet this Thursday, as I’ll be in Chicago for the annual International Center for Academic Integrity fest.
Related to the Harvard job above, if you have an integrity-related job posting, send it over. We’re happy to share it.
Also worth your time, from The Walrus: “I Used to Teach Students. Now I Catch ChatGPT Cheats.” https://thewalrus.ca/i-used-to-teach-students-now-i-catch-chatgpt-cheats/