Research: AI Cheating is Nearly 3x What Students Admit
Plus, a decade-old cheating scandal in the UK. Plus, a free integrity seminar.
Issue 276
Research: Students Use Generative AI to Cheat Almost 3x What They Admit
In Issue 264, I pulled together about a dozen survey results related to students using generative AI in academic work in the US, UK, and Canada and averaged them. The average was about 37%.
After Issue 264 came out, another survey did too, raising the average to 39%. I did not read too much into it, as I think it should be obvious that a high percentage of students are using AI to complete schoolwork, sometimes with the knowledge of their instructors, many times without.
With that rough number of 39% as prologue, there’s new research on survey results related to inappropriate or unauthorized use of AI in academic settings. The paper is from Hung Manh Nguyen and Daisaku Goto, at Hiroshima University. The data was collected in May of 2023 from nearly 1,400 undergraduate students in Vietnam — students from “the Graduate School of Education, Graduate School of Medicine and Pharmacy, Graduate School of Engineering, and Graduate School of Information Technology.” I am aware that the schools are “Graduate Schools” but the paper clearly says the sample was of undergraduates.
The pair of researchers constructed a survey instrument, a list experiment, “to minimize social desirability bias” — the tendency of survey subjects to be reluctant to admit to undesired or illicit behavior, which results in that behavior being undercounted and underreported. When they asked with this design, they found, from the paper summary:
Our findings reveal that students conceal AI-powered academic cheating behaviors when directly questioned, as the prevalence of cheaters observed via list experiments is almost threefold the prevalence of cheaters observed via the basic direct questioning approach
Good gravy in green Grenada.
If the baseline is anywhere near the average of those self-reported surveys — 39% — then cheating with AI is nearly universal; tripling 39% puts you well past 100%.
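If the term “list experiment” is unfamiliar, the general idea is that respondents are never asked to confess directly. A control group counts how many items on a list of innocuous behaviors apply to them; a treatment group gets the same list plus the sensitive item (here, cheating with ChatGPT) and also reports only a count. The difference in average counts between the two groups estimates the prevalence of the sensitive behavior. Here is a minimal sketch of that arithmetic in Python; the group counts below are invented for illustration and are not the paper’s data:

```python
import statistics

# Sketch of the list experiment (item count) technique.
# Respondents report only HOW MANY items apply to them, never which ones,
# so no one has to admit to the sensitive behavior directly.
# All numbers below are invented for illustration.

# Control group: counts from a list of 4 innocuous items.
control_counts = [2, 1, 3, 2, 2, 1, 3, 2]

# Treatment group: the same 4 items plus the sensitive item
# ("I have used ChatGPT to cheat on academic work").
treatment_counts = [2, 2, 3, 2, 3, 1, 3, 2]

# The estimated prevalence of the sensitive behavior is the difference
# in mean counts between the two groups.
prevalence = statistics.mean(treatment_counts) - statistics.mean(control_counts)
print(f"Estimated prevalence of cheating: {prevalence:.1%}")  # 25.0% in this made-up example
```

Because only counts are collected, a respondent in the treatment group can include the sensitive item in their tally without ever identifying themselves as a cheater, which is the whole point of the design.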
It’s also, I think, necessary to note that a 3x rate of what students are actually doing versus what they will admit in a survey is not new. In Issue 52 we looked at research in Australia finding that:
When given the incentive to be truthful, two-and-a-half times more students admitted to buying and submitting ghost-written assignments than admitted to this without the incentive.
But back to the new paper from Japan.
It’s noteworthy that the survey instruments — both the direct and indirect methods — asked about “cheating” with ChatGPT. So the questions covered just ChatGPT, and explicitly cheating: behavior a survey respondent would know was disallowed. In both direct and indirect settings, students were asked whether they had cheated with ChatGPT and whether they intended to.
Further, the research team says:
Our primary objective is to examine the magnitude of misreporting about AI-powered academic cheating behaviors among respondents.
From the paper:
Regarding the outcomes of direct questioning, only 9.6% of respondents reported that they had cheated. However, the prevalence of cheaters rose nearly threefold to 23.7% via the list experiment. The results suggest that confessing to cheating was an especially sensitive issue among students
Also from the paper:
In terms of cheating intention, no significant differences exist between the two questioning methods, as the prevalence of students reporting that they have the intention to cheat between the list experiment and the direct questioning method remains similar (21.6% and 22.5%, respectively).
I’m not sure what to make of that, to be honest. I guess it’s interesting that more than two in ten say they plan to cheat, and when you ask them indirectly whether they have, about the same percentage — 23.7% — say they have. And I guess it shows that the results from the direct questions — 9.6% — are pretty much nonsense, which speaks directly to the main research objective.
So, maybe I do know what to make of it.
The research also broke responses out by student demographics, finding:
male students are more likely to use ChatGPT to cheat than female students in terms of cheating history
And:
35.1% of the male students reported that they had cheated, which is more than triple the prevalence of their counterparts showing the same behavior. The magnitude of the difference between the two genders is approximately 25 percentage points
The study also found little difference in behavior or intent among the sampled major fields of study.
In their conclusion, the authors write:
our findings show that students conceal academic cheating behavior under direct questioning.
And:
respondents understandably conceal truthful answers when directly questioned.
And:
we found a significant magnitude of misreporting in response to AI-powered academic cheating behaviors among undergraduates. Specifically, the prevalence of cheaters observed via list experiments is almost threefold the prevalence of cheaters observed via direct questioning.
And:
Based on our findings, we suggest potential implications that safeguard academic integrity. In terms of theoretical implications, academic cheating should be measured via the indirect questioning method, as students reasonably conceal their truthful answers due to the sensitivity of cheating issues. Educational policies for promoting academic integrity are effective only if cheating behaviors are accurately examined.
Lastly, they report:
educational institutions should further consider investing in advanced monitoring systems to detect AI-powered academic cheating. Simultaneously, the implementation of adaptive assessment methods including randomization, dynamic question generation, and algorithmic modifications is necessary to mitigate the possibility of academic dishonesty facilitated by AI.
Anyway, when you read about a survey showing that x% of students are using AI on academic work, especially if it’s presented as related to misconduct, know that the real number is very probably considerably higher.
The Guardian Rekindles Decade-Old Test Debacle
The Guardian has gone full hyperbole on what was clearly a series of bad screw-ups regarding English language exams and students at UK universities a decade ago. The paper describes those accused of misconduct as:
victims of a gross miscarriage of justice
I won’t rehash it. You can read the story. But the short version is that there was a massive cheating scandal on English proficiency tests — required for admission to many English universities. ETS, the exam provider, reported that it thought 97% of exams were compromised. At the time, the government voided thousands of student visas in a broad action.
It is clear to me that those exams were highly compromised. Cheating happened. It is well known that English language tests in foreign jurisdictions tied to university admissions are widely and regularly cheated. It also seems clear that 97% is too high, and that many of the 35,000 students who were accused may have done nothing wrong.
The paper says that 3,500 students have won appeals. Many thousands were given chances to retest. About 2,200 did.
In any case, efforts by some of the accused are ongoing. They unquestionably deserve to be cleared if the facts support that. They should also be compensated if the incorrect decisions adversely impacted them. But cheating is also serious and deeply damaging. It deserves, in my estimation, equal billing, especially inasmuch as it just continues.
As such, I’ll drop a link to this news story, out today, from India. The headline and subhead are:
Seven students held for cheating in Duolingo Exam Test
The DET is a well-known online English proficiency test, in the lines of TOEFL, IELTS and TOEIC, designed for internet, rather than paper-based and is accepted by over 5,000 universities worldwide including the likes of Harvard, Stanford, MIT and Yale Universities.
It begins:
Seven students from Hyderabad were caught … impersonating and attending Duolingo Exam Test (DET) on behalf of other candidates, who were vying for admissions to international universities of USA, Ireland, and Australia
February 23 - Webinar with Dr. Tricia Bertram Gallant
The ICAI — International Center for Academic Integrity — has announced a free webinar with academic integrity expert Dr. Tricia Bertram Gallant.
If you don’t know her:
Tricia Bertram Gallant, Ph.D. is the Director of Academic Integrity Office and Triton Testing Center at the University of California San Diego (UCSD), Board Emeritus of the International Center for Academic Integrity, and former lecturer for both UCSD and the University of San Diego. Tricia has authored, co-authored, or edited numerous articles, blogs, guides, book chapters/sections, and books on academic integrity, artificial intelligence, and ethical decision-making. Tricia regularly consults with and trains faculty, staff and students at UCSD, as well as at colleges and universities around the world, on academic integrity, artificial intelligence, and ethical decision-making.
The webinar will be February 23 at noon ET. Details are in the link above.
Class Note:
I am out of town again this week, so while I will make every effort to get an Issue out on Thursday, I may not be able to. If I cannot, the next Issue will be out next week, on Tuesday.