Increased Cheating Costs US Public Colleges $196 Million in Annual Staff Costs
Plus, Ed Weak. Plus, integrity researcher Thomas Lancaster joins up with Chegg
Issue 298
Subscribe below to join 3,851 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 18 amazing people who are chipping in a few bucks via Patreon. Or joining the 38 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
Increase in Cheating Costs American Public Colleges $196 Million in Staff Time, $15.8 Million for Public U.K. Colleges
HEPI is the Higher Education Policy Institute, “The UK's only independent think tank devoted to higher education.”
In March, HEPI published a post by Dr. Fawad Khaleel, Dr. Patrick Harte, and Dr. Sarah Borthwick Saddler, all at the Business School of Edinburgh Napier University. In it, the trio aimed to estimate what dealing with academic integrity cases costs colleges and universities.
But before getting to their estimates, the piece has several newsworthy items, such as:
The recent exponential increase in academic integrity breaches due to the use of generative AI has resulted in rampant, unnoticed costs.
I like that the team does not even question whether generative AI has increased cheating. They characterize it as simply an “exponential increase.”
The team is direct and, in my view, in the right place. They report:
However, since the public launch of generative AI, breaches have increased. For instance, Abertay University experienced a 411% increase, as breaches of academic integrity jumped from 36 cases in 2020/21 to 184 cases in 2022/23. Similarly, the breaches of academic integrity within TNE and online provision of Herriot Watt University increased from 323 cases in 2020/21 to 901 cases in 2021/22. In 2022/23 out of 952 breaches that were investigated, 825 necessitated viva examination, which required attendance of a minimum of 4 staff members (academic and admin). Similar is the situation in other UK HEIs as Glasgow Caledonian University had 422 cases in 20/21, which increased to 742 in 21/22 and 711 in 22/23, while at Edinburgh Napier University the Business School alone has had 1395 cases in the last two years. Over the same two years, the University of Stirling had 1827 cases and the University of Edinburgh has recorded 1552 cases of academic dishonesty. Most Universities do not record the specific data of academic dishonesty due to the use of generative AI nor on the number of viva examinations conducted to investigate the breaches.
This increase coincides with the growing availability of AI tools: OpenAI’s GPT-2 was launched in 2019, GPT-3 in 2020, and Jasper AI in 2021. Other available tools included QuillBot, Article Forge and Scribendi, which were specifically designed either as paraphrasing tools, essay-generating tools or both. The trends were accelerated with the launch of ChatGPT in November 2022 and Google Bard, now Google Gemini, in February 2023. Though we cannot be confident that AI is fully responsible for the increase, the sharp rise just as AI tools are made available suggest it is a significant factor.
Good gravy.
Abertay University, a 411% increase
Heriot-Watt University (TNE and online provision), a 179% increase
Glasgow Caledonian University, a 68% increase.
Thirteen, fifteen, eighteen hundred cases.
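For anyone who wants to check the arithmetic, here is a quick recomputation of those increases from the case counts in the HEPI quote above. The case counts are theirs; the script is mine, a back-of-envelope sketch only.

```python
# Recomputing the percentage increases from the case counts quoted in the
# HEPI piece (their figures, my arithmetic).

cases = {
    "Abertay (20/21 -> 22/23)": (36, 184),
    "Heriot-Watt TNE/online (20/21 -> 21/22)": (323, 901),
    "Glasgow Caledonian (20/21 -> 22/23)": (422, 711),
}

for school, (before, after) in cases.items():
    pct_increase = (after - before) / before * 100
    print(f"{school}: {before} -> {after} cases, a {pct_increase:.0f}% increase")

# Output: 411%, 179%, and 68% respectively. Glasgow Caledonian's 21/22 peak of
# 742 cases would be a 76% increase over 20/21.
```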
Absorb the cold recitation of those increases and numbers. They will make the next story from Ed Week pretty funny.
In any case — yes. Cheating is up. And, it’s likely related to the easy access to generative AI tools.
I also want to circle the fact that the writing team listed QuillBot among the cheating tools, as they naturally should have. QuillBot is backed by investors and owned by cheating conglomerate Course Hero, now Learneo. The investors, no doubt, are proud.
On to their cost estimates, the publication says:
a university experiencing 1000 cases a year which must be investigated through a viva examination will cost the University 933 hrs in academic time and 1764hrs in administrative time (2697hrs in total).
That’s based on a review of cases at their school.
This, they say:
This translates to a total annual cost of £95,181.06
That’s $121,500 per 1,000 cases.
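For those following along at home, the conversion works out roughly like this. The hours and the sterling figure come from the HEPI piece; the exchange rate is my assumption, roughly where the pound sat against the dollar when the post ran.

```python
# Back-of-envelope check of the per-1,000-cases cost. Hours and the GBP total
# are from the HEPI piece; the exchange rate is an assumption (~early 2024).

academic_hours = 933          # academic staff time per 1,000 cases
admin_hours = 1_764           # administrative staff time per 1,000 cases
total_hours = academic_hours + admin_hours   # 2,697 hours

cost_gbp = 95_181.06          # HEPI's annual cost per 1,000 cases
gbp_to_usd = 1.28             # assumed exchange rate

print(f"Staff time: {total_hours:,} hours per 1,000 cases")
print(f"Cost: £{cost_gbp:,.2f} ≈ ${cost_gbp * gbp_to_usd:,.0f} per 1,000 cases")
print(f"Implied blended rate: £{cost_gbp / total_hours:.2f} per staff hour")
```

The implied blended rate of roughly £35 per staff hour seems like a plausible mix of academic and administrative time, which is some comfort that the headline figure is not wildly off.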
They continue the calculations:
If we generalise this cost over all the public universities in the UK it costs £12.4 million per year to the British economy and $196 million per year to the US economy.
It’s not clear to me whether the $196 million covers just public schools in the U.S.; in the headline, I assumed that’s what they meant. I’ve played around with the U.S. numbers and I am not sure how they got that figure. If I learn more, I will share.
In any case, my fear about these kinds of assessments is that when administrators see academic integrity cases priced in dollars, they will have even more incentive not to file them — call it the University of Texas approach. That is, to simply look away.
The better path, of course, is to minimize these costs by deterring and preventing misconduct. That’s not free. But it does work. A 2021 study (see an Issue from before I started numbering them) showed that every dollar a school invests in preventing misconduct saves $5 in legal, administrative, and reputation costs.
And though it may be obvious, I think it is important to footnote that these cost estimates are not all-inclusive. Every case of misconduct carries costs, quantifiable or not.
Ed Weak
We don’t intersect with Ed Week very often, mostly because the publication’s focus is K-12. That, and, like most education outlets, they write about academic misconduct very, very infrequently.
But a few weeks ago, Ed Week waded into those waters with a piece (subscription required) on, what else, cheating and AI use.
As you can tell, it’s not very good.
To start, Ed Week interviews not one outside expert on academic integrity.
The piece jumps off by citing the relatively recent data on AI in academic work from Turnitin showing that, in one year, six million papers showed evidence of being at least 80% composed by AI and that a jaw-dropping 22 million papers had at least 20% of their content identified as likely generated by AI (see Issue 287).
By the third line of the story, Ed Week takes a look at six million likely egregious cases of academic fraud and 22 million cases of probably significant misconduct and decides that AI use by students:
may not be as bad as educators think it is
I am no educator. But I think those numbers are an outrage, a scandal of preposterous proportion. I will add here as well that the 3% figure (the six million papers at 80% or more AI) counts only papers turned in when students knew their work was being scanned by Turnitin. Guess what that percentage is when schools aren’t checking for AI.
Of those six million papers at 80% or more, Ed Week says:
only 3 out of every 100 assignments were generated mostly by AI
If by “mostly” they mean at least 80% — sure. But this means that a paper with 75% likely AI content would not be, according to Ed Week, “generated mostly by AI.”
And, of course, only. Relax everyone, only 3% of academic work, grades and degrees are utterly fabricated. Nothing to see here. And again, that’s the 3% that are in the system, the 3% we can see.
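As an aside, the denominator behind “only 3 out of every 100” is worth making explicit. This is my arithmetic, not Ed Week’s or Turnitin’s, and it assumes the six million and 22 million figures share the same base of papers scanned.

```python
# Backing out the denominator implied by "3 out of every 100" (my arithmetic;
# assumes the 6M and 22M figures come from the same pool of scanned papers).

papers_80_plus_ai = 6_000_000       # at least 80% likely AI
papers_20_plus_ai = 22_000_000      # at least 20% likely AI

implied_total = papers_80_plus_ai / 0.03   # "3 out of every 100"
print(f"Implied papers scanned: {implied_total:,.0f}")                 # ~200,000,000
print(f"Share at 20%+ AI: {papers_20_plus_ai / implied_total:.0%}")    # ~11%
```

By that framing, roughly one paper in nine showed a meaningful share of likely AI text.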
Not 100 words into Ed Week’s coverage, it has dismissed the cheating crisis. Twice.
Using an image due to some technology hurdles, the piece continues:
First, no. Ed Week is saying that, because the rate of AI flags on Turnitin’s system has been rather consistent over the past year, it shows that cheating did not go up due to AI. That’s illogical.
The data supports the very likely possibility that cheating was, let’s say, a 7 before ChatGPT but went to an 11 after. When Turnitin activated its detection a bit later, it measured the 11. That it stayed at 11 for a year is no evidence at all about whether or not cheating increased before Turnitin started measuring.
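If the point needs illustrating, here is a toy version, using the made-up 7-and-11 scale above and nothing else.

```python
# Toy illustration of the measurement gap: the true cheating level jumps from
# 7 to 11 when generative AI arrives, but detection only begins after the jump,
# so the measured series is perfectly flat. (Numbers are illustrative only.)

true_level = [7, 7, 7, 7, 11, 11, 11, 11]   # unobserved cheating level by quarter
detection_starts = 5                         # detection switched on after the jump

measured = true_level[detection_starts:]
print("Measured series:", measured)                    # [11, 11, 11] -- flat
print("Looks unchanged?", len(set(measured)) == 1)     # True

# A flat measured series is consistent both with "no increase" and with "a big
# increase that happened before measurement began." It cannot settle the question.
```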
Come on. This is seventh grade logic and Ed Week covers education.
Moreover, that study from Stanford is dubious (see Issue 261) and quite possibly, as an outside observer noted, clear evidence of the ceiling effect. If a self-report survey is consistently showing that “60 and 70 percent of students admitted to cheating,” that’s really 100% because we know that self-report surveys reliably undercount. In other words, more people cannot be cheating with AI because everyone is already cheating.
But more importantly, like other publications before them, Ed Week just flitters on by the finding that 60 or 70% of high school students admit to cheating. I am surprised they did not add an “only.”
Then Ed Week goes full bold subhead and declares:
Experts warn against fixating on cheating and plagiarism
Again, no academic integrity expert said that.
I constantly find it hard to wrap my head around major news outlets telling people to not worry about cheating. But Ed Week is all in on the idea that, rather than cheating, it’s efforts to prevent and detect cheating that we should really worry about:
AI detection is becoming popular. We should worry. Fixating on cheating is “the wrong focus.”
Then there is the utterly indefensible assertion that there is “scant evidence that AI is fueling a wave in cheating.” That’s the tie-in from the story above. Any scan of actual data from schools would disprove that instantly. Interviewing a single academic integrity officer or integrity researcher would disprove it instantly. But Ed Week didn’t do either. They went the other way — don’t worry about cheating. Worry about efforts to stop cheating.
It’s gross. And offensive to fairness, honesty, equity, and the fabric of educational attainment.
One expert Ed Week did quote, Tara Nattrass, an expert in “innovation and strategy,” said:
Again, the real problem is the anti-cheating tools, not the cheating. And that study that Ed Week cited is an absolute joke (see Issue 216 and Issue 251). It literally — and I am not kidding — has made-up citations. No one seems to care. But they do keep citing it.
Anyway, the story is terrible. And terribly predictable. I’m embarrassed for Ed Week.
Academic Integrity Researcher Thomas Lancaster Joins Chegg Advisory Board
Cheating giant Chegg recently announced that Dr. Thomas Lancaster of Imperial College London, a well-known voice in academic integrity research, has joined the company’s “Academic Advisory Board in UK and Australia.”
It is a surprising move.
I e-mailed Lancaster to ask for a comment on his partnership with the most notable cheating provider and, so far, he has not replied. I know others in the integrity space have also reached out and, as far as I know, have not received a response either.
The first thing I thought of was the farce of Big Oil funding climate research, or, going back to the 1970s and 1980s — and beyond? — Big Tobacco funding research into nicotine addiction and cancer.
My favorite anecdote from this disaster of misinformation was the tobacco-funded study that came pretty close to saying that smoking prevented Alzheimer’s. Extensive research had led the research teams to find almost no cases of Alzheimer’s among frequent smokers. The thing was, this was true. But only because most frequent smokers died of lung or throat cancer long before they could develop Alzheimer’s.
My point is that it is really easy to compromise research when those doing the research rub elbows with companies who stand to profit from it. I have zero sense as to whether Chegg will fund Lancaster’s work or whether this advisory board position is paid or not. But that’s the point. The mere questions already compromise everything. From now on, Lancaster’s research will be tainted. Everything he writes about academic integrity will be tagged with some version of his affiliation with Chegg, with a company selling cheating.
I genuinely don’t get what Lancaster gets out of this deal, selling his objectivity and credibility to Chegg. Frankly, I hope it’s buckets of money. At least that would make sense.
On the other side, it’s easy to see what Chegg gets — the ability to claim credibility, silly as that is. Standing with otherwise respected people and organizations, whitewashing their reputation, has been Chegg’s game for a long, long time. Facing multiple legal challenges related to their cheating profits (see Issue 280), now is a good time to bring in someone who is clean, who can shield the company on this front.
Unfortunately, they found one.
Class Note:
Pressed for time, I asked ChatGPT to proofread this Issue. It did a horrible job. Even so, since I have asked that people who use generative AI disclose it, I am disclosing it here: I asked it to scan for misspellings and grammar errors. It found one spelling error, one doubled word, and one incorrect word — “comprise” instead of “compromise.”
Regarding the HEPI numbers, I'm confident they used 1,626 public degree-granting institutions, as listed here: https://www.appily.com/colleges/state#:~:text=How%20Many%20Public%20Universities%20in,operated%20by%20the%20state%20government.
$196M / 1,626 ≈ $120.5K per institution
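Here is the back-of-envelope version of that check, with the UK side included. The 1,626 count is from the link above; the rest is my reading of how their generalisation probably works, not anything HEPI states outright.

```python
# Reverse-engineering the headline figures (my back-of-envelope, not HEPI's).

us_total_usd = 196_000_000
us_public_institutions = 1_626
print(f"US: ${us_total_usd / us_public_institutions:,.0f} per institution")      # ~$120,500

uk_total_gbp = 12_400_000
cost_per_1000_cases_gbp = 95_181.06
print(f"UK: {uk_total_gbp / cost_per_1000_cases_gbp:.0f} implied institutions")  # ~130

# Both numbers are consistent with assuming roughly 1,000 investigated cases per
# institution per year -- i.e., the per-1,000-cases cost applied to every public
# university. That is my inference, not something stated in the HEPI post.
```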