The Best and Worst in Academic Integrity in 2024
Quote of the Year. Best Press Coverage. Worst Press Coverage. And Person of the Year.
Happy 2025, one and all.
Cheers. And welcome to another edition of our annual Best/Worst in academic integrity.
A Year Older
We started 2024 with 3,734 subscribers and 31 paid subscribers. We start 2025 with 4,263 subscribers and 44 paid subscribers, including two at the new institutional/corporate level ($250 a year). We also have 16 regular donors on Patreon. And I am thankful for every person who subscribes, gives, reads, or shares the work we do.
In 2024, we put out 67 issues. Here are the five most read:
The Shameless, Relentless March of Corporate Cheating Providers (January)
Cheating Has Become Normal (November)
On Grammarly and "Write My Essay for Me" (October)
Must Read: At Least 59% of Arizona State Students Were Cheating in an Online Computer Science Class (May)
Cases of Cheating With AI at Some UK Universities Are Up 15x (November)
Biggest Stories of 2024
Grammarly — The formerly benign grammar assistant morphed into a full AI writing service, including offering human editing and a service to pre-check your work in an AI detector. Then the company went big into defending students accused of using Grammarly to cheat (See Issue 315). Last year, Grammarly joined the ranks of Chegg, ChatGPT, and Course Hero as a cheating provider and apologist.
Collapse of Chegg — Last year, Chegg saw its CEO leave (see Issue 291), along with hundreds of layoffs (see Issue 303). It settled a fraud lawsuit for $55 million after a Judge said it had likely known about massive cheating and denied it (see Issue 324 or Issue 280 ). Its stock is now worth $1.60, down from $113.
University of Texas — After being one of the only schools in the nation to report that cases of cheating went down during the pandemic, which the school said was related to its inability to detect cheating, it decided to shut down AI detection (see Issue 282). It also decided to collaborate with Grammarly on “The Faculty Guide to Getting Started With AI” (see Issue 326).
Middlebury — In Issues 312 and 204, the school was at the center of a storm after acknowledging that its Honor Code was not working — that cheating was a massive problem. The school, like others, is moving to allow exam proctoring.
Administrators — Pressure is starting to build on administrations to enforce academic integrity as several news articles have cited professors who are frustrated that misconduct cases are so frequently dismissed. Consider this BBC article on a student who improperly used AI on an assignment, from Issue 320:
She was cleared as a panel ruled there wasn’t enough evidence against her, despite her having admitted using AI.
That cannot continue for long.
Quote of the Year
Honorable Mentions:
In Issue 322, from a professor at Middlebury:
“What’s interesting to me is how much they love the honor code and would be very sad if it was repealed,” she said. Students like the tradition and the assumption that they will behave with integrity. “But on the other hand, they also acknowledge that there is not a culture of social sanctions against cheating. One student was not embarrassed to tell me at a dinner in front of 10 other students, ‘Of course I’m going to cheat’ in some gen-ed, or what we call distribution requirements, class. ‘You know, some pain-in-the-butt thing that I don’t see as particularly relevant.’
From the same Issue, from a student at Middlebury:
“A big part of integrity,” she said, “is if the school doesn’t take it seriously, the students won’t either.”
Similar theme, from Issue 274, a high school student in California:
“I only have Canvas tests in two of my classes, and in one of them pretty much everyone Googles the answers, but — are care-free because the quizzes have nearly no impact on our grades to begin with,” they explained. “On the other hand, it’s impossible in my other class where the teacher will grant an immediate zero if we even have another tab open on the browser.”
On the same theme, another student at Middlebury, from Issue 312:
“I think that it would be respected if people saw punishment for breaking the Honor Code that was harsh,” said Zeke Hooper ’25. “If people were caught and were expelled — this would prevent a lot of cheating.”
Also good were these quotes from Benjamin Miller, a professor at the University of Sydney, in his video on generative AI and reflective and analytical writing assignments. From Issue 323:
Some educators suggest that setting reflective writing tasks lowers the incidence of students using generative AI. The concern here is that when a teacher says the use of AI is low, what they are really saying is that the use of AI I can detect is low.
And, of AI’s ability to mimic reflective writing, he said:
it should be the end of thinking we can design assessments that AI cannot complete
There’s also this, in Issue 279 from Chegg’s lawyer:
"I think what the plaintiff just argued is truly extraordinary," Kristy said. "Their argument, if I heard correctly, is that Chegg's entire business model is dishonest. There are no facts to corroborate that kind of claim."
There are at least $55 million worth of facts to substantiate that kind of claim.
There was also this, from Issue 296, quoting a Dean at the University of Texas:
AI detectors are not particularly accurate detectors of text that has been generated with AI, and so we are more likely to clog our system with false accusations than to catch people using AI inappropriately.
In addition, we released a new version of our honor code earlier this year. The aim was to focus students on ‘the intentional pursuit of learning and scholarship.’ Our aim is for students to recognize that the purpose of assignments is not to create the product, but to learn the skill.
The Nile is a river in Egypt. I offered to debate someone at Texas on this, even offered to pay my own way if they’d provide the stage.
Issue 326 has a quote I just found funny, a post on LinkedIn from Emily Nordmann:
Hear me out, you're allowed to use AI and cheat as much you like but after graduating you're only allowed to use doctors, dentists, lawyers etc who used AI as much as you whilst studying.
Heart emoji.
The Runners Up for the 2024 Academic Integrity Quote(s) of the Year were in Issue 331. Inside Higher Ed ran an op-ed from Daniel Cryer, associate professor of English at Johnson County Community College, who wrote, speaking as a hypothetical college student:
At the end of the day, after all the lectures and policies, it seems like the weight of upholding them falls almost entirely on you. It’s empowering in a way, but you can’t help wondering if this is something students should be responsible for.
And:
But responsibilities can be burdensome and must be understood in contexts beyond the classroom.
An actual teacher wrote, in public, that when it comes to not cheating, “responsibilities can be burdensome” and asked whether “lectures and policies” are something students should be responsible for.
Un-freaking-believable.
But the 2024 Academic Integrity Quote of the Year is from Issue 328, in an opinion submission by Dr. Mindy MacLeod:
Until we can be assured that students are given opportunities to fail and that their degrees prove they know something about the subjects they are supposedly specialists in, they’re as meaningless as the essays students are submitting to swindle their way through their courses.
Worst Academic Integrity Research of 2024
Dishonorable Mention is this paper, by Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi, all from the University of Maryland (see Issue 326).
It was published and shared as proof that AI detectors are unreliable. It makes that case because, after repeated attempts to paraphrase the text specifically to avoid detection, the accuracy of detectors fell.
It’s riddled with errors, including that it did not test the most common, most-used detection systems. Also, even the systems the team did test were exceptionally accurate before the texts were subjected to repeated paraphrase washing. As I wrote in Issue 326:
before the team decided to “attack” the detectors, they were 99.3% accurate at detecting watermarked AI text with a 1% false positive rate. Ninety-nine. To one. But they are unreliable.
And, also me:
I give up. Even after seeing numbers such as 99.3%, 99.8%, and 100%, all anyone wants to talk about is how AI detection does not work.
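For the arithmetically curious, here is a quick sketch of what those pre-attack numbers mean in practice. The 99.3% detection rate and 1% false positive rate are the figures quoted above; the corpus sizes (200 AI essays, 800 human essays) are my own illustration, not from the paper.

```python
# Illustrative sketch: what a 99.3% detection rate with a 1% false
# positive rate looks like on a hypothetical pile of 200 AI-written
# and 800 human-written essays. The rates are from the paper quoted
# above; the corpus sizes are invented for illustration.
def flag_counts(n_ai, n_human, detection_rate, false_positive_rate):
    true_flags = n_ai * detection_rate            # AI essays correctly flagged
    false_flags = n_human * false_positive_rate   # human essays wrongly flagged
    return round(true_flags), round(false_flags)

true_flags, false_flags = flag_counts(200, 800, 0.993, 0.01)
print(true_flags, false_flags)  # -> 199 8: 199 AI essays caught, 8 humans wrongly flagged
```

Ninety-nine to one, in other words, is a very good ratio before anyone starts paraphrase-washing the text.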
But the Worst Piece of Academic Integrity Research for 2024 was from Issue 288, another paper on how AI detectors were unreliable because they could be evaded. The listed authors are Mike Perkins, Binh H. Vu, Darius Postma, Don Hickerson, James McGaughran, and Huy Q. Khuat, of British University Vietnam, as well as Jasper Roe, of James Cook University Singapore.
Beyond several obvious errors, such as misclassified results and even a misstated number of test samples, the test is bonkers. Like others before it, this “study” tests detectors no one actually uses and averages the results, burying the outstanding success of several systems. Moreover, the test samples included things such as corporate blog posts and cover letters, things schools would almost never assign, let alone scan for AI.
Worse, the team’s efforts to evade detection included things such as adding spelling errors and reworking sentence structures. In one test, the team added more than 20 spelling errors to a text of 300 words. The output was so absurd that even the paper admitted the final products did:
not accurately represent the quality of work that students would submit in a real-world setting. Although these samples evaded detection by software tools, they are likely to evoke suspicion from human markers because of their poor quality, strange phrasing choices, and excessive errors.
In other words, when you turn AI text into garbage, the AI detection systems don’t work well. Shocking.
The Best Integrity Research of 2024
Honorable Mentions:
We start with this paper, which we covered in Issue 325, from Peter Scarfe, Kelly Watcham, Alasdair Clarke, and Etienne Roesch of the University of Reading in the U.K.
The team found that teachers missed 94% of assignments written by AI:
We report a rigorous, blind study in which we injected 100% AI written submissions into the examinations system in five undergraduate modules, across all years of study, for a BSc degree in Psychology at a reputable UK university. We found that 94% of our AI submissions were undetected.
There’s also the literature review in this paper by Alicia McIntire, Isaac Calvert, and Jessica Ashcraft from Brigham Young University. It is a must-read (see Issue 281). The rest of the paper is not good, as it suggests appealing to students rationally: that if they are cheating, they are not learning, and are wasting their time and money. Spoiler alert: they do not care. But the lit review is worth a read.
A mention also for this, from Issue 287, a research paper by Jia G. Liang, of Kansas State University; George R. Watson, of Marshall University; and James Sottile and Bonni A. Behrend, of Missouri State University.
The authors are noteworthy here because Watson and Sottile are the authors of a 2010 paper that is frequently misquoted to support the idea that cheating is not more common in online courses (see Issue 23). That is not what the duo found in 2010, and it’s not what they found this time either. From this paper:
The survey responses revealed that, for almost every behavior listed, students in online courses scored higher for academic dishonesty, with 42.3% admitting that they have cheated on an assignment, quiz, or test, which is an almost 50% higher score than that of live courses (28.3%).
Almost 50% higher.
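If you want to check that arithmetic yourself, the relative increase from the live-course rate (28.3%) to the online-course rate (42.3%) works out as follows:

```python
# Quick check of the "almost 50% higher" claim: the relative increase
# from 28.3% (live courses) to 42.3% (online courses), both figures
# taken from the paper quoted above.
def relative_increase(baseline, value):
    return (value - baseline) / baseline

print(round(relative_increase(28.3, 42.3) * 100, 1))  # -> 49.5
```

A 49.5% relative increase: "almost 50% higher" is, if anything, an understatement-free description.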
Good too is the work cited in Issue 305. It’s actually coverage of research, not the paper itself. It’s still on my list to get to. But the lead author is quoted:
Dr. David Playfoot explained, "The influence of the five big personality traits is typically significant in behavioral studies. However, much to our surprise, participants' level of apathy towards their degree program overrode all of them. Our study revealed that students scoring higher on our degree apathy scale, indicating a lack of interest or engagement with their degree program, were more inclined to express a readiness to use AI tools for assignments.
"It doesn't matter if someone is generally conscientious, if they're disengaged from their degree program, they're still more likely to use AI tools for their assignments."
And
The study also explored the impact of risk and consequences on the likelihood of cheating with ChatGPT. Results showed that students were less likely to cheat when the risk of detection was high or the punishment for cheating was severe.
Big findings — if students are apathetic to the work or program, they are way more likely to cheat. But they were way less likely to cheat if risk and punishments were high. I suggest that schools can really only directly control one of those factors.
Runner up for Best Research in 2024 is this paper by Li Zhao, Yaxin Li, Junjie Peng, Jiaqi Ma, Xinchen Yang, Kang Lee, Weihao Yan, Shiqi Ke, and Liyuzhi D. Dong. Researchers Lee and Dong are with the University of Toronto, the rest are with Hangzhou Normal University, in China. The paper found:
that the basic form of unproctored exams is the least effective method to encourage academic integrity … which is commonly practiced in universities with an honor code system. The reason that no reminders are given is that students are assumed to be well aware of, and experienced with, the honor code system. The students are also assumed to appreciate the fact that they are trusted by their professors to abide by honor codes and would reciprocate their trust by acting with integrity and not cheating. Our results suggest that this assumption may be misplaced.
And:
we found that reminding students about either the university’s academic integrity policy or actual cases of academic dishonesty and their negative outcomes significantly reduces cheating
If you can, read Issue 329, in which we covered this. I recommend it.
The winner for Best Academic Integrity Research of 2024 is from Issue 276. It’s the paper from Hung Manh Nguyen and Daisaku Goto, at Hiroshima University.
The researchers studied how low rates of self-reported cheating are compared to actual incidents of academic misconduct — what students say they do versus what they actually do.
We’ve known for a long time that self-reported surveys of cheating undercount the realities. This paper found:
that students conceal AI-powered academic cheating behaviors when directly questioned, as the prevalence of cheaters observed via list experiments is almost threefold the prevalence of cheaters observed via the basic direct questioning approach
Almost 3x. So, when you see student surveys say that 40% of students admitted to using AI in violation of expectations or policies — you can do the math. Also, just to add, from the paper:
our findings show that students conceal academic cheating behavior under direct questioning.
Students not only lie on surveys, they lie when directly questioned. I think we knew that, but it’s a good reminder that just because a student says they did not cheat — the AI detector falsely accused me — it does not mean they did not cheat.
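To "do the math" on that threefold gap: the multiplier is from the paper, but the 40% survey figure and the cap-at-100% logic below are my own illustration of how the arithmetic plays out.

```python
# Illustrative sketch of the paper's "almost threefold" underreporting
# gap. The 40% self-report figure is a hypothetical survey result; a
# true prevalence cannot exceed 100%, so the estimate is capped there.
def estimated_true_rate(self_reported, multiplier=3.0):
    return min(self_reported * multiplier, 1.0)

print(estimated_true_rate(0.40))  # -> 1.0, i.e. effectively everyone
```

Which is the point: once self-reported rates pass about a third, a 3x multiplier implies the real answer is close to "everyone."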
The Worst News Coverage of Academic Integrity, 2024
For Dishonorable Mentions, we begin with this story from Inside Higher Ed, which we covered in Issue 286.
The premise is that a test proctoring service failed (it did not) and that the school uncovered a “new threat” of blackmail tied to contract cheating (it is not new).
The story includes this gem:
But for some students, the increased institutional reliance on third-party proctoring services to deliver exams remotely has also presented a new, high-tech opportunity to cheat. Numerous contract cheating services advertised online promise the ability to bypass proctor software, take control of a paying tester’s computer and complete the exam for them—presumably with an impressive score and without detection.
Test proctoring presents a new opportunity to cheat. Unbelievable.
Not to be outdone, The 74 published a story on online exams, also decrying the use of remote proctors for online assessments (see Issue 294). The story is full of factual errors, misleading statements, and biased interview subjects.
Also on the naughty list was EdWeek, with a piece on cheating and AI use (see Issue 298). Like so many other bad stories, it interviews no experts on academic integrity, dismisses cheating, and says the real fear is the effort to detect and deter cheating, not the cheating itself. It’s an embarrassment.
Sadly, EdWeek gets two awards this year. Its second is for this work of fiction, covered in Issue 316. The headline is: Black Students Are More Likely to Be Falsely Accused of Using AI to Cheat.
The problem is that it’s just not true. It is not what the report they were trying to cover said. The report in question was a survey, and it found that Black students said AI detectors were more likely to inaccurately flag their work as AI-generated. And while that is important, someone saying a thing is not the same as it being a thing. Moreover, even if it were true that Black students had work flagged more often, that is not a “false accusation” of cheating. EdWeek wrote as fact what was not even suggested. It’s outrageous.
But the winner for Worst Academic Integrity Coverage of 2024 is this article from Bloomberg, covered in Issue 318.
The headline is “AI Detectors Falsely Accuse Students of Cheating—With Big Consequences.” It’s as bad as it sounds.
Bloomberg interviewed not one expert on integrity, writes as though AI detectors themselves make allegations — as if teachers don’t exist — and provides just two student examples. Both students said they did not use AI but were flagged anyway. Bloomberg’s “Big Consequences”? One student got a warning, while the other passed the class but felt really bad about being accused. Huge.
It’s an appalling parade of misinformation designed to attract outrage, not enlighten.
Best Integrity News Coverage
A seriously honorable mention goes to a recent article by a student in the student newspaper at Western Washington University, covered in Issue 329.
It’s fabulous and way better than 95% of the work put out by “professional” journalists at “credible” outlets. It interviews actual experts and gets causation correct — people accuse, not detectors. It reports actual numbers. In Issue 329 I wrote:
I love everything about what we’re being told in this article — just as I love how we’re told. It’s really, really good work. A student at Western Washington can do it, and so I will not accept that the people and publications who are paid to cover education can’t seem to manage it.
And the winner for Best Coverage of Academic Integrity in 2024 is this really long article in The Chronicle of Higher Education, covered in Issue 322.
It’s amazing and important. Please go read it or at least jump over and read our review of it.
The story links cheating to a perceived lack of value by students, but also to lack of punishment. It actually quotes experts on integrity — three of them. Imagine that.
Five stars, showing that it is possible to put in the work and cover this topic well.
Academic Integrity Person of the Year
Personally, I’m honorably mentioning — or mentioning their honor — the great people who help proofread and edit “The Cheat Sheet.” They do so without request for public notice, and they do great work. They know who they are. And I can tell you, with complete confidence, that The Cheat Sheet would not happen without their help.
The Person of the Year in Academic Integrity is Joseph Thibault.
Joseph is a Friend of The Cheat Sheet, and founder of integrity company Cursive Technology.
But neither of those is why he’s the POY. Joseph is the POY because he’s started his own newsletter on academic integrity — This is Not Fine.
Joseph and his colleague Sam Silverman have also started an integrity action campaign aimed at getting internet platforms and publishers to take down ads for contract cheating services. He has also put together a database of essay mills and a list of contract cheating companies using college and university logos and names (see Issue 330). These take real, largely uncompensated work, for which he deserves recognition.
Not to take any spotlight away from Joseph, but I think it says something that this work comes from someone outside the academy — not a teacher, not a Dean, not a school. And it upsets me to see the work outsiders are putting in to protect academic integrity, compared to so little from those inside education, who are significantly more threatened and cheated. It’s not that no one inside education is taking action; some are. But far too few and far too timidly, in my view.
In any case, Joseph is doing great work, and it should be noted and said.
Final Notes of 2024
This cheating post on the University of Oregon website that we covered in Issue 307 — it’s still up.
I also still need some help, as mentioned in Issue 324, finding out which schools have entered a “direct-to-institution” partnership with Chegg. There are four. If anyone knows, please tell me. You can be anonymous.
That’s it. Happy New Year, everyone. Thank you again and please accept my very best wishes for your best year ever in 2025.