334: The Guardian's Clown Show
Plus, meet Max of Pangram. Plus, more downgrades on Chegg stock.
Issue 334
Subscribe below to join 4,276 (+4) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 45 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month or $80 a year, and corporate or institutional subscriptions are $240 for a year. Thank you!
The Guardian Baited With Actual Reporting, Switched to Garbage
I admit, this article in The Guardian, from December, fooled me.
On top is the promise of something new, an honest article “on a broken system,” as they put it. But the piece itself is one we’ve read a thousand times, the same baseless junk about the same stuff, full of sound and fury — well, you know the rest.
To see why it took me in, here are the headline and subhead:
‘I received a first but it felt tainted and undeserved’: inside the university AI cheating crisis
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system
Undeserved and tainted? Yes! More than half. Yes. Casting a shadow? Yes again. Hardworking learners are caught? They are. A broken system? Preach.
But the article itself is nothing but nonsense about how unnamed students are being accused of using AI to cheat, even though they “hadn’t cheated.” The evidence for the not cheating part — as it always is — is only that the student says they didn’t cheat. And we believe them because, if we did not, there would be no story.
It’s tedious and lazy.
It starts with “Albert,” a student, who received an e-mail “out of the blue” and was “stunned” to be “accused” of cheating with AI and threatened with “an automatic fail.”
But, The Guardian tells us:
The problem was, he hadn’t cheated.
Going forward, all teachers who suspect academic misconduct should just ring up The Guardian. They know.
How many times are we going to have to read how a student — quoting The Guardian — “was distraught” and had “worked hard” on their assignment and “certainly didn’t use AI to write it”? We have no one from the school, or a teacher, in this anecdote. It’s just the student telling a one-sided story about how “having to defend himself cut deep.” And how:
he was grilled for half an hour about his assignment. It had been months since he’d submitted the essay and he felt conscious he couldn’t answer the questions as confidently as he’d like
And how the student:
didn’t feel able to defend himself until the end, by which point he was on the verge of tears
Sure.
I’m not saying a misconduct hearing cannot be stressful. I am saying that when you only get the story from the side of a highly interested party, it’s not a factual accounting, it’s storytelling. Not journalism.
Remember that tease up top about how half of students are using generative AI? Well, in the story, The Guardian walks that back:
More than half of students now use generative AI to help with their assessments, according to a survey by the Higher Education Policy Institute, and about 5% of students admit using it to cheat.
First — “admit” is a key word there. Five percent admitting it is different from 5% doing it. News flash: students do not admit to cheating. But there are also many, many sources that put the “cheat” percentage many times higher than 5%. No one, and I mean no one, believes that just 5% of students are cheating with AI.
Anyway, after running through the quick history of generative AI and the arms race to catch it and so on, The Guardian writes — and I kid you not, these are consecutive sentences:
But is ChatGPT really the problem universities need to grapple with? Or is it something deeper?
Albert is not the only student to find himself wrongly accused of using AI.
Got it. The problem is not ChatGPT, it’s the students who are “wrongly accused” of cheating. Although there is no evidence at all that the accusations are wrong.
The paper goes on, surprising no one, to quote the discredited Stanford University study about non-native speakers and flags for AI (see Issue 216). The Guardian continues with:
the case of a student with autism spectrum disorder whose work had been falsely flagged by a detection tool as being written by AI.
Again, the only evidence for being “falsely flagged” is that she said she was innocent. But based on this one case, The Guardian writes:
Neurodivergent students, as well as those who write using simpler language and syntax, appear to be disproportionately affected by these systems.
An entire group of students “appear to be disproportionately affected” because of this one case with no actual evidence one way or another. Frankly, that’s just made up. The Guardian literally made that up.
This ridiculous effort goes on:
Dr Mike Perkins, a generative AI researcher at British University Vietnam, believes there are “significant limitations” to AI detection software. “All the research says time and time again that these tools are unreliable,” he told me. “And they are very easily tricked.” His own investigation found that AI detectors could detect AI text with an accuracy of 39.5%. Following simple evasion techniques – such as minor manipulation to the text – the accuracy dropped to just 22.1%.
The referenced research here is the jaw-droppingly absurd work which I named “The Worst Piece of Academic Integrity Research of 2024” (see Issue NY24/25 or the original one, Issue 288). And no, “all the research” absolutely does not say that “these tools are unreliable.” In fact, no research anywhere shows anything other than that the good AI detectors are quite good at detecting AI.
In fact, if you’re really up for some fun, jump over to Issue 253 to see information about the other paper from Dr. Perkins. In this one, he and other authors test whether AI detectors and human instructors can spot AI after the team tries to “trick” the detectors by adding spelling errors and similar tactics. Perkins found:
Although Turnitin correctly identified 91% of the papers as containing AI-generated content, faculty members formally reported only 54.5% of the papers as potential cases of academic misconduct.
What was that about unreliable? His own research shows the world’s most popular academic AI detector was 91% accurate, even with evasion tactics. The people, even when they were alerted to the presence of AI papers, were only 55% right.
Whatever you say.
Then, The Guardian goes into the case of “Emma,” another student. Cutting it short, Emma used ChatGPT to complete an assignment and turned it in for credit. Turnitin caught the AI-generated text — even though they are so unreliable. Caught, Emma confessed.
But The Guardian wants to be sure we understand that Emma is the victim anyway, as they describe her as “a single parent” who was “struggling.” Continuing:
Studies, childcare, household chores… she was also squeezing in time to apply for part-time jobs to keep herself financially afloat. Amid all this, with deadlines stacking up, she’d been slowly lured in by the siren call of ChatGPT.
And that Emma only turned to academic fraud:
when a bout of sickness led her to fall behind on her studies, and her mental capacity had run dry
To her credit, Emma says:
I knew what I was doing was wrong, but that feeling was completely overpowered by exhaustion
Well, at least we get that. Anyway, about Emma, The Guardian says:
Her mitigating circumstances seemed to be taken into account and, though it surprised her – particularly since she had admitted to using ChatGPT – the panel decided that the specific claim of plagiarism could not be substantiated.
So, nothing. She confessed. And nothing.
This may be the same student from Issue 320, who admitted to using AI on an academic assignment and was not sanctioned in any way. Of that student, the BBC reported: “In her desperation - and also suffering with Covid while facing the deadlines - she said she turned to artificial intelligence (AI) to help her write one of the essays.”
So, here we have an example — maybe the same example — of a student who did use AI, did cheat, was caught, and nothing happened. Conveniently, The Guardian does not say what happened to the first student, Albert. Or to the student with autism. Both said they were accused of cheating with AI but did not cheat. If a student who admits to cheating with AI gets nothing, it’s hard to imagine either of the others was meaningfully sanctioned.
Wait, my mistake. The Guardian says of Emma, the student who admitted to trying to cheat:
The whole experience shook her
A student tried to cheat, was caught, and the experience shook her. Frankly, I hope it did. A genuine crisis The Guardian has unearthed.
Near the end — at last — The Guardian writes:
Cheating or not, an atmosphere of suspicion has cast a shadow over campuses.
Cheating. It’s definitely the cheating.
But The Guardian cannot let go and tells the story of “one student” who:
had been pulled into a misconduct hearing – despite having a low score on Turnitin’s AI detection tool – after a tutor was convinced the student had used ChatGPT, because some of his points had been structured in a list, which the chatbot has a tendency to do.
Oh, I see. When Turnitin spits out a low score, we believe it. Then, it’s accurate. Got it.
Anyway, the piece is a joke, bow to stern. And again, we see that the paper spoke with not one single expert on academic integrity.
The only redeeming blip in the story is this, from one unnamed professor who:
conveyed frustration that her university didn’t seem to be taking academic misconduct seriously any more; she had received a “whispered warning” that she was no longer to refer cases where AI was suspected to the central disciplinary board
To me, that’s a story. Teachers are incorrectly being told that AI detection is unreliable, advised not to use it, and advised not to report students who may be using AI to commit fraud. And even when they do report them, nothing happens.
To which, my mistake again. The Guardian does loop us in on what happened with our friend Albert, the student from the start:
Albert had to wait nervously for two months before he found out, thankfully, that he’d passed the module.
So, nothing happened.
Great story you got there, Guardian. And it would be one thing if you just got the story wrong, which you did. But in the process you made it easier to get away with academic misconduct. Good work.
Introduction to Max, at Pangram
Space at The Cheat Sheet is always open to those with ideas, opinions, or solutions in the academic integrity space. There is no charge, as I consider sharing information to be our mission. If you have something to say or share, please contact me.
With that, the below is from Max, of Pangram, a new AI detection technology. It is unedited.
I had a quick demo, and I am intrigued by the company’s claim that it has a significantly lower rate of false positives — inaccurately flagging human text as being created by AI.
Max’s text:
Hi, I'm Max, founder of Pangram Labs. At Pangram, we're building a new generation of AI detection tools - tools that are significantly more accurate than existing methods.
We're a mission-driven team of former Stanford researchers building tools to preserve integrity and combat student cheating. Third party studies have shown Pangram to be the most accurate AI detection tool. Pangram has documented lower false positive rates than other detection tools on the market: around 1 in 10,000 human-written essays are incorrectly flagged as AI, a 50x improvement over Turnitin's self-reported 0.5% false positive rate.
Pangram is currently used by educators across 76 schools and universities.
We work with several different enterprise customers as well:
Quora, to filter out millions of AI-generated posts in real time.
The Transparency Company, to identify AI-generated fake reviews.
NewsGuard, to identify AI-generated news sites spreading misinformation.
Pangram uses deep learning techniques to reduce false positive rate and remove bias against language learners. Our AI detection models have higher accuracy in detecting outputs from highly capable models like OpenAI's GPT-4o, and o1, as well as a significantly lower error rate on student writing.
Educators can use Pangram through our dashboard (pangram.com), Chrome extension, Google Classroom or Canvas integrations.
I believe the best way to evaluate a technology is to just try it, which is why we're running a series of demos for subscribers to The Cheat Sheet. We'll include a free subscription to Pangram for those who join. I'd recommend trying Pangram alongside whatever AI detection tool you currently use, so you can see the difference in accuracy yourself.
Please sign up for a demo if you're interested, and feel free to reach out to me at max@pangram.com if you have any questions!
Morgan Stanley Downgrades Chegg Stock
This morning, influential investment firm Morgan Stanley downgraded the stock of cheating company Chegg.
It made me chuckle because I don’t know how you downgrade dead. Personally, I pictured some finance bro standing over the corpse of Chegg, giving it a little nudge with the foot and saying, “yep, not looking great.”
According to the reporting, Morgan Stanley now thinks Chegg stock ought to be worth $1.25 a share, down from where it’s trading — in the $1.60 range.
Not too long ago, if you search your brain files, Chegg stock was trading at more than $110 a share as its leaders told everyone — with a straight face, mind you — that the reason the company was making so much money was that colleges were terrible and not meeting the needs of the super-special learners of today. They were not making money from easy cheating in remote and online classes. Not at all. How dare you suggest that. What surprised me was not that they said it, but that people believed it.
In any case, the Morgan Stanley news is not the only anchor sinking Chegg’s stock. From the article linked above we also get:
CHGG has been the topic of a number of other reports. The Goldman Sachs Group dropped their target price on shares of Chegg from $3.75 to $1.75 and set a "neutral" rating on the stock in a report on Thursday, November 14th. Northland Securities dropped their price objective on Chegg from $4.00 to $3.00 and set a "market perform" rating on the stock in a research note on Wednesday, November 13th. Needham & Company LLC reiterated a "hold" rating on shares of Chegg in a research note on Wednesday, November 13th. Craig Hallum dropped their price target on Chegg from $3.00 to $1.50 and set a "hold" rating on the stock in a research note on Wednesday, November 13th. Finally, Piper Sandler cut their price target on Chegg from $2.00 to $1.50 and set an "underweight" rating for the company in a report on Thursday, November 14th.
Yikes. Even though it is way, way late to this, the market has finally arrived. Once again I say — hope you shorted.