408: CalMatters Discovers Cheating
Plus, Grammarly responds, changes. Plus, a podcast you may want to check out.
Subscribe below to join 5,011 (+1) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year, suggested. You can also support The Cheat Sheet by giving through Patreon.
CalMatters Discovers AI-Assisted Cheating
CalMatters is an independent news outlet with a focus on education in California. Few outlets have been as clueless — even counterproductive — about cheating as they have been. Their article, which I covered in Issue 374, was so bad that I called for it to be retracted. The CalMatters article I covered in Issue 192 may have been worse.
On academic fraud, the outlet has been equal parts obtuse and wrong, as in factually, demonstrably wrong.
Anyway, someone over there seems to have discovered cheating, even writing and publishing an entire article on it, headlined:
His students suddenly started getting A’s. Did a Google AI tool go too far?
Not much in the piece is new. Still, for CalMatters, acknowledging cheating, and that it may not be good, is new.
Here are the first few paragraphs:
A few months ago, a high school English teacher in Los Angeles Unified noticed something different about his students’ tests. Students who had struggled all semester were suddenly getting A’s. He suspected some were cheating, but he couldn’t figure out how.
Until a student showed him the latest version of Google Lens.
Google had recently made the visual search tool easier to use on the company’s Chrome browser. When users click on an icon hidden in the tool bar, a moveable bubble pops up. Wherever the bubble is placed, a sidebar appears with an artificial intelligence answer, description, explanation or interpretation of whatever is inside the bubble. For students, it provides an easy way to cheat on digital tests without typing in a prompt, or even leaving the page. All they have to do is click.
“I couldn’t believe it,” said teacher Dustin Stevenson. “It’s hard enough to teach in the age of AI, and now we have to navigate this?”
As the article points out, Google Lens is not new. We wrote about it in Issue 397 and Issue 398.
Also from the new story:
But some now say that AI tools, particularly Lens, have made it impossible to enforce academic integrity in the classroom — with potentially harmful long-term effects on students’ learning.
Mind you, new tools do not have anything at all to do with enforcing integrity. You either enforce your policies or you do not. The how of cheating is seldom relevant. Even so, the impact of cheating on learning is well established. If a student is cheating, they are very, very likely not learning.
The piece continues:
Then came AI, with its immense potential to enhance education — and facilitate cheating. That’s when [another teacher] decided to ditch technology altogether in his classroom and return to the basics: pencil and paper.
Look at that, CalMatters put “AI” and “facilitate cheating” in the same sentence.
The same teacher told the outlet:
“We want teenagers to think independently, voice their opinions, learn to think critically,” [he] said. “But if we give them a tool that allows them to not develop those skills, I’m not sure we’re actually helping them. Can you get by in life not knowing how to write, how to express yourself? I don’t know, but I hope not.”
Translation with opinion — no, if you allow students to cheat, you are not helping them.
The article moves on:
[This teacher] is not alone, according to research from the Center for Democracy and Technology. In a recent nationwide survey, the organization found that more than 70% of teachers say that because of AI, they have concerns about whether students’ work is actually their own. Nearly 75% of teachers say they worry students aren’t learning important skills like writing, research and reading comprehension.
I have not seen this study, but I left the link in should you want to check it out.
The piece moves on to say that California schools lack consistent rules regarding use of AI, which has never really bothered me. Not all AI use is the same. Going 50 mph on a highway is different from going 50 mph in a school zone where kids are crossing the street. When you’re an adult, rules are always contextual. I don’t know why we fixate on the idea that rules about using AI need to be consistent.
Consider:
That confusion is the crux of the problem, said Alix Gallagher, a director at Policy Analysis for California Education who has studied AI use in schools. Because there are few clear rules about AI use, students and teachers tend to have “significantly” different views about what constitutes cheating, according to a recent report by the education nonprofit Project Tomorrow.
No, confusion about rules is not the problem. This implies that if students just knew the rules, there would be no issues. There is zero evidence to support this idea. Moreover, even having “significantly” different views about cheating is probably fine too — again, context matters. Not all AI use should be viewed the same on every assignment, in every class, at every level. Inconsistent rules are what people prefer to focus on when they don’t want to focus on the bad stuff. My opinion.
One more:
In Hillary Freeman’s government class at Piedmont High School near Oakland, AI is all but forbidden. If students use AI to write a paper, they get a zero. She only allows students to use AI to summarize complex concepts, write practice questions for a self-assessment or when Freeman explicitly permits it for a specific task.
And:
Detecting students’ use of AI is another obstacle, she said. It means spending time digging through version histories of students’ work, or using AI plagiarism screeners, which are sometimes inaccurate and more likely to flag English learners.
Addressing cheating is hard, in actual labor as well as emotional labor. Which is why, I believe, so many educators just don’t do it.
I appreciate CalMatters saying AI detection is “sometimes” inaccurate, rather than taking the lazy path of saying detectors don’t work. Bad ones are inaccurate. Good ones work. “Sometimes” is fair.
And not that anyone cares, but I’m still not sold on that “English learners” thing — the study everyone quotes but no one actually read. No one aside from me, I mean.
But whatever, considering it’s CalMatters, I will take this. With joy.
Finally, I share this, from Google, about their cheating accelerator, Lens:
“Students have told us they value tools that help them learn and understand things visually, so we have been running tests offering an easier way to access Lens while browsing,” said Google spokesman Craig Ewer. “We continue to work closely with educators and partners to improve the helpfulness of our tools that support the learning process.”
Gross.
Grammarly Responds, Changes
After Plagiarism Today covered Grammarly “laundering” AI-created text in its Authorship tool and we shared it (see Issue 407), the company responded and said it’s making changes.
An e-mail from Yuki Klotz-Burwell, Communications Manager at Grammarly, received Wednesday afternoon, read in part:
Thank you for sharing the article on Authorship in Plagiarism Today in your latest newsletter. I wanted to share some more context and pass along an update that we also shared with the author, Jonathan Bailey. We’re updating Authorship so that when the Humanizer tool is used on AI-generated text, it won’t be categorized as “Typed by a Human,” but instead it will show that AI was used. Users will see this change in the product next week. We’ve also already been working on updating our Gemini attribution in Authorship, and users will see this change next week: text created by Gemini will be categorized as AI-generated.
First, good. It’s refreshing to see companies change course and stop bad behavior. I’m delighted we may have played a small role in it, though most of the credit — to the extent there is any — goes to Jonathan Bailey at Plagiarism Today.
That said, Grammarly had listed AI-produced, “humanized” text as “typed by a human” for months, maybe even a year. And I’d really like to talk to the designer or executive who thought that was fine in the first place. It says a ton to me that, before the company was called out, it seemed just fine to count text created by AI on their system as “typed by a human.”
They are not alone in this, but Grammarly has two core problems and neither one is really solvable in my view. Well, three.
Grammarly wants to be an enterprise tool, for businesses to use. They want to be paid to write business e-mails, summarize meeting notes, draft reports and quarterly reviews, tasks for which any given business may deem using AI entirely acceptable. The problem is that you can’t unleash ubiquitous writing tools and keep them away from students who may also want to outsource their work. Well, you can, but Grammarly does not want to, and they cannot square that circle.
The second problem is that Grammarly sells AI writing, and AI detection, and a tool to avoid AI detection. For these tools to work together, the company has to fudge one. If their detection-avoidance humanizer works, their AI detection must be bad. If the humanizer is detected, it does not work. It’s like a company selling an over-the-counter drug test kit and an over-the-counter formula to beat that drug test. Which product do you think the company is lying about? Both can’t work at the same time. But in Grammarly’s case it’s worse. They sell the drugs too.
Third, Grammarly can’t be honest about it.
They can’t stop pretending they’re in the education space, probably because their brand and market were built there. Peddling Grammarly as an education tool is probably profitable. I get it. But I really wish the company would stop wasting everyone’s time by saying, straight-faced, that it cares about teaching and learning. Grammarly sells AI writing. It sells a tool designed to help people avoid AI detection. No education company would do that. And up until pretty much right now, Grammarly was telling people that text cleaned up with its “humanizer” tool was “typed by a human.”
Come on.
A Podcast on Students and AI Use
Jeff Young, a skilled education journalist, has an education podcast — Learning Curve. In a recent episode, the show:
set up a table at the University of Minnesota and asked students to share how they use AI in their studies — and how they feel about the technology. Fear was the common thread — Fear of being caught, fear of not learning, and fear of AI taking away jobs or making their degrees less valuable. And in some cases the students shied away from trying AI tools even in productive ways, for fear of being accused of academic misconduct.
I confess, I have not listened to this one yet. But it is at the very top of my next-listen list. When I do, I’ll share some thoughts. I know, I know — you can’t wait. But I did not want to wait. I wanted to get this in your hands, and in your ears, quickly. There’s also a transcript at the link, in case you’d prefer to read. I usually do.
Since we’ll be listening to it together, if you do give it a listen, let me know what you think. Comments are open. So too are these pages, as always, should you want to write something up. But please, and I cannot believe I have to say this, do not send me AI-created stuff. Yes, people have.
Anyway, I’m glad the issues related to AI use and AI-assisted misconduct are getting more attention. I continue to think, more than anything else, academic integrity is the education issue of our time.

