397: Google Temporarily Disables Cheat Button on its Chrome Browser
Plus, a university in Australia reports a 219% increase in AI misuse. Plus, the cheating is off the charts. Plus, survey: 40% of K-12 say they used AI without permission.
Issue 397
Subscribe below to join 4,896 (+19) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
Google Added a Cheat Button to Chrome. It’s Off Now, But Only Temporarily
Reporting in the Washington Post (subscription required) chronicles the craziness of Google adding a “homework help” button to its nearly ubiquitous Chrome web browser — a button that showed up in major learning management systems during quizzes and tests.
The button, according to the reporting, would scan a student’s screen, find the question, and deliver the answer. It appeared, with little notice, on September 2, the story says.
Here’s the first paragraph of the article:
A student taking an online quiz sees a button appear in their Chrome browser: “homework help.” Soon, Google’s artificial intelligence has read the question on-screen and suggests “choice B” as the answer.
That’s not help. That’s cheating.
Also from the story:
The button has been appearing automatically on the kinds of course websites used by the majority of American college students and many high-schoolers, too. Pressing it launches Google Lens, a service that reads what’s on the page and can provide an “AI Overview” answer to questions — including during tests.
The news is not only that the button existed, but that it seemingly showed up only when Google sensed someone was taking an academic assessment. And that Google has “paused” it. But there are other, related parts of the coverage worth sharing. For example:
“Google is undermining academic integrity by shoving AI in students’ faces during exams,” says Ian Linkletter, a librarian at the British Columbia Institute of Technology who first flagged the issue to me.
It’s true, this feature from Google unquestionably undermines academic integrity.
It’s also unquestionable that quoting Ian Linkletter on issues of academic integrity without mentioning his horrible history (see Issue 267) is exceptionally poor journalism. Like, inexcusable. I am further compelled to add that Linkletter is concerned about cheating now. He’s talking about undermining academic integrity. Now? Spare me.
More from the article:
Makers of widely used classroom software say they’ve been trying to get Google to help stop its AI from being used to cheat. “We don’t support or condone this tool or anything that leads to academic dishonesty,” says Melissa Loble, chief academic officer of Instructure, which makes the Canvas course software used by half of college students and a third of K-12 students in the United States and Canada.
That’s good. But it’s also worth keeping on the record that Instructure has, for the most part, been very hands-off about cheating tools in its platform, taking an open-market approach to highly questionable companies and products. Since Instructure is paid by companies for access to its platform/marketplace, there’s been no incentive to look at its partners too closely. The company’s CEO has also said some pretty clueless things about cheating and AI (see Issue 266).
The story says that a “spokesman for D2L, which makes rival school software called Brightspace” also shared concerns about the Google tool and was pleased about its pause.
Anyway, the point is that Google’s “homework help” button was showing up in the closed systems in which millions of assessments are being delivered and graded.
Several educators told me they’re confused by why the button would appear on certain course websites and assessment pages but not others. Google would say only that homework help can appear when its systems determine it might be beneficial for the site you’re visiting.
*Cough*
Continuing:
the homework help button crosses the line, educators say, because it’s directed at students and normalizes using AI in a space where it doesn’t belong. Several instructors demonstrated to me how it provides AI answers during quizzes, where students might even assume it’s an officially approved part of the course. Ignoring a tool that offers easy answers can require a significant amount of self-restraint — something stressed students often lack.
And:
“It’s wild to me that the response from Google describes this as ‘supporting the learning process,’” says Brandon Cooke, a philosophy professor at Minnesota State University at Mankato.
I agree. It is wild.
And despite the errors in journalism, I’m glad the Washington Post covered this and, it seems, may have played a role in getting Google to call a timeout on this crazy, unethical, counterproductive idea.
One in 12 Students in a University of New South Wales Program Caught Up in Academic Misconduct Cases
A news report from the Sydney Morning Herald (subscription required) says that at the University of New South Wales, “one in 12 students cheated last year.”
This is inaccurate on two fronts, of course. One is that a case of misconduct inquiry, or even adjudication, does not always mean a student cheated. Often, sure. But not always. Second, most cheating is not caught, let alone escalated to a formal inquiry. So, if the cases at the University of New South Wales (UNSW) are one in 12, the actual cheating rates are very likely considerably higher.
From the story:
Reports of “less serious plagiarism” and what the institution called “poor scholarship” doubled in 2024 – an increase the university attributed to the rise of artificial intelligence and better detection, its latest report on cheating says.
Two more points. I like that UNSW has a category of “poor scholarship.” I think it allows for reasonable compromises and may be a good fit for cases where suspicion is ample but hard proof is out of reach.
This paragraph also presents the enforcement paradox — schools that care enough to look for academic fraud are likely to find it. Their numbers go up. The paradox is that schools where inspection and enforcement are in place probably have less cheating than schools where no one cares enough to inquire. What you get, therefore, is schools such as UNSW with “high” numbers of misconduct cases, while schools with comparatively low case numbers are likely experiencing far, far more cheating.
Moving ahead, the new NSW numbers come from a report, which says:
“While the university encourages the appropriate use of AI in learning, it has seen a rise in unauthorised use of generative AI in assessable work – a trend which is consistent across the education sector, in Australia and overseas.”
You don’t say.
And, from the news story:
Universities across the country are working out exactly how to deal with the advent of artificial intelligence, amid reports in other countries that students are using it to cheat their way through their entire degree.
You don’t say.
To the numbers:
Last year, there were 530 cases of generative AI misuse, comprising 394 cases of less serious plagiarism and 136 incidents of serious plagiarism and exam misconduct – a 219 per cent increase on the 166 cases reported in 2023.
Two hundred. And nineteen. Percent.
You don’t say.
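For the record, the arithmetic checks out (my quick math, not the Herald’s):

\[ \frac{530 - 166}{166} \approx 2.19 \quad\Longrightarrow\quad \text{a 219 per cent increase} \]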
More:
The highest concentration of substantiated cases of plagiarism and academic misconduct in 2024 was at UNSW College, which offers pathway programs for international and domestic students. There were just over 2000 students enrolled and 172 cases of plagiarism and misconduct, representing 8.3 per cent of the headcount, equating to one in every 12 students.
Despite my words earlier, these are substantiated cases. That’s more accurate data, but it still misses — I’d be willing to wager — most cheating. The 172 cases reported here are the fish that were caught, not the fish in the ocean.
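The one-in-12 framing holds up, too. Assuming a headcount of roughly 2,070 (my estimate, consistent with “just over 2000”), the figures agree:

\[ \frac{172}{2070} \approx 0.083 = 8.3\% \approx \frac{1}{12} \]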
More numbers:
Across the university, penalties handed out for cheating included a warning with a mark reduction for about 250 students and assessment fail grades for 735. Just over 500 were failed for the course, 13 were suspended from the university and 35 were expelled permanently.
Contract cheating was slightly down, 209 cases were reported in 2024, but there was an increase in anonymous tip-offs regarding students’ use of contract cheating.
Some of the “tip-offs” mentioned here came from the cheating providers turning in their clients.
Good for UNSW for investing time and money in integrity — for putting its money where its mouth is on rigor and honesty. Good for UNSW as well for sharing its numbers. That’s good stuff. We need more of it.
Fortune/AP Cover AI and Cheating Too
News from the Associated Press, shared here via Fortune Magazine, adds to the high-end coverage of academic fraud that we’ve seen recently. Keep it coming, I say.
The story is a very good, very direct take on the misuse of AI tools to perpetrate academic fraud.
Here are the first two paragraphs, which I wish I could shout from any available place of reasonable altitude:
Student use of artificial intelligence has become so prevalent, high school and college educators say, that to assign writing outside of the classroom is like asking students to cheat.
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”
The cheating is off the charts.
More:
Cuny’s students at Valencia High School in southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites.
And:
In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading.
“I used to give a writing prompt and say, ‘In two weeks, I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”
More:
Faculty [at Carnegie Mellon University] have been told a blanket ban on AI “is not a viable policy” unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen and paper tests in class, she said, and others have moved to “flipped classrooms,” where homework is done in class.
Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in “a lockdown browser” that blocks students from leaving the quiz screen.
There’s also this:
College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading “felt like a different language” until she read AI summaries of the texts.
“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?”
Lily, I have no idea if that’s cheating. It may be. But I am sure it’s not learning.
Rant warning: learning is hard. It’s work. Growth requires effort. If you ask someone or something to do work for you because you struggle with it, you’re not learning to do it. If you offload the struggle of slogging through challenging reading, you’re not learning. You’re not growing. And if you’re avoiding the work of learning in school, there’s not really much point in being in school. When I see things like this, I wish schools could award degrees that say, “bachelor of arts, except for the hard parts.”
Rant over.
There’s also this, from the story:
Schools tend to leave AI policies to teachers, which often means that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.
True. In fact, Grammarly won’t just offer to rewrite sentences; it will write the entire thing for you and let you check it for AI — you know, for insurance, just in case someone checks to see whether you used AI to do your work (see Issue 396).
Also interesting:
The University of California, Berkeley emailed all faculty new AI guidance that instructs them to “include a clear statement on their syllabus about course expectations” around AI use. The guidance offered language for three sample syllabus statements — for courses that require AI, ban AI in and out of class, or allow some AI use.
Berkeley allows teachers to ban AI use. Noted.
And:
Carnegie Mellon University has seen a huge uptick in academic responsibility violations due to AI, but often students aren’t aware they’ve done anything wrong.
Maybe. That’s why clarity in advance and review throughout are important. The story also quotes a leader at Carnegie Mellon who says that faculty:
are now more hesitant to point out violations because they don’t want to accuse students unfairly.
I believe that too.
Report: 40% of US K-12 Students Say They Used AI “Without Permission”
Discovery Education recently released a very good report on student engagement — based on a solid survey of K-12 students, their caregivers, teachers, and other school and system leaders.
Deep in it is this nugget:
new challenges around AI are emerging: 40% of students report using AI on assignments without permission, while 65% of teachers say they have caught students doing so.
The key phrase is “without permission.” Those are an insightful two words.
Keep in mind that self-reported statistics of academic misconduct are undercounts, which means that, in reality, it’s not that 40% of students are using AI without permission; at least 40% are. That’s a floor, not a ceiling. For example, previous research has pegged actual rates of misconduct at three times what students acknowledge, which would turn this 40% into 120% (see Issue 276). In other words, everyone.
Either way, another data point about cheating with AI to pass along. No big deal, just four in ten K-12 students telling us they’re using AI to cheat.
Class Notes:
The Cheat Sheet earned a new paid subscriber this week. Thank you, SB.
Sharing this comment left by a reader in Issue 395, “Truly, you are a hero. This is such excellent work.” Fact check: true.
Finally, should you ever wonder where subscription donations to The Cheat Sheet go, for today’s issue alone, I subscribed to the Sydney Morning Herald, again. And bought the article from the Washington Post. I also tried to buy an article in German, which I could not do. Apparently, I need a phone number in a German format, which I could not fake. I’ll keep trying. Anyway, some of the support goes directly to getting the news and research I share here. Thought you should know. But mostly, I am a hero. Let’s not lose sight of that.