(372) Hope, As an AI Cheating Strategy
Plus, ignore the cheating, embrace AI. Plus, Chegg stock gets another downgrade.
Issue 372
Subscribe below to join 4,739 (-4) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
New York Colleges and Universities and AI Cheating
The Albany (NY) Times Union has a story (subscription required) about area schools and misconduct with AI. It’s both rage-inducing and perfect.
It’s perfect because I believe that what the article reports is really where many schools are on this topic, which is to say blind and apathetic.
It’s rage-inducing because so many schools are blind and apathetic to the unambiguous hollowing-out of their value proposition and mission. Rage-inducing because they could care, and act. Rage-inducing because, the record will forever reflect, they simply do not want to.
Breathing. The headline of the Times Union article is:
Local colleges say AI cheating isn't rampant
This is good journalism because, foremost, it’s accurate. The paper did not report that AI cheating was not a problem or that it was not happening. They reported that local colleges “say” it’s not a big deal — more or less.
But it’s also very misleading because the colleges did not say that cheating was not rampant. They said their cases of AI-related misconduct were not growing — or were barely occurring at all. That’s different.
The first paragraph of the story is:
Despite national reports of rampant cheating on schoolwork with AI, local colleges say they have reprimanded very few students.
This is good reporting again because the two things here are not the same. There are, and have been, national reports of rampant cheating. And local colleges have reprimanded very few students. If we believe the national reports — which we should, because they are true — then one (cheating) is a condition. The other (not reprimanding) is a policy decision.
Let’s take 30 seconds to be clear about this. No one in this story, or really elsewhere, is claiming that “rampant cheating” with AI is not happening. New York area colleges and universities, according to this reporting, have decided not to do anything about it.
From the story:
At Russell Sage last year, only one student faced an “academic integrity” charge for misusing AI — down from six students in 2022-2023. That’s out of more than 2,000 students at the college.
Russell Sage is a private college.
This statistic — one student facing an integrity charge for misusing AI — is disqualifying on the matter of degree integrity. I have no idea what the number should be, but one is not credible. We respect what we enforce. And vice versa. Russell Sage may as well hang out a sign reading: “Russell Sage — We Don’t Care.”
Continuing:
“We do not allow the use of AI detectors for the academic integrity process. Those detectors have been shown to be less than 100% reliable, and we do not hold students accountable without proof,” said Russell Sage Dean of Undergraduate Studies Andrea Rehn.
First, embarrassing.
Second, this probably explains why the school had one (one!) case of suspected AI-related misconduct.
Third, the standard is 100%? Nothing is 100% reliable. Nothing. I look forward to the Dean disconnecting the school’s fire alarms and smoke detectors. Or turning off the school’s air conditioning. It could break, you know? It’s not 100% reliable. So, off it goes. I’m sure she walks to work because — cars are not 100% reliable. I want to ask the Dean to sign a letter urging the Department of Homeland Security to not use bag scanners at airports because — you know.
This is not an intellectually defensible position. And I am sorry that I do not have a better word than embarrassing.
Russell Sage is not alone, however:
The University at Albany has not seen any increase in cheating cases since AI became readily available, spokeswoman Amy Geduldig said.
Once again — important work by the journalist to cite “cheating cases” and not “cheating.” And to repeat, this does not mean that cheating, even cheating with AI, is not significant or significantly elevated at the school. It means only that the school has not pursued more incidents of misconduct than it did in the past — whatever rate that was.
There’s also:
At Siena College, many professors handle AI cheating without reporting the student to formal discipline. But the school has seen “an uptick” in formal cases related to AI, provost Margaret Madden said. She declined to give specific numbers.
Teachers handling cheating, or suspected cheating, on their own is one of the reasons why the number of integrity cases is a poor proxy for the number of integrity violations.
It’s also why schools should have a mandatory reporting policy for suspected integrity breaches. Educators can, and in many cases should, address these directly, but administrators should know when it is happening. For tracking, for consistency, for a host of reasons. Accreditors should insist on mandatory reporting of integrity actions at all levels. Accreditors should also insist that this data be public. Declining to give numbers is never acceptable.
If integrity, consistency, and transparency are linked, declining to share information is — pause — unhelpful. I’m trying to be nice.
Moving on, from the article:
Some professors say they worry they won’t detect which essays — written by students they just met — are fake. Commonly used software designed to catch plagiarism sometimes mistakenly identifies students who have written their own essays.
They probably won’t detect the fake essays — not without help (see Issue 352 for one example).
And it is true that AI detection software “sometimes mistakenly identifies students who have written their own essays.” Although this is incredibly rare — exponentially less common than an airport bag screener mistakenly alerting an agent to a potentially dangerous article in your carry-on.
Even though the technology is shockingly more accurate than the human screening, some voices in education still insist that we not use it, leaving detection likelihood near zero. Which is how you get one case of AI-related misconduct. One!
The article also reports that professors:
say they are reluctantly embracing AI because they can’t fight it. At Russell Sage, about half the professors let students use ChatGPT to produce drafts or revise their work if they cite it specifically.
They can fight it. They have decided not to. Surrender is a more fitting word than embrace, in my view.
And continuing the analogy that ChatGPT is the steroids of academic work, a cheap and no-effort way to the desired outcome of glory (see Issue 278), consider those sentences rewritten as: “coaches say they are reluctantly embracing steroids because they can’t fight it. At Russell Sage, about half the athletic coaches let students use steroids in training or to improve their game-day performance, so long as they cite it specifically.”
The story quotes “UAlbany communications lecturer Lauren Bryant”:
“On the one hand, I wanted to avoid it at all costs. On the other hand, realizing, well, this technology is not going anywhere, right? How can I embrace it? How can I get ahead of it, if that’s even possible? And then how can I use it to help my students, knowing that they’re likely going to be using it?” she said.
Steroids.
And:
In her class, students read about the theory of communication. Each week, they write a journal entry applying that week’s theory to their personal life. So, Bryant asked them to analyze three fictionalized journal entries — one of which was secretly written by AI. They had to decide which was best.
Many students pick the AI essay. They argue it’s the most professional-sounding writing.
And:
The first time she ran that activity, she was shocked that so many students preferred the AI essay.
And:
She doesn’t think the assignment completely eliminates cheating. To combat cheating, she also requires students to add class details to their journal entries, including the names of their group members and quotes from their in-class discussions.
I respect her inventiveness, but none of these things will combat, or even mitigate, AI misconduct. These interventions are absurdly easy for even the most elementary AI user to circumvent.
A different UAlbany professor offered that:
Still, not having to write a structured essay is a “pedagogical loss,” he said.
Back to professor Bryant, who ends the article:
Still, she hopes that they learn to not “blindly accept” whatever AI generates, and perhaps embrace the fact that writing helps them to think more clearly.
Hope and perhaps. Sound strategy. It’s a good thing nothing is riding on the outcomes.
Opinion: College is Broken, Embrace AI
The Cincinnati Enquirer recently ran an opinion piece from a professor of written communication at Miami University, in Ohio. And it’s a tad confusing.
The headline is:
Students aren't cheating because they have AI, but because colleges are broken
Right off the bat, I can tell this is a misfire. Cheating is not because of AI. I don’t think anyone thinks that. Cheating existed before AI. But AI has made cheating easier, more accepted, and less risky. Consequently, it’s made cheating more common. It has increased cheating, not caused it.
Likewise, if students are cheating “because colleges are broken,” then colleges have always been broken. But this can’t be true either, since cheating is common in nearly every non-college setting, from professional exams to bass fishing (see Issue 156).
It’s a shame, because what the writer partly argues for — increased funding for higher education — is right and good and important.
I suppose that with more funding, some schools would suddenly get over their objections to AI detection and decide that the services are worth the cost. I hope they’d reduce class sizes, allowing teachers to know their students better. Funding could also let schools hire more full-time, tenured faculty rather than turning to contract instruction all the time. All three would reduce cheating.
I give the piece some slack on this because I have the sense that the headline and subheadline are perhaps what the editorial board wanted to say, not the writer. For example, the subhead is:
The solution to students using AI to cheat is not going back to handwritten essays and speeches. The first thing we must do is increase funding, so we can reimagine colleges and universities.
But that’s not in the article. It’s also untrue. Going back to handwritten essays and oral exams would, in fact, reduce the use of AI to cheat. And I am not sure that a “reimagining” of college — whatever that means — will do anything at all about cheating.
The piece itself starts:
In recent weeks, a plethora of news and opinion articles have warned that college students are cheating en masse by using artificial intelligence to write their papers, and that higher education is overall in peril. While the details of these arguments vary, many of the articles and the people who comment on them seem to converge on the same solution: Let's return to the Old Days of essays and declamations, when students wrote in blue books and took oral exams.
As a writing scholar, I'd like to encourage us all to take a deep breath. Alarms about the illiteracy of American youth and the end of writing as we know it are as old as writing itself, and resurface every time a new tool emerges.
Fair. Though I think we have a bait and switch here. The concern is not the “end of writing.” The concern is the end of the college degree as a valid proxy for competency. Writing, as they say, is collateral damage.
The author goes on to review frequent alarms and panics about writing and literacy. Which, again, fair. But to me, not the core issue cheating presents.
She writes:
Our current challenge is not a new one. Students are what they have always been: learners. And writing continues to be what it has always been: hard. There is no going back to a golden moment when everyone wrote eloquently and students never cheated. Thus, the solution is not to embrace blue book essays, as so many columns advise.
If the goal is better writing, I agree — blue books are not going to solve much. But if the goal is fraud prevention, blue books will do that. At least for some kinds of common cheating. She’s selling a critique related to cheating, but only addressing writing.
Continuing:
For one thing, our classes are too big. If we really want students writing by hand and giving speeches with feedback and assessment by expert faculty, the first thing we must do is fund public higher education again with tax dollars (we clearly can't raise tuition any further).
I agree about big classes being a significant impediment to integrity. And funding will help. Or at least it should.
But I note that our writer absolutely does not say that blue books and oral exams will not reduce cheating. She says they won’t improve writing and that they are impractical, given funding limits.
Further:
Unless we change those teaching and funding conditions, there are simply not enough faculty and too many students in our classrooms for blue books and declamations to be a practical solution.
I agree. But this is, once again, not an argument about cheating.
The author also posits that blue books are ill-suited for assessments in today’s specialized workforce. Sure. Not about cheating.
She continues:
The role of education at this moment is to create broadly literate students. Graduates need AI and technology literacy (among many other kinds of literacy) to understand how things work and why they work that way, and what the consequences are of inventing and adopting new tools (like AI). Providing this kind of education requires rethinking higher education altogether.
I find this argument odd. But also not about cheating.
When she does get to cheating, she writes:
Students are not cheating because of AI. When they are cheating, it is because of the many ways that education is no longer working as it should. But students using AI to cheat have perhaps hastened a reckoning that has been a long time coming for higher ed.
Widespread cheating to get good grades, as one recent article alleges, is happening, and is the logical consequence of turning college into a factory that churns out workers.
We covered this above: no one claims students are cheating because of AI. Arguing against that claim is a straw man.
And she does say, directly, that “widespread cheating” is happening.
At the same time, you cannot realistically argue that AI cheating has somehow unearthed a fatal flaw in education, that it is “no longer working as it should.” If that’s true, what do we say about documented cheating rates in college of 50%-70% before AI? If massive cheating means college is not working, it has not worked for a long, long time. And it has nothing to do with AI.
Near the end, she goes on to say that she helps faculty “explore what AI is” and “then consider how to integrate it into their courses.”
When you cobble it together, you get that returning to blue books and handwritten work is not helpful for writing — she does not address their utility for addressing misconduct. And it can’t be done anyway because of money. But we should explore AI and integrate it into teaching and learning.
I’ve heard this before. It feels like a message in need of a reason to be delivered. “Colleges and universities need more of this kind of faculty development,” she writes. Sure. All that is missing is: “this message provided by OpenAI.”
Also:
The days when school was about regurgitating to prove we memorized something are over. Information is readily available; we don't need to be able to memorize it. However, we do need to be able to assess it, think critically about it, and apply it. The education of tomorrow is about application and innovation. Those of us who study teaching, learning, and writing can chart a new path forward.
The old ways are over. Education of tomorrow is application and innovation. And don’t worry about the cheating. Illogically, the problem is actually college itself. Or something.
Let me say with respect, I disagree.
Chegg Stock Tagged as “Reduce,” While an Insider Sells
Two quick news blips about our favorite cheating provider, Chegg.
This article says that a recent round of analyst ratings has pegged the stock as “reduce,” which means to sell at least part of your Chegg stock — to reduce your holdings in Chegg. From the article:
Chegg, Inc. has been assigned an average recommendation of “Reduce” from the seven brokerages that are covering the company, MarketBeat Ratings reports. Three analysts have rated the stock with a sell recommendation and four have issued a hold recommendation on the company.
A “sell” rating is worse than a “reduce” rating. Sell means dump it. At a minimum, market observers are advising not to buy more. Yet, some people are still buying, for reasons that are, at best, odd. From the story:
Vanguard Group Inc. grew its holdings in shares of Chegg by 1.9% in the fourth quarter. Vanguard Group Inc. now owns 9,765,268 shares of the technology company’s stock valued at $15,722,000 after acquiring an additional 180,258 shares in the last quarter. Acadian Asset Management LLC grew its holdings in shares of Chegg by 11.2% in the first quarter. Acadian Asset Management LLC now owns 4,175,681 shares of the technology company’s stock valued at $2,664,000 after acquiring an additional 421,796 shares in the last quarter. AQR Capital Management LLC grew its holdings in shares of Chegg by 95.9% in the first quarter. AQR Capital Management LLC now owns 3,011,879 shares of the technology company’s stock valued at $1,925,000 after acquiring an additional 1,474,640 shares in the last quarter.
And so on. There were others.
Millions bet on an illicit cheating service while it circles the drain. Smart.
Although, maybe they know something we don’t — a question lawyers and regulators would have to ask. Maybe Chegg is about to be gobbled up. If courts rule that the legal ownership rights of data used to train AI do not matter, that using material to train AI is “fair use,” then Chegg has value again.
We will see.
The second article reports that an insider, a board member at Chegg, has sold a block of shares:
On June 6, 2025, Renee Budig, a Director at Chegg Inc sold 27,973 shares of the company. Following this transaction, the insider now owns 85,742 shares of Chegg Inc.
If a buyer were in the wings, selling now would be odd. So, no idea.
As for Vanguard, it is probably just a robo-buyer in this case, automatically buying and selling Chegg shares to balance out funds that track some index.