(353) Chegg Receives Stock Delisting Warning, Unveils New and Terrible Product
Plus, an essay in Inside Higher Ed, deconstructed.
Issue 353
Subscribe below to join 4,595 (+45) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, though patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
Chegg Receives Delisting Warning from New York Stock Exchange
Stock in cheating provider Chegg has slid all the way into the four-bits range, trading below fifty cents a share this week.
The low share price triggered a warning from the New York Stock Exchange, which requires listed companies to maintain an average closing share price of at least one dollar. Technically, it’s a notice of non-compliance. And if Chegg does not get its share price back up, it can be taken off the exchange — a delisting. Chegg has six months to fix the issue.
Chegg said it:
intends to notify the NYSE timely of its intent to regain compliance with the NYSE minimum share price requirement, which may include, if necessary, effecting a reverse stock split
Recently, stock markets and regulators have been moving to limit the use of reverse stock splits to avoid delisting. We will see.
Chegg Launches A New Tool That For Sure Won’t Help Students Cheat More
Chegg, desperate to return to relevance and boost its share price, launched a new tool this week. True to form from Chegg, it will make cheating easier and reduce the odds of being caught using AI.
The tool is called Solution Scout and the name alone should tell you what Chegg wants to sell — solutions, not learning. Answers, not effort. They did not name it Learning Scout.
Basically, Chegg’s new gizmo will let students put in a question and compare the answers from multiple AI sources, plus Chegg’s own library of answers. The release says it’s:
a powerful new tool designed to help students efficiently compare solutions from multiple sources
Efficiently compare solutions. And:
the tool highlights key differences in AI-generated summaries
So, in other words, students can now ask Chegg and other AI sources if another AI is right, or accurate — all in one place. How very convenient to be able to double-check the answers AI gives you.
Chegg outright says that’s what this tool is for:
Chegg understands the evolving needs of students and has observed a critical pain point when it comes to using AI to support learning. Verifying the accuracy of solutions found online – especially from AI-generated sources – is often necessary yet time-consuming. Relying on a single source can be risky, and when students have doubts, they spend their valuable study time finding and comparing solutions across different sources.
Wow.
Don’t actually learn — just let us verify the accuracy of solutions you find online. Because, you know, nothing is worse than doing no work whatsoever and being wrong. Here comes Chegg to save the day. You can still do no work but feel better that your answer is probably right. Whew. And I mean, why would you want to go to the trouble of comparing solutions — what a waste of time.
Unbelievable.
There’s more:
Solution Scout enables easy comparison of solutions from different foundational LLM models, including ChatGPT, Google Gemini, and Claude alongside Chegg’s solutions. The tool automatically highlights discrepancies and areas of agreement, providing students with an AI-generated summary that distills key differences and points of consensus – eliminating the challenge of making apples-to-oranges comparisons between different sources. This innovation transforms what was once a fragmented, time-consuming process into a seamless learning experience, helping students gain confidence in the accuracy and reliability of the solutions they are seeking.
About this AI thing from Google that Chegg is now pulling into its own tool: it’s a reminder that Chegg just sued Google over its AI summaries (see Issue 344). The claim was that people used to come to Chegg for fast, easy answers, and now they can get that from Google’s AI.
But again — thank goodness that Chegg is “eliminating the challenge of making apples-to-oranges comparisons between different sources.” I mean there’s no way we’d want students to have to analyze and compare information. The horror. I can indeed see why Chegg thinks that’s “a critical pain point” for students.
Think I’m kidding? Here, more:
Solution Scout helps students compare solutions fast and reduce guesswork on assignments so they can spend less time verifying the information and more time learning.
Why verify information? Get your solutions fast, with less guesswork.
In the release, Nathan Schultz, President and CEO of Chegg, says that Scout will allow students:
to quickly assess, understand nuances, and arrive at a solution with ease of mind – so they can focus on learning and not scouring different AI tools for help
Quickly arrive at a solution. And I cannot be the only one who thinks that, if a student is “scouring different AI tools for help,” they may not be learning in the first place. But more on point, why bother to scour? Just “arrive at a solution with ease of mind.”
But it’s the accuracy pitch that’s the more problematic part.
As Chegg knows, as everyone knows, AI makes things up. Regularly. Therefore, one of the ways teachers were spotting AI use was through those factual inaccuracies. More often than not, students who were using AI for shortcuts did not check.
But now, Chegg has made the probability of using a wrong AI answer in your work — and therefore getting caught — lower. They will now use AI to spot “discrepancies” in AI answers. From Chegg:
helping students gain confidence in the accuracy and reliability of the solutions they are seeking.
Once again, it’s solutions the students are seeking. And now you can be confident in the accuracy of those easy, AI-generated answers.
Oh, Chegg, don’t ever change.
IHE Column on AI Detection, Trust
A few days ago, a columnist at Inside Higher Ed wrote about AI detection, claiming in the headline:
Apparently, I’m a bot.
Of course it was in Inside Higher Ed (see Issue 78 or Issue 167), which just can’t help but weaken conversations and actions aimed at preserving academic integrity.
In this example, the point is to convince readers that AI detection is unreliable, and that the core issue is not cheating, but trust.
His essay has two parts.
On Trust
The first part is about trust in academia and learning. And how AI use, but mostly AI detection, corrupts that trust.
While trust in education settings is ideal, it requires mutual investment in the process, in the work of teaching and learning. And in an era — in this era — where AI use in student work is nearing ubiquity, that investment is profoundly lacking. In fact, the primary selling point of AI is avoiding work. Which means that it’s possible and reasonable to lament the loss of trust in education. But if trust is the life preserver you’re clutching as the reason not to be active in defense of integrity, fairness, and academic achievement, you may have already drowned.
In any case, Steven Mintz writes:
I ran the opening paragraphs of one of my recent “Higher Ed Gamma” posts through an AI detector. In that post, I described an undergraduate musical theater concert—a performance witnessed only by me and about 50 parents and classmates and never formally reviewed. Yet, the detector labeled the text as 100 percent AI-generated, leaving me both shocked and bewildered.
Later, Mintz identifies the detector he used — ZeroGPT. It’s not great, routinely testing among the least reliable products on the market (see Issue 256, Issue 288, or Issue 250). In Issue 256, for example, I quoted a research paper finding that:
By contrast, the AI detector ZeroGPT identified AI-written introductions with an accuracy of only about 35–65%
In Issue 288, ZeroGPT was just 46% accurate at identifying text as either AI-generated or human-written. It did test pretty well in another setting (see Issue 264), but overall, it’s been the definition of unreliable.
I’m not sure why Mintz — or anyone — is using it.
Also, Mintz says he tested “the opening paragraphs” of his work, which is another problem, since most good AI detectors need several hundred words to develop a confident analysis. In other words, a small amount of text, fed into a substandard tool, spits out bad information. I’m not shocked.
More on this later, as this detection thing comprises the second stanza of the essay.
Moving on, the piece describes our current condition as:
A Crisis of Trust
I think it’s a crisis of laziness. But OK. He uses “trust” or “distrust” 14 times. His point, for the most part, is:
When AI-generated content becomes increasingly sophisticated and accessible, it blurs the line between authentic student effort and machine assistance. This uncertainty has fostered an atmosphere of suspicion rather than support.
Students, feeling constantly scrutinized, become defensive or disengaged, turning the educational process into a game of cat and mouse rather than a genuine learning experience.
No. The accessibility and sophistication of AI tools do not blur anything. Using them does. Using AI tools in academic work, especially where not allowed, is the problem. Amazing how the student agency in this situation is simply skipped. But we do get the result — the students feel defensive and disengaged. In this telling, the students are blameless victims of suspicion. Whereas, I think, if you don’t want to be in a game of cat and mouse, do not take the cheese.
I mean, I know that Mintz knows where the agency and initial transgressions lie because he ends his essay with this:
What is wrong is misrepresenting authorship—passing off AI-generated content or ideas as entirely your own. The line isn’t about whether you use AI; it’s about honesty, transparency and ownership.
Yes. Using these tools to misrepresent work is wrong. He knows. Yet, in his telling, the trust breach comes from the suspicion.
The first section of the essay also includes this:
This adversarial dynamic harms both parties. Faculty members consciously or unconsciously begin to see their job as policing academic integrity rather than enriching the learning environment. Meanwhile, students experience heightened anxiety, reduced creativity and a reluctance to experiment with their writing—fearing that any assistance might be misconstrued as cheating.
Policing academic integrity is the job of faculty. It is. Full sentence.
It says a great deal, in my view, that Mintz thinks it may not be, should not be, or that it may be an issue if it is. Tell me please, if faculty do not police integrity, who will? Who can?
Using “rather than” is pretty telling too. As if policing integrity won’t allow for an enriching learning environment.
Here’s one — I think you cannot possibly provide an “enriching learning environment” if you are not “policing academic integrity.” Allowing and rewarding cheating is not enriching to anyone. Not ever. Except Chegg. And OpenAI. But that’s the literal kind of enrichment.
Mintz continues:
To address this challenge, institutions must invest in robust, transparent policies and foster an environment where both students and faculty can embrace technological advances without compromising the integrity of the academic process.
Agreed. Easy to say, hard to do. Moreover, not “compromising the integrity of the academic process” without policing misconduct is an incomprehensible fantasy.
I do agree completely with Mintz here:
The essential problem is that faculty-student interactions are scant.
Fair.
The Detection
Mintz writes:
according to ZeroGPT, one AI-detection platform, 27.5 percent of a January 2019 piece titled “The Sociology of Today’s Classroom” was deemed likely to contain AI-generated text. Even more baffling, a post on “The Death of Comedy,” published eight days before ChatGPT’s release, scored 30.58 percent.
That is baffling. You probably need a better AI detector. That one is junk, mate.
He continues:
According to ZeroGPT’s AI detector, George Orwell’s classic 1936 essay “Shooting an Elephant” is mostly AI-generated—earning a suspicious 53.97 percent score. Apparently, Orwell was ahead of his time, cranking out reflections on colonial guilt with the help of a large language model in pre–World War II Burma.
But even that pales in comparison to Abraham Lincoln, whose Gettysburg Address registers as 100 percent AI-generated. Who knew ChatGPT had time-traveled to 1863 to help draft one of the most iconic speeches in American history?
Good grief.
I am so tired of this canard as a stand-in for actual thinking or understanding. I’ll use Mintz’s words from a bit further in this essay to explain. Mintz writes:
AI detectors flag text based on patterns—not provenance. They look for surface features often associated with machine-generated prose: highly formal, grammatically polished language, repetitive sentence structures, abstract or academic phrasing and the use of passive voice.
They also penalize content that resembles common online topics—especially those frequently used to train large language models.
Yes. AI detectors look for patterns — like the amazingly predictable text patterns in The Gettysburg Address. Or in a classic essay from a very well-known author, one that’s been quoted and written about repeatedly.
Easily predictable text, especially about common online topics, is used to train the models; therefore it’s what the AI would say, and therefore it’s the most predictable thing. So when an AI detector flags it, the flag is correct — not that the text was composed by AI, but that it’s so obvious and so predictable that a bot could have spit it out.
Let me try this. If you’re writing “we hold these truths to be,” and your next word is “self-evident,” the AI detector is going to ping, because that’s the most predictable next word in the universe. It does not mean the detector thinks AI wrote the Declaration of Independence; it means that the text in question has highly predictable word choices.
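For the technically curious, here is a minimal sketch of what that kind of predictability scoring can look like under the hood. To be clear, this is not ZeroGPT’s method or any commercial detector’s; it’s just an illustration using GPT-2 via Hugging Face’s transformers library, where lower perplexity means the text is more predictable to the model and therefore more likely to be flagged as “AI-like.”

```python
# Illustrative sketch only: NOT any commercial detector's algorithm.
# Scores text by how predictable it is to a small language model (GPT-2).
# Lower perplexity = more predictable = more likely a pattern-based scorer flags it.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token 'surprise' of the model; lower means more predictable text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss  # mean negative log-likelihood
    return float(torch.exp(loss))

# Famous, heavily quoted prose tends to score as very predictable,
# regardless of who actually wrote it.
print(perplexity("Four score and seven years ago our fathers brought forth on this continent a new nation."))
print(perplexity("My cousin's iguana refuses to eat anything except lukewarm oatmeal on rainy Thursdays."))
```

Run something like this on famous, heavily quoted prose and it will read as extremely predictable, which is exactly why the Gettysburg Address gets flagged. The score says “predictable,” not “dishonest.”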
Mintz knows this — the words about flagging text based on patterns are his. He also writes:
They aren’t detecting academic dishonesty. They’re just scanning for patterns. They don’t know whether a paragraph came from a chat bot, a college sophomore or a Pulitzer Prize winner.
Correct — AI detectors do not detect dishonesty. They scan for patterns, indicators that may initiate deeper inquiry. That’s all they do. They do not accuse, or damage, or anything of the sort.
And:
But these markers don’t prove anything about authorship. They merely suggest that a piece of writing looks like something an AI text generator might have written. That’s not detection—it’s guesswork dressed up as statistical certainty.
Exactly on both fronts. AI detectors detect when text looks like something AI might have written. To beat this horse, something very predictable and formulaic — like The Gettysburg Address.
And, I have to ask, what does Mintz think detection means?
Detection systems are guesswork — educated guesswork based on probability. A metal detector does not know if you found a silver coin. It pings that something at your feet has the signature of silver. You have to dig it up. A mammogram does not know if an anomaly is malignant. That’s what the biopsy tells you. A fire alarm does not signal a fire. It signals that the conditions it’s monitoring may indicate a fire. You have to investigate. Like literally every other detection system, AI detection signals possible targets — you have to dig them up and look at them closely.
This is not complicated.
This whole ‘the AI detection is guessing’ narrative is, ironically, just too fabricated for me.
Then there’s also the fact that this whole thing is constructed using ZeroGPT, which we already should have known was unreliable. But about which Mintz tells us:
For a fee, ZeroGPT promises to “humanize” your writing and render it undetectable.
Of course.
You mean ZeroGPT may be over-flagging human text as AI in order to sell you their “humanize” tool? Say it ain’t so. And ZeroGPT sells this product specifically to beat AI detection — the tools that he says are just guessing anyway? Hmm. That’s odd.
Maybe it’s worth asking whether ZeroGPT is a reliable player in this arena. Reliable enough, say, to be the foundation of an essay about trust.
Where I agree with Mintz again is when his essay gets into this:
We have entered a strange moment when fluency itself raises suspicion. The clearer, more structured and more coherent the writing, the more likely it is to be labeled artificial. The very qualities we’ve long celebrated as signs of strong writing—clarity, concision, logic, polish—are now treated as liabilities by algorithms trained to detect what “looks” like AI.
In an unexpected twist, the more clearly one writes, the more likely one is to be doubted.
And:
In fact, these tools often end up punishing precisely the kind of writing we should value: clear, structured, thoughtful prose.
I mean, I don’t think the “tools” are “punishing.” That’s another revealing subject/verb choice, one that tells us Mintz thinks the tools are the real problem.
But yes. Good writing can be seen as too good to be human. The real irony, and real danger, is that it’s not the machines, it’s the humans making those bad judgements (see Issue 352). And it’s the humans who can actually do some punishing.
That’s why students, to cover their tracks when they inappropriately use AI, insert incorrect grammar and spelling errors into their work. They make it look sloppy and unpolished because they know that AI usually won’t have those mistakes. They’d rather be thought of as poor writers and bad students than get caught not doing the work.
Anyway, Mintz is right that we’re moving to a dynamic in which quality written work that takes time and effort will be suspicious. No one can think that’s good.
But in my view, blaming the machines seems misplaced, especially the ones that detect AI. The agency is with the initial user — if, when, and how to use AI when writing, and what those users disclose about it. I do not agree that:
When AI detectors wrongly flag classic essays, student work or carefully edited prose as suspicious, they don’t just make mistakes—they chip away at trust. In classrooms, teachers begin to doubt their best writers.
It’s not the supposed inaccuracy of the detection that’s destroying trust. People are doing that.
I spent way too long on this. Sorry, not sorry.
I very much appreciate the work you do. It's extremely informative and helps represent important concerns.

I agree with you most of the time, but I don't share your negative view of GPTZero. It is designed to err on the side of false negatives instead of false positives, which does mean that it often misses AI work and categorizes it as human. On the other hand, I appreciate the level of caution from GPTZero because I am more concerned about false positives than false negatives in practice.

I can't speak to the specific instance cited in the Inside Higher Ed article, but I can say that I ran 100 old 2019-2019 (human) papers through GPTZero and the highest AI score I got was very low (well under 20%). My colleague ran 600 old papers through it and still hasn't gotten one score over 10% AI likelihood. What this means is that if I use a high threshold for detector results on a decent-sized student writing sample, I can be very confident that if a score is high, there is certainly some legitimate reason to assume that the student didn't do the work themselves. This gives me something objective as a starting point for holding students accountable.

That said, I am completely open to learning more about the pros and cons of different detectors and tips on using them effectively. This is a topic that really hasn't received enough attention. I didn't know that GPTZero had introduced a "humanize" function. That's awful.
The other point where I maybe differ from your perspective is that I really view students as pawns in a big game of financial speculation. I think we agree that companies like Chegg and Big Tech companies pushing AI on students are exploiting natural human tendencies (like the predictable tendency for students to procrastinate, then feel desperate) to fuel investment and try to gain market share.
As I wrote in a recent comment on another blog, "From a business point of view, what we have are AI companies building user numbers by tempting students away from the work of studying for grades, then using these growth statistics to convince businesses that they need to keep up with the users of the future (students), then trying to convince our institutions that we need to make our students AI-ready for future jobs."
I hope that the lawsuit against Chegg is successful because it will help set a legal precedent.
Hey Derek, did you see that OpenAI is making ChatGPT Plus free for students through May? I mean, that timeline is going to wreak havoc on colleges and universities as this semester concludes.