344: Chegg for Sale
Plus, PwC Israel fined for exam cheating. Plus, accounting professors discuss cheating with AI. Plus, news bites from India.
Issue 344
Subscribe below to join 4,346 (+2) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 47 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month or $80 a year, and corporate or institutional subscriptions are $240 a year. Thank you!
Chegg Puts Itself Up for Sale
Unapologetic cheating enabler Chegg released its earnings report earlier this week and I almost feel bad for them.
Almost.
Company stock is now trading at just above a dollar a share, down from $113 a share when it was helping millions of students cheat their way through remote learning.
Well, the company is still trying to help students cheat their way through learning, remote and otherwise; it’s just that their customers don’t want to pay Chegg for test and homework answers anymore when ChatGPT and others give them away for free. Anyway, Chegg has been in existential trouble for some time now, and the fat lady is in the wings.
Most of the coverage of Chegg’s earnings was about the company’s announcement that they are suing Google. More on that in a moment. But the real news to me is Chegg’s (lack of) earnings and its blunt announcement that it is for sale.
The numbers
From the report, for the fourth quarter of 2024:
Total Net Revenues of $143.5 million, a decrease of 24% year-over-year
Subscription Services Revenues of $128.5 million, a decrease of 23% year-over-year
Net Loss was $6.1 million
3.6 million Subscription Services subscribers, a decrease of 21% year-over-year
Down, down, down. A $6.1 million loss. In Chegg’s last report, the percentage declines were in the teens (see Issue 324). Ninety days later, the declines are in the twenties. Chegg is accelerating into a brick wall.
Because this report was for Q4, it also included annual totals for 2024:
Total Net Revenues of $617.6 million, a decrease of 14% year-over-year
Subscription Services Revenues of $549.2 million, a decrease of 14% year-over-year
Net Loss was $837.1 million
6.6 million Subscription Services subscribers, a decrease of 14% year-over-year
Chegg also provided an “outlook” for the first quarter of 2025:
Total Net Revenues in the range of $114 million to $116 million
Subscription Services Revenues in the range of $104 million to $106 million
Scroll back up: Chegg’s net revenue for Q4 was $143.5 million. They’re projecting $114 to $116 million this quarter, and that’s if they hit their own guidance. They expect revenue from their “subscription services,” which is their cheating product, to fall from $128.5 million to $106 million at best.
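For the arithmetic-minded, here is a quick back-of-the-envelope check of that slide. This is a minimal Python sketch; the guidance midpoints are my assumption, not Chegg’s:

```python
# Quarter-over-quarter decline implied by Chegg's Q1 2025 guidance.
# Q4 2024 actuals vs. assumed guidance midpoints, in millions of dollars.
q4_2024 = {"Total net revenue": 143.5, "Subscription revenue": 128.5}
q1_2025 = {"Total net revenue": 115.0, "Subscription revenue": 105.0}  # midpoints

for label, q4 in q4_2024.items():
    q1 = q1_2025[label]
    drop = (q4 - q1) / q4 * 100
    print(f"{label}: ${q4}M -> ${q1}M, a {drop:.0f}% drop in a single quarter")
```

That works out to roughly a 20% drop in total revenue and an 18% drop in subscription revenue in a single quarter, on top of the year-over-year declines above.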
The sale
Faced with this reality, Chegg’s CEO said:
we are undertaking a strategic review process and exploring a range of alternatives to maximize shareholder value, including being acquired, undertaking a go-private transaction, or remaining as a public standalone company.
“Being acquired.”
Man, does that feel like trying to sell the Titanic after it hit the iceberg. Future deep-ocean reef for sale. Cheap. Inquire immediately.
The Google lawsuit
From the Chegg CEO in the earnings press release:
we announced the filing of a complaint against Google LLC and Alphabet Inc
Chegg is suing Google because, according to the release, as attributed to the CEO:
Google [launched] AI Overviews, or AIO, retaining traffic that historically had come to Chegg, materially impacting our acquisitions, revenue, and employees. Chegg has a superior product for education, as evident by our brand awareness, engagement, and retention. Unfortunately, traffic is being blocked from ever coming to Chegg because of Google’s AIO and their use of Chegg’s content to keep visitors on their own platform
Chegg outright blames Google for being for sale now:
These two actions are connected, as we would not need to review strategic alternatives if Google hadn’t launched AI Overviews
Sure.
According to Google AI Overviews, Google AI Overviews launched May 14, 2024. On that day, Chegg stock was trading at $4.68. In April of last year, before Google AI Overviews was a thing, Chegg’s CEO was gone and the company’s earnings were already in freefall (see Issue 291). In fact, Chegg began to report revenue drops in May of 2023 — a full year before the Google thing (see Issue 206). Back then, the company blamed ChatGPT.
My point is that I just don’t buy blaming Google. Nice try though.
The reason?
One particularly astute and plugged-in reader of The Cheat Sheet shared that the Google lawsuit may be aimed at inflating Chegg’s potential value for a sale. A major settlement or legal win could infuse Chegg with substantial cash, making the legal case a potential asset in Chegg’s pocket at a time it desperately needs one. This reader notes that comparable suits against Google are being reported as potential assets on other companies’ balance sheets.
Seems plausible, so I am passing it along.
This reader also notes that it appears that Chegg is not claiming that Google is stealing its content — its homework and test answers — but stealing Chegg’s internet traffic. As this reader correctly posits, it would be beyond ironic hypocrisy for Chegg to sue someone for using their IP without permission, since that’s more or less what Chegg has been living on for years — selling the answers to other people’s questions.
What Chegg really means
Even though the big news is Chegg’s admission that the party is over, I will spend just a bit more time on the Google thing because, in classic Chegg fashion, I don’t think they thought this all the way through.
For years, Chegg has indignantly insisted that it’s an education company, a kind of new way to learn, an always-on tutor of sorts. They claimed that the company did not just provide answers but that it gave valuable learning insights and step-by-step solutions. They’re still saying it. Here’s more on the Google nonsense from Chegg’s earnings release:
As we allege in our complaint, Google AIO has transformed Google from a “search engine” into an “answer engine,” displaying AI-generated content sourced from third-party sites like Chegg. Google’s expansion of AIO forces traffic to remain on Google, eliminating the need to go to third-party content source sites. The impact on Chegg’s business is clear. Our non-subscriber traffic plummeted to negative 49% in January 2025, down significantly from the modest 8% decline we reported in Q2 2024.
We believe this isn’t just about Chegg—it’s about students losing access to quality, step-by-step learning in favor of low-quality, unverified AI summaries. It’s about the digital publishing industry. It’s about the future of internet search.
There — “students losing access to quality, step-by-step learning.”
Here is what I think Chegg has not played out, however. With this argument, Chegg is saying that when people — Chegg’s potential customers — get the answer, they stop. What does this tell you about what those folks are looking for? What does this tell you about what they wanted from Chegg and what they did when they got to Chegg’s site?
Exactly.
When people (students) get the easy answer from Google, they are done. Chegg says that when Google just tells them the answer, it is “eliminating the need to go to third-party content source sites.” These are Chegg’s customers. They are answer shoppers, not hungry learners.
If the people in question actually wanted to learn, they’d keep looking, ask their teachers for help, make appointments at school learning centers, dig until they got it. But they don’t do that because they don’t want to learn; they’re Google-and-done. Chegg is just angry now because it can’t be the one-stop answer shop anymore.
Let me rephrase — Chegg is suing Google because the people who just want fast, easy answers aren’t coming to Chegg anymore. By their own admission, that’s at least half of Chegg’s inbound customer base.
By saying that this change — Google taking away Chegg’s easy answer seekers — has forced the company to consider a sale, Chegg has told everyone what its business model was all along.
As if we didn’t know.
As I wrote in May of 2023 — yes, nearly two years ago:
When a free answer site takes away your customers, it becomes very clear very quickly what you’re actually selling.
Whatever. All I know for sure is that I am not one bit sad about what is happening at Chegg, and that these final chapters will be fun to watch.
PwC Israel Fined $2.75M for Exam Cheating
According to news coverage in Compliance Week (which may be my new favorite publication on Earth), PwC Israel, the local arm of PricewaterhouseCoopers, has agreed to pay a $2.75 million fine for failing to stop cheating on internal exams.
From the coverage:
it allegedly failed to prevent widespread cheating on training examinations despite internal warnings to staff about an ongoing crackdown.
I have my head so firmly stuck in academic misconduct that I can forget the actual consequences of cheating. For example, that it does not stop at graduation.
We are not complicated creatures. When humans are rewarded for easy shortcuts, we will keep taking easy shortcuts. As I read somewhere, no one cheats in college for the first time. Likewise, I’ll wager my meager lunch money that none of the accountants and auditors who cheated on these exams were first-time cheaters.
I’ll also wager that they didn’t cheat because of stress, or unreasonable demands, or time pressure. I’ll wager that they cheated because it was easy and because they could.
Also from the coverage:
Big Four accounting firms have been no strangers to exam cheating scandals, with landmark settlements against two China-based PwC affiliates in 2023 for a $7 million total. In April 2024, KPMG Netherlands paid a record $25 million fine to the PCAOB, while Ernst & Young paid another record penalty, $100 million, to the Securities and Exchange Commission (SEC), in 2022.
These are corporate auditors, cheating on exams. My friends, I worry. Honestly. I do.
On Cue: Accounting Today Covers Accounting Students and Cheating with AI
Accounting Today has a lengthy story (subscription required) on accounting students and AI cheating.
The story includes quotes from several accounting professors, and I’ll mostly share those here.
Sean Stein Smith, a professor at Lehman College, said:
[cheating with generative AI] “is definitely something I have heard quite a bit about from my colleagues in the humanities and other fields, but is becoming an issue for accounting/finance classes as well.”
The article says Stein Smith:
has seen AI-guided cheating first-hand, especially in short-form essay assignments as well as when he requires students to perform financial analyses using specific ratios.
Continuing:
Douglas Carmichael, former chief auditor of the PCAOB and currently a professor at Baruch College where he teaches auditing, said he does not give any writing assignments that students could use generative AI to cheat on, but this doesn't mean they're not using AI to undermine the purpose of an assignment. Once students realize he has caught onto them, it has become less of an issue.
"I do ask students to submit at least one question before class on something in the text or recorded lectures they found difficult to understand or want additional information about," said Carmichael. "My experience in prior semesters was that about half of the students submitted a question that seemed suspicious to me given the language used and generality of the issue. The lack of specific reference to the topic in the text or recorded lecture was also apparent. These kinds of questions did not earn any credit and as word got out about that, use of ChatGPT is infrequent."
I added a comma in that last sentence for readability. But consider that a professor says that once he starts catching and sanctioning use of ChatGPT, it becomes less frequent.
More:
Even when students are not outright cheating, some educators have observed an unhealthy reliance on generative AI starting to form. Jack Castonguay, vice president of learning and development with Surgent as well as a Hofstra University professor who teaches advanced courses in accounting and auditing theory, has seen students struggling with understanding and communicating core concepts, due in part to their reliance on generative AI.
"We see the reliance significantly when they have to give a presentation or take an in-person exam. It's clear they have gotten to that point by using AI and can't apply the logic on their own," said Castonguay.
Castonguay continues:
“…it's a large problem now for client relationships and having conversations with this in practice. They need to look up everything and use AI as a crutch. Seminar discussions are like pulling teeth oftentimes for me."
More:
Richard C. Jones, a Hofstra University accounting professor and former technical staff member at the Financial Accounting Standards Board, said this is a major topic of debate and discussion among college faculty and administrators, noting that it seems to be brought up in nearly every meeting. It's obvious, he said, that students will use LLMs on assignments, so the challenge for faculty is to assign projects and papers that require students to actually demonstrate their knowledge versus just handing in a paper or presentation.
This one is my favorite:
Tracey Niemotko — a Marist University professor who teaches accounting and auditing as well as sustainability, taxation and forensic accounting — views AI as more of a tool than a cheating mechanism, pointing out how models can be used to expedite audit procedures or clear away the busy work that eats up the day of many professionals. Consequently, she is more sanguine about AI-guided cheating, noting that even if students do use AI in their assignments, the nature of the work makes cheating difficult.
"Even with electronic testing in the classroom, I do not see cheating as a concern overall. I think the accounting students are perhaps a bit more disciplined than most students, so I don't think they have the mindset to cheat. Even for writing assignments in my upper-level accounting courses, students may use AI to assist them, but they are required to write 'in their own words.' Overall, the majority do their own written work but may use AI as a tool to help them develop an outline or get them started," she said.
She does not see cheating “as a concern overall” because accounting students are “perhaps a bit more disciplined” and, in her view, don’t “have the mindset to cheat.”
Sure.
There’s also Abigail Zhang Parker, a University of Texas at San Antonio professor:
Her overall philosophy is that students can use generative AI to help with assignments but not on exams, as that is when they're tested on their actual understanding of the topic. So long as it is only used for assignments versus exams, she does not consider using AI to be cheating. It would be impractical to prevent the use of AI entirely anyway, she added, so it's better for educators to find ways to use it too. However, she noted that teaching students proper use of AI can itself present a challenge.
Teaching students proper use of AI can present a challenge. Interesting.
Zhang Parker thinks she may increase the portion of class grades that comes from an in-person presentation assignment, saying:
“…once students know that they will be mainly graded on their own performance, they are more incentivized to think through the problem than simply over-relying on AI."
Imagine that.
More — like I mentioned, it’s a long article:
Castonguay, from Hofstra, voiced concerns that over-reliance on AI is eroding the critical thinking and reasoning skills needed to properly evaluate these answers in the first place. He does an exercise in class where students have ChatGPT summarize a FASB ASU and review its findings. Some, he said, don't even know where to start as they have obviously been relying on ChatGPT to understand it at all.
"My bigger concern is [that] by such a reliance on AI they will lack the critical thinking and synthesizing skills that are still valued even with AI. To use a sports analogy, they are only bowling with gutter guards — what happens when those aren't there?" he said.
Good analogy.
Quick Links and News Bites from India
I’ve said before, academic cheating in India needs its own newsletter. It’s an entirely different level. Like I cannot even explain.
Usually, I can’t get to any news from India because there’s just too much, too often, and I cannot keep up with what’s happening there on top of what’s happening closer to home.
Anyway - shut up and share the news. Here are some quick bites:
Two students were caught cheating using ChatGPT in a CBSE board exam. The CBSE (Central Board of Secondary Education) administers India’s national secondary school exams
A student was shot dead and two were injured over cheating in exams
Seven detained for cheating in Tripura University recruitment exam
Principal, teacher caught preparing cheating answer sheets for students
Eight students caught cheating during HSC exam - the high school graduation exam
Fourteen impostors were held and nine others were caught cheating on the first day of UP (Uttar Pradesh) board exams, the state-level high school graduation exams. This story says the state deployed 10,000 surveillance cameras to monitor exam rooms.
That’s just from this past week.
"Her overall philosophy is that students can use generative AI to help with assignments but not on exams, as that is when they're tested on their actual understanding of the topic. So long as it is only used for assignments versus exams, she does not consider using AI to be cheating. It would be impractical to prevent the use of AI entirely anyway, she added, so it's better for educators to find ways to use it too. However, she noted that teaching students proper use of AI can itself present a challenge."