Chegg Collected $766.9 Million in 2022, But Profit Shrinks
Plus, Chegg stock slides again. Plus, a stock hype man says Chegg is "perfectly ethical." Plus, more bad media coverage.
Issue 189
To join the 2,945 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below:
If you enjoy “The Cheat Sheet,” please consider joining the 11 amazing people who are chipping in a few bucks a month via Patreon. And to those who are, thank you!
Chegg Reports $205 Million in Revenue in Q4 of 2022, Net Income Plummets
Chegg, the cheating provider, issued its earnings report last week and announced its Q4 2022 revenue was $205.2 million, with annual revenue of $766.9 million for 2022.
Net income, the company said, was $266 million for the year but - shockingly - only $1.9 million for Q4 - if I am reading that right.
As of the close of last week, Chegg stock had dropped another 25%.
This isn’t a new story (see Issue 68 or Issue 162). The stock has gone way down, then up again, now back down, though this drop feels different. I think, at long last, investors may be worried about a business model that caters to academic fraud.
Or more realistically, they may just be worried about bad business, because in Chegg’s outlook for 2023, they’re projecting further declines in revenue and earnings.
For example, Chegg reported pre-tax earnings of $74 million in Q4 of 2022 but projected pre-tax earnings for Q1 of 2023 at only “$53 to $55 million.” Even so, Chegg says its earnings will be essentially unchanged for 2023 versus 2022. I think they’re dreaming - that means telling investors that more than one third of their annual earnings will arrive in the last quarter of 2023. Sure.
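A quick back-of-envelope makes the point, assuming - and this is my assumption, not Chegg’s guidance - that Q2 and Q3 also land near the Q1 midpoint of $54 million. Call the full-year 2023 figure E (in millions). Q4 then has to carry more than a third of the year whenever:

\[
\frac{E - 3 \times 54}{E} > \frac{1}{3}
\;\Longleftrightarrow\; \frac{2}{3}E > 162
\;\Longleftrightarrow\; E > 243.
\]

So if “essentially unchanged” means a full-year figure north of $243 million, Chegg is quietly asking Q4 to deliver more than a third of its annual earnings.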
Of further note to those who care about academic integrity, Chegg reported it has eight million subscribers. For context, there are fewer than 16 million college students in the United States.
And of special note, as mentioned previously in Issue 162, Chegg is now almost entirely a cheating provider - selling answers on demand. This is the company’s CFO in its most recent earnings report:
As we enter 2023, we will no longer have significant revenue from Required Materials. Required Materials is now 100% revenue share based and we expect it to represent less than $5 million of revenue for the full year. Therefore, we are changing the way we report revenue to better represent what Chegg is now, predominantly a large subscription service with several other smaller product offerings, that while important, have yet to reach scale.
Required Materials is textbook rentals, which used to be Chegg’s business. Chegg is now “predominantly a large subscription service.” It is. Students pay subscription fees to get answers to assignments and tests.
Most of the business press, in reporting on Chegg’s declining fortunes, noted the threat posed by ChatGPT and other text-generation software. Chegg, they said, downplayed the risk of students getting ChatGPT answers for free instead of paying Chegg for them.
Here, I think Chegg is both right and wrong. It’s a mistake to get answers from ChatGPT, even for free, since they’re likely to be incorrect or simply made up. Answers from Chegg are probably better. But I don’t think it matters - most students likely don’t know or don’t care about the difference, and I can see a free answer service cutting deeply into Chegg.
In fact, a reader and well-connected friend of The Cheat Sheet e-mailed more than three weeks ago to say he thought ChatGPT was going to really hurt Chegg. He may have been right.
Chegg, even hobbled, remains a serious and persistent threat to teaching and learning. Raking in hundreds of millions of dollars a quarter, it’s not going anywhere. Not soon enough anyway. And it would be unwise to simply wait for Chegg to fade away.
Stock Trader: Chegg is “Ethical, Irreplicable, And Extremely Undervalued”
No, I have no idea what “irreplicable” is - but the headline of this story at the stock-hawking site Seeking Alpha says Chegg is that.
Sent in by an eagle-eyed reader, the story is quite funny.
Leading off, if you have to say the company you’re investing in is “ethical,” you’re probably giving the game away. What did Shakespeare say about protesting too much?
Anyway, our stock guru says that with Chegg:
if a student needs help with a problem, they can screenshot it or simply ask the Chegg community for help. In response, professionals will respond with the answers and explain how they arrived at the solution.
True. That’s what Chegg does. Students can take photos of their questions and pay Chegg for the answers.
He continues:
For some, this is considered cheating, because students can take their homework, post it online, receive the correct answer and then turn the assignment in for full credit.
Dude, yes. Paying for answers and then turning the assignment in for full credit is cheating. Not “for some.” Like, for everyone. That’s the actual definition of cheating.
Our stock pumper also gives us this absolute gem:
Some education professionals label this service as "cheating". However, I think these professionals need to look in their mirror and ask themselves:
"Why would students see a reason to resort to such a service? Are my teachings not good enough?"
The dude who wrote “irreplicable” in his headline asked teachers to consider if their “teachings” are good enough. Honestly, I think this person is perfect to defend Chegg. They should put him on payroll.
But wait. There’s more. He continues:
This leads to my argument that Chegg is perfectly ethical. In the current field of higher education, it doesn't really matter how someone masters the information, so long as they can perform well on tests. In my experience, current syllabi are weighted on 80%-100% from tests and exams. If one student chooses to abuse Chegg, then rest assured, he or she will not do well on the tests. Also, if a student believes resorting to Chegg is the only option, I believe this is a problem with the professor, rather than the student.
Beyond blaming teachers when their students cheat, our author misses that students can - and do - use Chegg on tests. All the time.
The writer’s bio says he’s “a full-time mechanical engineer and part-time MBA student.” I am awfully curious where, and not only because he says he uses Chegg personally. Though, he says he does it “in ethical fashion.” Sure.
Anyway, this particularly insightful stock trader wants you to know that Chegg is ethical and a great value. I’m sure you’ll rush right out and buy.
Bad Reporting on the Race to Catch Cheating with ChatGPT
There have been probably hundreds of articles about ChatGPT and academic integrity. Most are bland, fair, predictable. But occasionally, one stands out.
This one, from Times Higher Ed, does. It’s really bad.
It kind of pains me because Times Higher Ed (THE) has been unquestionably the best outlet in the world for covering academic misconduct. They do it often and usually well. This was an exception.
Anyway, this piece is mostly about the race to develop and bring to market a tool that can detect ChatGPT, giving teachers and schools insight into what may be an attempt to cheat. Nothing wrong with writing about that. I did (see Issue 184).
Unfortunately, the piece starts with one of those “wrongly accused of plagiarism” stories. According to THE’s reporting, the:
undergraduate had been horrified to find that her university had flagged the essay as cheating because 30 per cent of it matched with other sources, according to the ubiquitous plagiarism detection software developed by edtech giant Turnitin.
This triggered an accusation of plagiarism. Fair enough. But the reporting continues:
the problem was that Turnitin’s database contained thousands of such essays and the student’s teacher had “blindly” followed the plagiarism score and initiated disciplinary procedures. On this occasion, the mistake was easy to rectify by comparing the essay to the work that was said to have been plagiarised and judging whether the accusation was warranted.
The mistake was that the instructor did not review the passages that triggered the 30% match. So, the mistake was human, not technological. The technology worked. The person using it did so incorrectly. Why that example opens a story about detection technology - which worked in this case - I’m not sure. But even so, the reporting says “the mistake was easy to rectify.”
Nonetheless, we’re off and running with why tools that exist to detect and deter misconduct are bad. In this case, the bad will be when a student paper is - at some point in the future - flagged for likely being AI-generated. Then, says Dr Foltýnek, a lecturer at Masaryk University:
“There is no source document to verify,” he explained. “The teacher cannot prove anything, and the student cannot defend themselves. The only thing the teacher knows is that this particular sentence or passage looks similar to what AI would generate.”
I don’t see the problem. Just as a Turnitin report of 30% should be instructive and prompt inquiry, so should a flag of suspected AI-created text.
Ideally, and fairly easily, a teacher could ask ChatGPT to answer the question and compare the texts. Or, better yet, they would have done this beforehand. Or maybe the instructor can ask the student for an oral follow-up conversation, or ask them to sit for a brief hand-written exam. Or compare the flagged text to previous examples of the student’s writing. Or just ask the student about it. In cases we’ve already seen, a simple conversation can result in a confession (see Issue 176). None of those options is available without some alert, some insight as to a potential problem.
Way worse, the THE piece then goes on to quote Jesse Stommel, assistant professor at the University of Denver, who:
agreed that plagiarism detection tools had been “plagued” by false positives and there was no reason that the same will not be true of AI detectors.
First, no. There is zero evidence of this. And, again, when it does happen, it’s almost always human error and “easy to rectify.”
More importantly, a pro tip: when you’re writing a story about academic integrity, maybe don’t quote the guy who’s spoken at multiple Course Hero events (see Issue 137 or Issue 92). Or the guy who’s publicly defended Course Hero’s leaders. Same guy, by the way.
Stommel went on to tell THE:
This, he said, neglected the fact that “when students cheat, it’s usually unintentional or non-malicious” and such initiatives will only fuel “a culture of suspicion in education…driven all too much by corporate profit”.
Again, no. Students make mistakes. Those are learning opportunities, the very essence of learning. That’s not cheating. Cheating is getting ChatGPT to write your paper and turning it in. That’s not unintentional. That’s cheating. Conflating the two is dishonest.
And, not for nothing, the guy who regularly appears with Course Hero, the billion-dollar cheating profiteer, is complaining about “too much corporate profit.” Spare me.
The THE story rights itself somewhat by interviewing folks who are actually trying to stop cheating, to stop students from using AI to cheat. And by pointing out - yet again - that GPTZero, one of the new anti-AI tools, is not very good.
But that’s not nearly enough. Starting a story about detection technology with an “easy to rectify” human error, and quoting allies of cheating providers who minimize cheating, is bad. It just is.