Market (Finally) Starting to Bet on Chegg Implosion
Plus, a note from UpGrad. Plus, Inside Higher Ed does a disservice to integrity reporting. Again.
Issue 311
Subscribe below to join 4,131 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 39 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
Paid Subscription Pitch, The Far Less Interesting Sequel
In the last Issue, I threw out the regular, annual pitch for paid subscriptions, which I've been doing about this time every year.
Well, it did not work. We picked up approximately zero new subscribers, paid or otherwise. So, rather than believe that's because no one thinks academic integrity, or my coverage of it, is worth supporting, I believe the lack of response is due to where I placed the pitch: at the bottom of the Issue, in the Class Notes section.
So, I’m trying a new approach — putting the annual shameless pitch up here.
Right now, I’m honored to have 39 paid subscribers to The Cheat Sheet. And another 16 people contributing on Patreon. Together, that’s about 1.3% of our total subscriber base and less than 1% of the total readership. To be honest, maybe that’s a great rate. I’m not sure.
But if you support what we’re doing — trying to do — and can swing the $8 a month for a paid subscription plan, I ask you to give it some thought. Your support is appreciated. Here are some links to chip in, if you are willing and able:
Now, back to your regularly scheduled programming. I apologize for the interruption.
Market Starts to Place Bets in Favor of Chegg Collapse
According to this report from MarketBeat, more and more investors are starting to put their money on a continued freefall of the giant cheating provider Chegg.
In case you’re new to following Chegg, the company started out as a textbook rental company and, in the process, became listed on the New York Stock Exchange. But now, Chegg earns nearly all of its enormous revenue by selling answers to homework and test questions. That, by the way, is what some of us call cheating.
During the pandemic, when cheating spiked due to online classes and a lack of exam supervision, Chegg stock reached $113 a share. Today, torn apart by legal challenges, checked by better cheating awareness and exam security, and threatened by generative AI, which gives away homework and test answers for free, Chegg is trading at a measly $1.84 a share. That’s an epic decline of 98%. And frankly, I am surprised anyone is paying even $1.84 a share.
But that’s not the news — increasingly, more investors are betting that Chegg’s value will go even lower, shorting the stock. From MarketBeat:
Chegg, Inc. saw a significant increase in short interest in August. As of August 31st, there was short interest totalling 13,960,000 shares, an increase of 6.9% from the August 15th total of 13,060,000 shares. Approximately 14.2% of the company's stock are sold short. Based on an average trading volume of 4,230,000 shares, the short-interest ratio is currently 3.3 days.
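For anyone who wants to check MarketBeat's arithmetic, here is a minimal Python sketch (their figures, my math, and not investment advice; the short-interest ratio is simply shares sold short divided by average daily trading volume):

# MarketBeat's reported figures
shares_short_aug_31 = 13_960_000
shares_short_aug_15 = 13_060_000
avg_daily_volume = 4_230_000

# Percentage increase in short interest over the two-week period
pct_increase = (shares_short_aug_31 - shares_short_aug_15) / shares_short_aug_15 * 100
print(f"Increase in short interest: {pct_increase:.1f}%")  # ~6.9%

# "Short-interest ratio," i.e., days to cover at average volume
days_to_cover = shares_short_aug_31 / avg_daily_volume
print(f"Days to cover: {days_to_cover:.1f}")  # ~3.3 days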
Yikes. The sharks are circling the boat.
I don’t give investment advice. And I do not and have never owned Chegg stock. But my 184 cents on the increase in short positions is that those rushing to cash in on Chegg’s humiliation are too late. They should have shorted the stock two years ago, about the time I wrote that Chegg was being dishonest and that their days at the top of the stock market were over.
I’ve also said, and say again, that Chegg has market value. Their vault of millions of homework answers is valuable to AI companies that want to continue feeding their models new information. Though, the Pearson legal challenge may complicate that (see Issue 55).
Anyway, I guess it’s good to see investors finally starting to realize that Chegg is in real, probably terminal, trouble. Though it’s still deeply disturbing that so many investors were just fine with the way Chegg was making money while it was making money. But now that AI is the handwriting on the wall, they are looking to squeeze profit from the death rattle. Shameless.
Interestingly, one investment house that still doesn’t care how Chegg makes its money, and that does not seem to believe Chegg is circling the drain, is Boston-based Acadian Asset Management, which MarketBeat says actually increased its position in Chegg recently:
Acadian Asset Management LLC increased its holdings in shares of Chegg, Inc. by 11,517.3% during the 2nd quarter, according to its most recent filing with the Securities and Exchange Commission. The firm owned 2,154,885 shares of the technology company's stock after buying an additional 2,136,336 shares during the quarter. Acadian Asset Management LLC owned 2.11% of Chegg worth $6,807,000 as of its most recent SEC filing.
Yikes.
Acadian isn’t the only fund buying Chegg, even as the bets on failure pile up. You have to wonder what it is they are thinking.
How to Get a PR Team Fired
So, a funny story.
A few days ago, Phalgun Kompalli, a co-founder of UpGrad, sent me a message on LinkedIn asking if I was available to meet him in NYC. I said I was. But I also asked Phalgun whether Dan Rosensweig, the former CEO of Chegg, was still on the Board of Directors at his company.
He did not reply.
The thing is, people reach out to me somewhat regularly to talk about their companies. But it’s really odd that anyone would let someone from UpGrad contact me. I mean, it’s not just that UpGrad has welcomed the former head of the world’s largest cheating provider. It’s that I’ve outright linked UpGrad to some very suspicious cheating connections and practices several times.
In Issue 272.
And in Issue 194.
In Issue 231.
And Issue 208.
Over at Forbes, I even wrote a few words on UpGrad’s built-in feature that lets students get plagiarism scores on their work before they turn it in:
upGrad apparently also has a feature in its learning management system that allows students to check their work for plagiarism before they submit it. The provider of the pre-check service for upGrad says that with it, “[students] are alerted if the similarity percentage is too high and can update their assignment accordingly.” There’s really only one reason a student would want to update an assignment to bring down a plagiarism score, or even need to check in the first place.
Let’s just say that upGrad has not been a beacon of academic rigor, integrity, or quality.
Yes, in UpGrad’s system, students “are alerted” if their papers are plagiarized too much. So they “can update their assignment accordingly.”
The provider of the pre-check service, by the way, is CopyLeaks, which also partners with Chegg and Course Hero (see Issue 208 linked above).
Anyway, I am surprised to be on UpGrad’s outreach list. In his message, Phalgun wrote:
I read a few of your articles and they are very interesting and counterintuitive.
Maybe he should have read a few more.
IHE on Using AI on Multiple-Choice Tests
Inside Higher Ed, which, I must underscore, has a really bad record and worldview on academic integrity, recently ran a piece on a professor at Florida State University who has developed a system he says can pinpoint the use of AI to get answers to multiple-choice tests.
It has some tidbits — and errors — worth sharing.
On the core issue of using AI on multiple-choice tests, the FSU professor says ChatGPT isn’t great at it:
The researchers found patterns specific to ChatGPT, which answered nearly every “difficult” test question correctly and nearly every “easy” test question incorrectly.
And:
“ChatGPT is not a right-answer generator; it’s an answer generator,” [Professor Kenneth] Hanson said. “The way students think of problems is not how ChatGPT does.”
Love that.
IHE also says:
If a student wanted to use ChatGPT to cheat on a multiple-choice exam, she would have to use her phone to type the questions—and the possible answers—directly into ChatGPT. If no proctoring software is used for the exam, the student then could copy and paste the question directly into her browser.
I mean, true. But there are much easier ways to do that than typing the question into your phone. And, if there’s no proctor or lock-down browser, why use ChatGPT? Just Google it. Seems like an incongruous point.
The FSU professor says, even though his detection systems work, he does not find the practice “feasible” given how much effort it takes. So, OK. Not sure why this is news.
But then there are the errors. Once again, and for the millionth time, IHE does not quote a single expert on academic integrity or on the misuse of AI. It’s an omission that happens so often that it’s implausible to accept as carelessness.
The lack of information from people who actually know this stuff leads IHE to publish quotes such as this one from Professor Hanson:
Most cheating is a by-product of a barrier to access, and the student feels helpless
Hanson is a chemistry professor. And he’s wrong.
IHE also quotes the good professor:
pointing toward research that suggests students aren’t necessarily cheating more with ChatGPT. “There’s a certain percentage that cheats, whether it’s online or in person. Some are going to cheat, and that’s the way it is. it’s probably a small fraction of students doing it, so it’s [looking at] how much effort do you want to put into catching a few people.”
Two things.
One, according to the research the professor is citing, the “small fraction” is:
some 60 to 70 percent of students said they had recently engaged in cheating
Second, that’s from a self-reported survey of just 40 high schools. And since self-reported surveys undercount illicit behavior, by as much as 2.5x, the 60-70% figure is a low estimate. Multiply 60-70% by 2.5 and you blow well past 100%, which is impossible; the realistic read is that the true number is 95% or more. That ceiling is also why the survey in question did not show an uptick in reported cheating with ChatGPT: the dial cannot go past 100. This is not Spinal Tap. See Issue 261 for more on this.
And, fine, third — this idea that it’s not worth the effort to catch “a few people” cheating is difficult to accept. I wonder what percentage of academic and credential fraud is worth the effort. At what point would this professor say it’s worth his time to protect the honest students from having their work and scores degraded by dishonesty? Twenty-five percent? Fifty? Remember, the survey data he cited already put the number at 60-70%. But sure, “a few people.” Go ahead and believe that so you can rationalize doing nothing.
But the real malpractice here is, once again, Inside Higher Ed, which provides no context or expertise to help readers understand that these assertions are downright wrong.
If it did not happen all the time, I would not believe it.
One more thing on the multiple-choice scenario: in fact, the student would not need to type in the questions and answers at all. GPT-4o, Claude, Gemini, and Copilot can all read text from an image reliably. The student would just need to take a picture of the test and ask for the answers. I see it all the time.