Judge: Chegg Must Face Cheating Claims in Court
Plus, University of Texas won't comment on cheating with AI. Plus, George Washington University to adjust integrity panels to avoid burnout.
Issue 280
Subscribe below to join 3,774 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 17 amazing people who are chipping in a few bucks via Patreon. Or joining the 33 outstanding citizens who are now paid subscribers. Thank you!
Federal Judge: Chegg Must Answer for Cheating, Their Misstatements
Scienter.
That’s what a federal judge in California ruled Chegg probably had when it made statements about cheating — claiming that misconduct was rare and that Chegg did not sell cheating services.
I looked it up. But I did not have to, because the Judge defined it in his ruling:
Scienter is “a mental state that not only covers intent to deceive, manipulate, or defraud, but also deliberate recklessness.”
Yikes.
Deceive. Manipulate. Defraud. Deliberate recklessness. Chegg.
The ruling was on Chegg’s effort to have an investor fraud lawsuit dismissed. Chegg lost. The case, the Judge said, will go forward. This is a different suit than the one in Connecticut that we covered in Issue 279.
For academic integrity, the ruling is massive: lawyers will now have the right to discovery, to depose Chegg executives, and to review documents and data. There was some legal coverage (subscription required).
The coverage, from Law360, carries two good summaries, including:
U.S. District Judge P. Casey Pitts said Monday the suit sufficiently shows the defendants acted with scienter by making false statements.
And:
A California federal judge said online education company Chegg Inc. must face a proposed class action alleging its share prices benefited from students using the platform to cheat during the COVID-19 pandemic, saying the investors sufficiently show the defendants knew share prices would fall when online schooling became less prevalent.
In short, Chegg executives dismissed cheating activities that were fueling the company’s growth and revenue. Instead, they told the public that Chegg’s profits were normal and part of a global shift to online learning. They said that they were making money because colleges were not reaching students correctly.
Even though some people believed that, it was never true. Chegg is a cheating engine that profited immensely from the lack of supervised instruction and assessment during Covid. Once instruction shifted back, it was inevitable that cheating would shift too. The lawsuit alleges that Chegg leaders knew this but said otherwise.
So, the big news is that it appears Chegg will have to answer for its cheating products and profits in court. There will be an inquiry and a public record. But in the meantime, some parts of the Judge’s order are simply exemplary. He ruled, for example:
the Court can draw inferences from plaintiff’s circumstantial evidence to find that cheating on the platform in at-home learning environments ultimately fueled Chegg’s growth.
Cheating fueled Chegg’s growth.
Yup.
And:
Though plaintiffs do not show the exact percentage of subscribers who were cheating on Chegg’s platform, they present compelling empirical evidence of substantial cheating during the class period
Substantial cheating on Chegg’s platform.
Yup.
Again, this ruling means that attorneys for the shareholders can query Chegg’s data and review its documents. That may give us a clear picture of how much cheating Chegg was selling, how long they knew it, and what they did about it — which has been nothing at all. Sorry, to be fair, they have made it easier to cheat with Chegg and not get caught (see Issue 152 or Issue 256).
One of the core findings from the Judge was that the shareholders sufficiently alleged that cheating was common on Chegg. For example, the Judge wrote (legal citations removed for ease of reading):
plaintiffs provide comprehensive evidence from various universities and faculty to support the allegation that cheating occurred nationwide using Chegg. For example, Dean Blandizzi at UCLA tied “raining cases” of academic integrity violations to Chegg, stating that the platform allowed students to post “actual questions of current homework assignments and exams.” An Assistant Professor at UCLA further wrote “my midterms have been compromised severely by Chegg.com. Nearly all questions have been posted, sometimes many times.” A Professor at Georgia Tech similarly stated that “the questions to my … take home exam were on Chegg.” Similar accounts of cheating via Chegg were noted by faculty at the University of Nebraska, with students “submitting answers completely copied from the solutions provided by Chegg”; at California State University, where a student “posted [an] exam to ‘ask an expert’ during the exam window”; and at the United States Air Force Academy, where 249 cadets purportedly used Chegg to cheat.
While the defendants may be correct that the cheating represents only a small fraction of the total students at these schools, the evidence demonstrates that a substantial number of students subscribed to Chegg were cheating via Expert Q&A. Statements by high-level faculty at various academic institutions similarly suggest that cheating on Chegg was quite common at these schools. A Director at Texas A&M estimated that 80–85% of the massive increase in cheating cases involved Chegg.
Yup. Cheating occurred nationwide with Chegg. A substantial number. What’s listed there isn’t 5% of it. You read The Cheat Sheet, so you know.
And Chegg knew it too. They denied it. Over and over again.
I’ll also share that the Judge cited a test by the plaintiffs’ attorneys that showed:
an empirical analysis by lead counsel showing surges in expert questions during the class period and finding that approximately 25% of these questions exhibited indicia of cheating.
That indicia, it’s specified later, were questions that included terms such as “final exam” and “graded assignment.” In other words, the 25% of questions that looked like cheating is certain to be embarrassingly low. I mean, 25% is scandalous. But my wager is that the percentage of questions asked on Chegg that were cheating is closer to 90%. Now maybe we will know.
The really juicy parts of the Judge’s ruling included this (citations removed):
In further support of their argument, plaintiffs also present testimony from former employees demonstrating that discussions about cheating on the platform were regularly held at company meetings. For example, FE1 noted that “our CEO would sometimes talk about [student cheating] during all-hands [meetings].” These meetings were attended by individual defendants Rosensweig, Brown, and Schultz. Another former employee noted that “executives distributed a Forbes article discussing widespread student cheating using Chegg to employees.” Referring to the Forbes article, FE7 stated that “people weren’t surprised” because “the [cheating] issues were absolutely well understood.” FE4 even noted that Chegg “didn’t really put any effort into stopping [cheating] because it was putting money into their pocket.”
And that:
plaintiffs allege that FE3 said that Expert Q&A was “the cheating business”
I don’t know what more I can say.
No way Chegg wants this to go to court. No way they want someone digging around in their e-mails and actual data. When the truth comes out, they won’t recover.
Giving Up on Texas
In Issue 274, I wrote about the shoddy reporting on academic misconduct over at Inside Higher Ed (IHE).
It was not the first time I’d pointed out such shortcomings at IHE. But in that update, I noted that the article in question reported that the University of Texas at Austin had apparently advised faculty not to use AI similarity detectors. Or, at least, that’s what IHE said.
Given that it was IHE, I wanted to check. So, I reached out to the media relations team at UT Austin to confirm or clarify whether they had advised or directed faculty not to check student papers for AI-generated text.
I e-mailed the University on February 13, followed up on February 26, and again on March 5. I called on February 27, again on February 28, and again on March 5, leaving voicemail all three times. On March 6, I e-mailed Emily Reagan, the:
Vice President, Chief Marketing and Communications Officer at The University of Texas at Austin
No one has responded to what was a yes-or-no inquiry on something as important as academic integrity. Your guess as to why is probably as good as mine.
Though I do know that American schools do not ever want to talk about cheating — not ever. When anyone asks about academic misconduct, it’s suddenly as if no one works at these schools. The most you ever get is a warm oatmeal statement about how seriously they take academic integrity.
In any case, I tried to see what was going on at UT. But I have no idea.
George Washington to Shuffle Academic Integrity Panels to Prevent Burnout
Amid a spike in the number of academic integrity cases at George Washington University, faculty and student leaders are considering revising the composition of integrity review panels to limit burnout among faculty.
From the coverage by the school’s student paper:
Senators also approved an addendum to the Code of Academic Integrity that temporarily revised the composition of academic integrity panels. Senators approved a measure during the meeting that defines a full academic integrity panel as three members — one must be a student and one must be a faculty member — from the pool of trained members. Before the change, a full panel consisted of two faculty and three students.
The Student Government Association also passed the temporary change at their meeting last Monday, meaning the addendum has been officially adopted into the code. Christy Anthony, the director of the Office of Student Rights and Responsibilities, proposed academic integrity panel mitigation strategies in December in response to a 313 percent average increase in reports of academic integrity violations when compared to the past two fall semesters.
Yes, you read that right: a 313% increase in cases.
The coverage continued:
Sarah Wagner, a faculty senator and professor of anthropology, said the temporary changes to the number of panelists is “not sustainable” for a long period because fewer faculty will be involved.
“There’s an intensity of the caseload which, combined with a limited number of panelists, means that this interim mitigation strategy will likely lead to, believe it or not, burnout,” Wagner said during the meeting.
The strain that academic misconduct places on institutions is almost never considered when people think about cheating. The burdens are real, significant, and increasing — not just in volume but also in complexity. That is something we should keep in mind the next time someone says that cheaters only cheat themselves.