Chegg's Amazing Australian Triple Tumble - TikTok, 77% and TEQSA
Plus, some really good writing on the ChatGPT assault. Plus, more solid media coverage out of western New York.
Issue 234
To join the 3,517 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. New Issues every Tuesday and Thursday. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 14 amazing people who are chipping in a few bucks a month via Patreon. Or joining the 18 outstanding citizens who are now paid subscribers. Thank you!
Chegg in Australia - A Shocking Simultaneous Play, with Three Moving Parts
There simply wasn’t enough room in either headline for the meat in this must-read story from the Sydney Morning Herald (SMH).
Oh my.
This play has three plot lines.
Plot One
As you read this over, keep in mind that plot one is that Chegg is under public inquiry by Australia’s higher education regulator, TEQSA, the Tertiary Education Quality and Standards Agency (see Issue 210).
That Issue, by the way, has the e-mail address where you can tell TEQSA your thoughts on Chegg, should you have any. It’s:
concerns@teqsa.gov.au
As a reminder, TEQSA has the power to block internet access to academic fraud providers nationwide, as it has already done for some 150 essay mills (see Issue 142). Obviously, should TEQSA deem Chegg a cheating site, which it is, that would be very bad for Chegg and very good for academic integrity.
That’s plot one. Keep it in mind as you read plots two and three - both are going on while plot one is active and unfolding.
Plot Two
The Sydney Morning Herald reports that Chegg has hired Australian TikTok influencers to market its products to students. The opening graph:
A study help website linked to hundreds of contract cheating allegations is targeting Australian students through sponsored social media posts, including some filmed by influencers on university campuses.
Filmed on university campuses. Wow.
The reporting says the influencers, “each have between 1.9 and 2.2 million followers” and are “promoting Chegg.com’s Study Pack product.” That’s the answers-on-demand service that is Chegg’s core business.
The SMH also reports that at least one of Chegg’s new paid influencers:
charges between $5,000 and $12,000 for TikTok packages.
Cheating companies paying social media influencers is not new (see Issue 196). It’s not even new for Chegg, which has paid student athletes to promote its wares (see Issue 111). So has cheating provider Quizlet (see Issue 123).
So, paying influence peddlers to sell and legitimize your services - not a new deal. Literally doing it on a college campus, that’s gutsy. Then doing that while the national regulator has called for input on your company, that’s another level altogether. That’s hopping on eggshells.
Plot Three
While Chegg is shelling out for influencers and TEQSA is investigating, the SMH also has what I think is rather shocking new reporting regarding the University of New South Wales:
According to its 2021 academic misconduct report, 257 of the 335 cases of contract cheating involved students posting questions to be answered on Chegg.com.
Crikey.
That’s 76.7%.
I mean, I knew Chegg was leading the universe in facilitating and profiting from academic misconduct, but 77%? Even I am stunned.
More from SMH:
The University of NSW said that three in four cases of contract cheating – when students outsource their work – were linked to the website in 2021, while the University of Sydney says it was facing increased reports of cheating through the website.
If you’re Chegg, I cannot fathom why you’d want that in the paper right now. But that’s what happens when you pay social media stars to sell your services - newspapers ask questions. Or at least they do in Australia.
So, to summarize, if they were trying to be banned in Australia, Chegg could not have planned this any better.
Odds & Ends
To their further credit, SMH also quoted an actual expert in academic integrity, Phillip Dawson of Deakin University, who said:
Chegg was working hard to position itself as a legitimate study help website.
“I think there is a danger in marketing themselves as a legitimate support service that students think using them is OK,” he said.
True, they are. And true, there is.
SMH says Dawson continued:
that while Chegg could not currently be labelled a cheating website, it was not disputed that a lot of students used Chegg to cheat.
It is not disputed that a lot of students used Chegg to cheat. And I understand that academics don’t want to be in the business of calling out specific companies. So, I’ll do that for Phil - label or not, Chegg is a cheating website.
Meanwhile, Chegg told SMH:
it takes academic integrity seriously
And it repeated that:
it is a legitimate study help website and that the “vast majority” of its subscribers use it to learn.
This isn’t the first time Chegg has used this “vast majority” line, to which I always ask - how do they know? I mean, if Chegg knows who is using the service for legitimate uses, they know who is not. Right? And if that’s the case, what have they done about it?
It’s rhetorical.
In any case, Chegg may have really stepped in it in Australia. Let’s hope so. And serious kudos to SMH for some great reporting.
More Good Writing, Coverage on Academic Integrity
The Hill isn’t necessarily the place you’d look for good coverage on academic integrity. And technically, this piece is an opinion offering and not news coverage - but still, it’s pretty good.
It’s by Mark Massaro, who teaches “composition and literature courses at a state college in Florida.” He says:
educators are facing an existential challenge in the form of a war against generative artificial intelligence (AI) technology.
With flair befitting a composition teacher, he adds:
AI has infected higher education like a deathwatch beetle, hollowing out sound structures from the inside until the imminent collapse.
Of colleagues, he writes:
One colleague said that students are now taking their AI-written essays and running them through a “rephrasing” generator, which rewords the uploaded essays with synonyms to mask the original computerized nature of the product.
They are indeed. If you’re not familiar with it, Google Quillbot, which is owned by Course Hero (now Learneo). And this process does confuse many AI similarity checkers.
Massaro continues:
An older educator, the one who barely uses the required LMS software, said that she’s merely requiring the students to sign formal pledges, promising not to cheat.
Probably worse than useless, actually.
He gives this example too, of what teachers are facing:
Back in the classrooms, we are alone in the fight, with only our deductive skills and undeveloped AI detectors to guide us. Over the summer, one of my students submitted an essay, ironically enough, about the battle against AI in academia. The student cited information from articles published in the New York Times and Rolling Stone, complete with author names, in-text citations, publication dates and solid MLA-formatted entry on the works cited page. It was beautiful. The only problem? One Google search proved that those articles and authors never existed. When confronted over email, the student quickly began to send random articles with different titles and authors and offered an elaborate story about how the articles saved “wrong” on her computer.
I feel guilty sharing more of it, so please go read it. Give The Hill a click for publishing it.
The last of it I will share here is this:
Dangerously, the outcome in these situations comes down to the effort and experience of the individual educator. This creates the possibility of cheaters slipping through and honest students being accused of cheating. This is unmitigated chaos, and there are no solutions being offered other than to use our best judgment.
I’ve got 2,500 words on why I think this coming down to the “effort and experience of the individual educator” is a good thing, should anyone want it. But either way, he’s right that this is chaos. We are not in a good place. And the path ahead looks no more promising.
Buffalo News on ChatGPT and Cheating
The Buffalo News also has a long, deep story on the use of ChatGPT in higher ed, including several sections on misconduct.
It’s worth a review because it’s good reporting from a local paper - the kind we don’t see too often anymore. And also because it’s a great illustration of why and how generative AI is a really tough nut to crack, even when you want to. And some people simply don’t want to.
From the coverage of local, western New York colleges and universities:
an economics professor at D’Youville noticed during the start of spring semester earlier this year that students would typically submit one to three paragraphs with some depth for his discussion board assignments.
By week four, some of those students were submitting five to six paragraphs of perfectly worded text that was far more thorough.
“I was like ‘Whoa! Either all of these students are acing my class and they’re doing fantastic and I have to be teaching this extremely well,’” he said. “‘Or, they are really using a lot of ChatGPT.’”
The professor, the reporting says, started to ask if AI-generated text was covered in the college’s integrity policies. It was not.
The Buffalo News also re-reports, and misreports, coverage from the Washington Post:
that a popular detection tool, in place at more than 10,000 secondary and higher education institutions, can falsely accuse students of using AI-generated text for their assignments.
Nope. AI detection systems do not accuse students. If that is happening, there is a serious problem. Nonetheless, it’s a myth that, as the News reports, is shared by several people at the local schools, including:
Daniel Higgins, director of journalism programs and an assistant professor at Canisius University, said he has no idea about how to detect AI-generated text when grading student submissions. He knows detection tools can lack reliability, so he doesn’t use them, and discourages colleagues from doing so, too.
“I don’t even know if I will try to figure out if students cheated using AI,” Higgins said. “I don’t want to spend a lot of time being a detective. I want to teach, and that’s not teaching.”
I guess it’s one thing to not understand how AI similarity systems work or to not learn how they need to be used. It’s another thing entirely to not care. Read his quote again. He doesn’t check for AI - in journalism! - not because the tools are unreliable, but because he doesn’t want to spend the time.
Nice.
But the story continues that, at the University of Buffalo:
Kelly Ahuna leads such detective work.
Ahuna, director of the UB Office of Academic Integrity, and her team have cracked down on misuse of chatbots for cheating on assignments.
She finds that faculty members are generally good at detecting AI-generated text in assignments
When they want to spend the time, that is. I know that was a different school, but it upset me.
There’s more. It’s good journalism. And I will leave you with this, from the article:
“It helped me pass high school,” said Roberto Jimenez, an incoming UB freshman. “It was basically my personal writer.”
In high school, Jimenez used ChatGPT for a last-minute essay in his economics class that he did not want to take the time to type out completely, he said.
Instead he typed the essay question into ChatGPT with what he had written so far. The program completed the rest of the essay and he edited the AI-generated text to sound like him before submitting the essay to his teachers.
He used ChatGPT because “he did not want to take the time.”
There’s a journalism class over at Canisius University that Mr. Jimenez should explore. I think he’d really get along with his professor.