OpenAI, Creator of ChatGPT, Partners with Chegg
Plus, Proctorio and CrossPlag sign a deal. Plus, a podcast interviews an opinionated academic integrity writer. Plus, using ChatGPT to cheat in Wales.
Issue 203
To join the 3,187 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 13 amazing people who are chipping in a few bucks a month via Patreon. Or joining the seven outstanding citizens who are now paid subscribers! I don’t know how that works, to be honest. But I thank you!
OpenAI and Chegg - The Future of Cheating That No One Wanted Is Here
GSV, which is an investor in cheating stalwart Course Hero and other misconduct makers (see Issue 80), recently hosted its annual conclave, which Arizona State University inexplicably co-brands.
At the event, GSV’s leaders, not content to back just a few cheating enablers, offered the spotlight of their main event stage to Chegg (see Issue 200). A few days later it became known that Chegg would share the stage with Sam Altman, the notable founder of OpenAI, the maker of ChatGPT and GPT-4. Those products, in case you’ve been on extended vacation in a place without Internet access or shortwave radio, have already proven to be a potent new source of nearly unlimited academic misconduct.
It was not as odd a pairing as some may have imagined.
For one, OpenAI’s detection system, which claims to be able to tell if text was created by AI, has been terrible. Not only does it not work well, it’s filled with disclaimers about how it does not work.
People who know and understand these things have told me that OpenAI’s detection system seemed intentionally designed to be awful. Many - though not all - independently developed AI detection systems are much better. And by better, I mean they actually work. If you are a cynic like I am, it’s not too hard to believe that OpenAI launched a defective detection system just to tarnish the idea of detection systems - a ruse that far too many in the media have swallowed.
To me, the potential coupling of OpenAI and Chegg would be a marriage of the world’s most potent plagiarism engine with the world’s largest cheating provider. For those who care about learning or integrity or the value of educational credentials, it would be a toxic twosome, to be sure.
And sure enough, with GSV’s blessing and pageantry, Chegg and OpenAI announced a new collaborative product, CheggMate.
I have not had time to get into it deeply, but it’s just as disgusting as you probably imagine. For one, it’s branded as “homework help, powered by AI” and everyone knows what that means. People say “homework help” when they mean cheating.
They call it “the future of learning.”
The website says:
CheggMate will combine the power of GPT-4’s advanced AI systems with Chegg's extensive content library and subject-matter experts
As mentioned, I have not tried to pull this apart yet. I am not sure I can, to be honest; it says there’s a waitlist. But a few things jump out already.
One of the first things I noted was Chegg saying the new product will include or feature:
Proprietary Library
Leveraging the billions of pieces of content and expert solutions in Chegg’s vast library
Proprietary - really? Fascinating. I think I hear the sounds of lawyers putting on battle gear (see Issue 55). I tell you one thing, if I found so much as one single phrase of my intellectual property on Chegg’s system, being sold without my permission, to facilitate cheating, I’d sue faster than their lawyers can spell C-H-E-G-G.
Maybe I’m biased by my own similar, almost-incident with Chegg twin Course Hero (see Issue 155 and Issue 158). But anyway, I do think that “proprietary” thing is going to get them in trouble. And fast.
The new venture’s website also says:
Trained by 150K+ subject matter experts who are quality control editors for accuracy
This time, the “trained by” sticks out. I think this likely means that Chegg turned over its paid-for test and homework answers to OpenAI - the ones that Chegg pays low-wage “experts” to answer on demand, usually in minutes. Chegg would own those, maybe. And assuming Chegg’s answer-sellers are right most of the time, it likely would improve OpenAI’s accuracy, which has not been great.
But again, if this is what happened, and a court finds that the answers to academic questions are protected as much as the questions are (see this in Forbes), turning those over to OpenAI could sink Chegg into deeper, hotter legal water. Maybe OpenAI too.
But that’s quite speculative.
Back to CheggMate, I also note that the website says the new bangle is or has an:
Adaptive tutor
Make learning easier with a real-time AI tutor that remembers your conversations, adapts to your learning needs and is always available.
Which is interesting because it more or less puts Chegg back in the tutoring business, though with bots. Chegg, the “education company,” closed its actual tutoring service a few years ago. Choosing between teaching and selling answers fast, Chegg chose the answer selling.
Two final thoughts. Fine, three.
I don’t really expect this Chegg/OpenAI deal to change Chegg’s bottom line much. Chegg already tries to serve customers its canned, already-answered results before engaging a human answer-seller. Using AI to make those faster or feel more friendly won’t necessarily limit their human capital expenses. Though it could open a new product line that Chegg can upsell. Still, I’m not sure how Chegg will make the case that paying Chegg for what people assume OpenAI can do for free is a viable long-term approach.
I also don’t fully get why OpenAI would publicly launch a joint venture with a company recently sanctioned by the FTC for massive data breaches that exposed millions of student records (see Issue 161). But I’m not counting that as one of my final three points.
Two is that I understand why many academic leaders and educators wanted to view ChatGPT, GPT-4 and OpenAI as a learning tool. I think it is. And will be. But it was also always clearly a cheating tool - and a massive, unprecedented one at that. Months ago, I called it CheatGPT.
Now, if nothing else, OpenAI’s role as a willing, open and invested partner in cheating is on the table. It has chosen sides. Since we have to assume that OpenAI is getting some financial reward from its Chegg deal, it’s now a cheating profiteer by definition.
And finally, though we don’t have visibility into the Chegg/OpenAI information sharing flow, it’s very possible that educators who welcome ChatGPT into their classrooms are fueling Chegg’s cheating engine, boosting its efficiency and profitability and indirectly undermining academic rigor on a global scale.
So, you know, good news.
On Cue: BBC Reports on UK Students Using ChatGPT on Graded Essays
The BBC reported recently that university students in Wales have admitted using ChatGPT for essays they are submitting for grades. The clear headline:
ChatGPT: Cardiff students admit using AI on essays
Yup. Nothing new.
From the article:
Cardiff University students said they had received first class grades for essays written using the AI chatbot.
The details are quite familiar:
Tom, who averages a 2.1 grade, submitted two 2,500-word essays in January, one with the help of the chatbot and one without.
For the essay he wrote with the help of AI, Tom received a first - the highest mark he has ever had at university.
In comparison, he received a low 2.1 on the essay he wrote without the software.
"I didn't copy everything word for word, but I would prompt it with questions that gave me access to information much quicker than usual," said Tom.
The BBC also reported that:
A recent Freedom of Information request to Cardiff University revealed that during the January 2023 assessment period, there were 14,443 visits to the ChatGPT site on the university's own wi-fi networks.
More than 14,000 visits in one month to ChatGPT from a school’s own network is something. But not compared to the University of Warwick, which recorded a stunning 764,461 ChatGPT visits the same month (see Issue 199).
The article also interviews another student, John, who used ChatGPT for academic work. The BBC reported:
Both students said they do not use ChatGPT to write their essays, but to generate content they can tweak and adapt themselves.
As for being caught, John is certain that the AI influence in his work is undetectable.
"I see no way that anyone could distinguish between work completely my own and work which was aided by AI," he said.
However, John is concerned about being caught in the future. He said if transcripts of his communication with the AI network were ever found, he fears his degree could be taken away from him.
"I'm glad I used it when I did, in the final year of my degree, because I feel like a big change is coming to universities when it comes to coursework because it's way too easy to cheat with the help of AI," he said.
If records of his interactions with ChatGPT were ever discovered, this student thinks he may lose his degree. Interesting. He also says it’s “way too easy to cheat” with ChatGPT. Interesting.
Meanwhile, Cardiff University offered the obligatory:
Cardiff University said it took allegations of academic misconduct, including plagiarism, "extremely seriously".
Uh huh.
And:
"Maintaining academic integrity is our main priority and we actively discourage any student from academic misconduct in its many forms."
Sure it is. Nothing better than active discouragement. Good job.
Proctorio and CrossPlag Ink AI-Detection Deal
Proctorio, a large and highly visible assessment proctoring company, and CrossPlag, a European plagiarism and AI-detection company, have created a partnership.
According to the news release, the arrangement will create:
a comprehensive academic integrity solution for educational institutions worldwide
As far as I know, this is the only provider that can offer remote exam monitoring, plagiarism detection, and AI checking in one place - so the “comprehensive” seems right.
More from this announcement:
To facilitate the partnership and ease administration and educator workflows, Proctorio and CrossPlag have seamlessly integrated their respective technologies, making it accessible to millions of educators and students around the globe. The combined solution is already available in various learning management systems, streamlining access and use for users of both platforms.
Millions of students and educators and already available in LMS. Okay then.
As the threats evolve and become more potent and more clandestine, the solutions were bound to evolve as well. Integrations such as these were probably inevitable.
It’s also worth mentioning that the announcement specifically says CrossPlag’s technology can flag AI-assisted paraphrasing of the kind done by Course Hero’s QuillBot:
CrossPlag's advanced technology not only identifies instances of plagiarism with high accuracy and low false positive rates but also detects AI-generated content, even when run through paraphrasing services typically used to avoid detection.
That’s big.
Podcast Features Opinionated Academic Integrity Writer
Tests and the Rest podcast, a show focused on college admissions and related topics, recently released an interview with a pontifical writer who covers academic integrity - like all the time.
It’s me. They interviewed me.
We discussed academic integrity in the reality of AI, cheating in general and what educators may be able to do about it.
If you’d like to listen, the link is here.