The AP Digs Into Cheating, ChatGPT
Plus, big moves in the business of academic integrity, two interesting stories out of Canada, and three new nominees for quote of the year.
Issue 236
To join the 3,542 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. New Issues every Tuesday and Thursday. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 14 amazing people who are chipping in a few bucks a month via Patreon. Or joining the 19 outstanding citizens who are now paid subscribers. Thank you!
AP Covers Cheating and Assessment with ChatGPT
The Associated Press ran a story recently on cheating in college with - what else? - ChatGPT, and efforts by professors to address it.
The coverage starts with the predictable, sharing the frustration of professors who are catching students cheating with AI text generators such as ChatGPT:
When philosophy professor Darren Hick came across another case of cheating in his classroom at Furman University last semester, he posted an update to his followers on social media: “Aaaaand, I’ve caught my second ChatGPT plagiarist.”
Friends and colleagues responded, some with wide-eyed emojis. Others expressed surprise.
“Only 2?! I’ve caught dozens,” said Timothy Main, a writing professor at Conestoga College in Canada. “We’re in full-on crisis mode.”
The piece reports a bit later that:
In his first-year required writing class last semester, Main logged 57 academic integrity issues, an explosion of academic dishonesty compared to about eight cases in each of the two prior semesters. AI cheating accounted for about half of them.
Fifty-seven in one class.
Beyond the fact that it’s the AP, and that cheating doesn’t often get such top-line attention, the article has a few other points worth flagging. For example, it name-checks Chegg:
Chegg Inc., an online company that offers homework help and has been cited in numerous cheating cases, saw its shares tumble nearly 50% in a single day in May after its CEO Dan Rosensweig warned ChatGPT was hurting its growth. He said students who normally pay for Chegg’s service were now using ChatGPT’s AI platform for free instead.
That’s delicate.
It also mentions a coming shift away from online assessments generally:
“There is going to be a big shift back to paper-based tests,” said Bonnie MacKellar, a computer science professor at St. John’s University in New York City. The discipline already had a “massive plagiarism problem” with students borrowing computer code from friends or cribbing it from the internet, said MacKellar.
And:
Ronan Takizawa, a sophomore at Colorado College, has never heard of a blue book. As a computer science major, that feels to him like going backward, but he agrees it would force students to learn the material. “Most students aren’t disciplined enough to not use ChatGPT,” he said. Paper exams “would really force you to understand and learn the concepts.”
Feels right. And I’m putting that quote from Takizawa on the shortlist for academic integrity quote of the year.
The story also shares:
Are AI detectors reliable? Not yet, says Stephanie Laggini Fiore, associate vice provost at Temple University. This summer, Fiore was part of a team at Temple that tested the detector used by Turnitin, a popular plagiarism detection service, and found it to be “incredibly inaccurate.” It worked best at confirming human work, she said, but was spotty in identifying chatbot-generated text and least reliable with hybrid work.
Man, it’s starting to really annoy me when people do not understand what these detectors do and how they work.
More from the AP story:
Will students get falsely accused of using artificial intelligence platforms to cheat? Absolutely. In one case last semester, a Texas A&M professor wrongly accused an entire class of using ChatGPT on final assignments. Most of the class was subsequently exonerated.
We covered this case in Issue 211. And please note the word “most” in the above.
What’s disappointing is that the paragraph about AI detectors and the paragraph about false accusations at Texas A&M sit right next to each other, giving the impression that they are related. In reality, the Texas A&M incident produced false accusations because the professor did not use an AI detector at all. Smushing the two together is sloppy and misleading.
I’m really happy to see misconduct get this kind of high-profile attention, and I’ll leave you with a cautionary tale, also from the AP story:
Arizona State University sophomore Nathan LeVang says he double-checks all assignments now by running them through an AI detector.
The point is supposed to be that students are “paranoid” about being falsely accused of using AI.
But it does not take a genius to see that students running their assignments through AI detectors before turning them in is a problem. I’d wager that 95% of students who do so aren’t checking whether they’ll be falsely accused; they’re checking whether their plagiarized work needs more changes to slip past detection.
I mean, who walks into a bank and says, ‘I want to test the security cameras because I don’t want to be falsely accused of being a bank robber’? I’ll tell you who wants to test bank security cameras - bank robbers.
Big Moves in the Business of Academic Integrity
There are two new business deals in academic integrity to share - one very big, one big.
The very big one is that, according to press announcements, Meazure Learning - which houses proctoring provider ProctorU - has acquired proctoring and assessment provider Examity.
I’m not sure exactly where these companies stood in market size and share, but I believe they were both in the top five. If a reader knows otherwise, or knows more precisely, please help me correct and clarify.
Either way, in the world of assessment security and delivery, that’s a big move. Sorry, very big.
The big news is that, according to public announcements, Inspera has hammered out a deal with proctoring provider Proctorio. Inspera, based in Europe, develops and delivers digital assessments. The deal will:
add Inspera’s new similarity and AI generated text detection to Proctorio’s suite of solutions to help prevent plagiarism.
Big.
If you know of comings or goings in the business of assessment, security, or integrity, and you’d like to see that knowledge shared here, please drop me a line.
Two Shareable Stories from Canada
One story out of Canada is pretty simple - according to a survey, most Canadian college students say using AI on coursework is cheating. From the story:
More than half (52 per cent) of Canadian students aged 18+ surveyed by KPMG are using generative AI to help them in their schoolwork, despite 60 per cent feeling that it constitutes cheating
That 52% is surely an undercount. Anyway, no real surprise.
And I love this bit:
Almost nine in 10 say they saw the quality of their schoolwork improve after using generative AI and nearly 70 per cent say their grades improved.
Their grades went up after they cheated? Shocker.
The other story is a bit deeper, though it doesn’t cover much new territory. The headline is direct:
Plagiarism cases growing at U of Manitoba as students increasingly turn to artificial intelligence
Again, shocker.
And it has two more nominees for academic integrity quote of the year, in what is shaping up to be a barnburner of a category. From the CBC story above, here’s one:
"If you're not going to put in that work, that has consequences. Maybe not right now, but one day, when you are an engineer having to build a bridge … you won't necessarily know those skills."
That’s from Francois Jordaan, an “academic integrity specialist” at the University of Manitoba.
He’s right, of course. But can we start by asking how it is that a college can issue an engineering degree to someone who doesn’t necessarily have the skills to do the job?
Maybe I’m off, but if he’s in charge of academic integrity at the university, preventing that from happening is his job. Passing it off with a “but one day” warning suggests he’s not doing it.
I mean, what are we doing?
But the other quote may be better:
"Google and YouTube are more confusing," said first-year information technology student Shutkrati Tyagi, who recently came to the U of M from India. "ChatGPT can give more specific information."
Seriously, what are we doing?
Class Note
As you may have noticed, but probably didn’t, there was no Issue of The Cheat Sheet this past Tuesday. I didn’t feel especially well and struggled to pull it together. My apologies. We’re usually reliable on a Tuesday/Thursday pace.