Cheating and Plagiarism May Not Be "Useful Categories of Pedagogic Concern"
Plus, why bad research matters. Plus, a student in Colorado on why cheating is a bad idea.
Issue 257
NYU Abu Dhabi and, You Know What? I Give Up.
It took nearly a full year since the debut of ChatGPT and the ongoing brushfire of babble about it, but I have found the absolute worst piece of academic writing on academic misconduct and AI-generated text.
It’s so bad that I’m strongly tempted to declare the competition for 2023 Academic Integrity Quote of the Year over and crown a winner right now.
I’m referring to this bylined opinion piece published in Times Higher Ed (THE) by Mariët Westermann, Vice-Chancellor of New York University Abu Dhabi.
First, credit again to THE for opening their pages to this topic. Of the big three publishers in higher ed, THE is standing alone on the topic of academic integrity.
Whoops
To kick things off, Westermann slogs through a year’s worth of history of generative AI and academic actions and reactions. In that, she writes:
This spring, Turnitin started building the tool GPTZero into its standard product.
No.
I mean no.
This is a basic error of fact that unveils deep ignorance.
GPTZero is its own company, in no way affiliated with Turnitin. We’d be on better ground describing them as competitors, though even that is misleading in that GPTZero’s AI detection systems are flatly awful, as has been shown over and over and over again (see Issue 250 for one example).
A simple Google check or even Wikipedia would have shown that this statement is wrong. I don’t know how it passed editors at THE. But more on point, how does someone presented as an authority on this topic not know this?
The simplest answer is that she is not an expert.
I mean, I guess it’s kind of understandable that perhaps a Vice-Chancellor of New York University Abu Dhabi would not understand the difference between two very different companies. Maybe. But if that’s true, here’s an idea — maybe don’t try to write about it in public.
This gross display of ignorance is on par with someone writing in Bon Appetit magazine that The Palm Steakhouse started getting its steak from the day-old bin of the Burger Chef in suburban Akron, Ohio. Any semi-informed reader would have serious and reasonable questions. And not about The Palm.
Frankly, I was so stunned to see it that I had to double-check, on the off chance that I had lost contact with reality. Good news, I am not nuts. She is just wrong.
Repeated, Disputed
Anyway, having demolished her credibility on this topic, she continues:
As other universities also learned, the NYUAD researchers found that both tools returned too many false positives to make them a reliable weapon against deceptive use of AI in assignments. Moreover, further experiments confirmed that it was easy to fool the detectors in the other direction: adjusting ChatGPT answers just a little to make them more humanlike or idiosyncratic almost always returned a false negative result for an entirely AI-generated product.
These assertions are a bit more subjective, to be sure — what’s “too many false positives,” for example? What’s “reliable?”
Even so, the evidence to support the “false positive” assertion is pretty well lacking. GPTZero, sure. But other, better detectors have proven quite good at telling the difference between real human writing and AI-created text.
From the study we covered in Issue 250:
all 14 detectors [tested in the research study] correctly identified human-written text as human-written text with 96% accuracy. Ten of the 14 systems were a perfect nine for nine. Only Compilatio (8), Winston AI (7), GPTZero (6), and PlagiarismCheck (8) recorded incorrect results on human writing.
Again, GPTZero.
As for evasion efforts fooling detection systems, as we’ve also covered, that’s not entirely true either. Most good detection systems can spot AI text even when users try to cover their tracks.
From the research we went over in Issue 253:
Turnitin correctly identified 91% of the AI-generated papers, despite the efforts to avoid it.
Keep in mind that the same study found that humans identified the same AI-generated papers with only 55% accuracy.
In other words, at best, the idea that there are “too many false positives” and that it is “easy to fool the detectors” is opinion. Sure, it’s an opinion piece. But though it’s presented as obvious and repeated all the time, the claim is — most generously — disputed.
Then There’s This
The Vice-Chancellor also writes:
If generative AI detectors will not detect computer-generated contributions to student work, or if they make us accuse students falsely, how will we know whether our students are learning at all?
This is where I start to go all Wile E. Coyote and launch into orbit.
As I have also written so many times, AI detectors do not make anyone do anything. Moreover, I cannot fathom something more dismissive of teachers and teaching than simply assuming that an alert from detection technology will instantly rob them of all context, experience, expertise, and agency. “The technology makes us do it,” is insulting, dystopian fantasy.
And it appears I have to write this again — if a teacher initiates a misconduct case based only on detection of AI-similar text, they are doing it wrong. These systems are designed to raise awareness and focus conversations and inquiries. They do not, cannot, and should not make anyone do anything.
I feel like this is very basic stuff that the good Vice-Chancellor is not getting.
However, I also think she was — this close — to getting it when she wrote that the perceived inability to “detect computer-generated contributions to student work” makes it a challenge to “know whether our students are learning at all.”
This. Close.
Then There’s This, Again.
Westermann also writes:
Students acknowledge some ethical qualms about cheating on exams and assignments, but they say that if universities worry about plagiarism, it is their problem to solve.
And here, I am confounded as to where to begin.
I guess I am relieved that students show “some” ethical qualms about “cheating on exams and assignments.” Though I would argue that admission should have been the entire story. It seems important.
And I agree that students generally put the responsibility for stopping cheating on the professor or university. We’ve seen this in research (see Issue 66). Moreover, students believe that lack of effort to stop or police cheating is functional permission to cheat. So, here at least, the author is right — plagiarism and other cheating are the school’s problem to solve.
I also note the preceding clause, “if universities worry about plagiarism.” If.
And what school on the planet would say they don’t worry about plagiarism?
And, Here It Is
The question was not entirely rhetorical.
From our author:
The question we need to think about is whether plagiarism or cheating are even useful categories of pedagogic concern
What’s that now?
You may need to read that again. I had to.
She is literally saying that academics and institutions of learning need to consider whether plagiarism — and cheating! — are useful categories of concern when it comes to teaching and learning.
It’s a very good thing I am typing because I am literally speechless.
Cheating may not be a pedagogic concern? Are we actually serious here?
Someone tell me please how a university can honestly issue a credential of any kind, of any value whatsoever, if it cannot ascertain whether the credential holder actually did the work to earn it — if simply verifying the represented learning is not a useful concern related to the teaching.
We are talking about cheating here — the fraudulent representation of your personal knowledge, skills, and competencies. How is that not a concern of teaching and learning? How is this even a question?
I keep asking because, in all honesty, I just don’t know what else to do.
The concern the Vice-Chancellor expressed earlier — “how will we know whether our students are learning at all?” — must have been a neurological misfire. I fail to fathom how you can write these two things in the same paper:
how will we know whether our students are learning at all?
And:
The question we need to think about is whether plagiarism or cheating are even useful categories of pedagogic concern
If we accept that those who cheat are not learning, the answer is obvious. If you view cheating as a useless concern, you cannot know whether students are learning.
And by the way, what could possibly beat that for Academic Integrity Quote of the Year?
I’m not ready to think about it. I give up. I’m going to be sick.
An Example of Why Bad Research Matters
Maybe it’s obvious that bad research is bad, in a self-referential way. Maybe it’s equally obvious that bad research pollutes and clouds honest discussion and debate. But what I hate most about bad research is that lazy, ethically absent publications blindly repeat it.
In Issue 242, I touched on “research” from a professor at Drexel University that was based on — I kid you not at all — 49 posts on Reddit. And how Fast Co. fell for the ruse, repeating the utterly ungrounded and unverified rants as though they were legitimate and newsworthy.
Now, a brown water website in India has blindly echoed Fast Co. and its bedrock insanity of basing anything whatsoever on Reddit posts. Their headline:
Study examines impact of being wrongly accused of cheating by using ChatGPT
To restate the obvious, there is no credible or independent evidence whatsoever that the accusations even happened, let alone that they were wrong.
Honestly, I really blame the editors of the journals that publish these papers in the first place. Then, the editors of the publications that write about them. I mean, in all seriousness, what are we doing? I really do want someone to stand up on stage, in public, and defend the productive informative value of what 49 people had to say on Reddit.
But here we are.
Again.
Student Editorial: Do Better For Yourself
A student editorial in the student paper at the University of Colorado, Colorado Springs makes a few good arguments about why cheating is not worthwhile.
Not much of it is new. But still, I feel as though student voices are important, and research shows that peers can significantly influence academic conduct (see Issue 253). So, there’s nothing but good about this.
The student writes:
using AI for assignments or just plagiarizing. Not only is it unethical — it’s a complete waste of time and money.
And:
If you are cheating because you don’t care about your grades, you or your parents or your scholarships are throwing money away. Do better for yourself.
I wonder what would happen if every student paper at every school published just one of these a year.
Department of Corrections Department
When I shared the academic misconduct case numbers from the University of California, Davis in the last Issue, I said the source was the student newspaper. However, the source was actually the website of the school’s alumni relations entity. My mistake.
Another clarifying note, though not technically a correction. In Issue 255, I shared that Chegg had won a legal case allowing it to take over the website of a company that was giving away Chegg’s answers to test and homework questions — the stuff Chegg charges for. After the story, readers sent in the actual court decision and it allows Chegg to access and run the website for 30 days, not forever. After that, I am not sure.