(373) AI Cheating is Like a Disease
Plus, a letter in The Guardian. Plus, AI detection company Pangram raises $4 million.
Issue 373
Subscribe below to join 4,754 (+15) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
College Professors Don't Know How to Catch Students Cheating with AI
That’s the headline on this article in Mashable.
Strong stuff. But the subheadline is perhaps even stronger:
"It felt like a disease, where you see a couple cases and then all of a sudden there's an outbreak. That's what it felt like."
The article starts:
Leo Goldsmith, an assistant professor of screen studies at the New School, can tell when you use AI to cheat on an assignment. There's just no good way for him to prove it.
A few fast flags as we look at this coverage. Proof is too high a standard. It is a higher bar than American criminal courtrooms use, where “beyond a reasonable doubt” is the accepted standard. In academic integrity cases, most schools use a “preponderance of the evidence” standard, meaning more than 50/50, more likely than not.
In other words, the professor does not need “to prove it.” He needs to present good evidence. If he thinks he can tell when a student uses AI to cheat, that alone may be good enough. Personally, I’d prefer more than that. But my point is that if we’re setting the bar at “proof,” we’re in trouble.
Further, on this:
"I know a lot of examples where educators, and I've had this experience too, where they receive an assignment from a student, they're like, 'This is gotta be AI,' and then they don't have" any simple way of proving that, Goldsmith told me. "This is true with all kinds of cheating: The process itself is quite a lot of work, and if the goal of that process is to get an undergraduate, for example, kicked out of school, very few people want to do this."
Proof, again. If a school is requiring “proof,” the school is doing an injustice to educators, honest students, alumni, the businesses that hire its graduates, and the general public.
The rest of the quote is right on, however.
The emotional and actual labor of academic integrity inquiries is massive and a significant disincentive. Because most schools don’t want to deal with — or even know about — academic integrity issues, I am not sure that a process that is complicated and labor-heavy is an accident. Requiring “proof” is just one way schools make the process complex and burdensome, to the point where, as the professor said, people just don’t want to bother.
Further, this part is important:
This is the underlying hum AI has created in academia: my students are using AI to cheat, and there's not much I can do about it. When I asked one professor, who asked to be anonymous, how he catches students using AI to cheat, he said, "I don't. I'm not a cop." Another replied that it's the students' choice if they want to learn in class or not.
I’ve written about this plenty. Educators have to, in my opinion, exert pressure on their own. No serious effort to address academic misconduct and credential fraud can be successful when teachers believe the task is not their responsibility.
Specifically, let me add, because I am annoyed: if a student has chosen not to learn in class, they should not be in class. They are wasting everyone’s time and talent. Kick them out. In all seriousness, if a student can choose not to learn, teachers and schools can choose not to try to teach them. I may even argue they are obligated not to.
I am off topic.
This next section is long, which I hate doing. I ask that you please click the link above to give Mashable and the writer a click. Please. It’s two seconds of attention payment. Here it is:
Patty Machelor, a journalism and writing professor at the University of Arizona, didn't expect her students to use AI to cheat on assignments. She teaches advanced reporting and writing classes in the honors college — courses intended for students who are interested in developing their writing skills. So when a student turned in a piece clearly written by AI, she didn't realize it right away; she just knew it wasn't the student's work.
"I looked at it and I thought, oh my gosh, is this plagiarism?" she told Mashable.
The work clearly wasn't written by the student, whose work she had gotten to know well. And it didn't follow the journalistic guidelines of the course, either; instead, it sounded more like a research paper. Then, she read it out loud to her husband.
"And my husband immediately said, 'That's artificial intelligence,'" she said. "I was like, 'Of course.'"
So, she told the student to try again. She gave them an extension. And then the second draft came in, still littered with AI. The student even left in some of the prompts.
"[AI] was not on my radar," Machelor said, especially for the types of advanced writing courses she teaches. Though this was a first in her experience, it rocked her. "The students who use that tool are using it for a few reasons," she guessed. "One is, I think they're just overwhelmed. Two is it's become familiar. And three is they haven't gotten on fire about their lives and their own minds and their own creativity. If you want to be a journalist, this is the heart and soul of it."
Some students are overwhelmed. Some are just taking the easy way because they don’t care, and assume the teacher does not care either. Sadly, they are too often right.
Continuing:
Irene McKisson, an adjunct professor at the University of Arizona, teaches one online class about social media and another in-person class about editing. Because of the nature of the in-person course, she hasn't had a significant issue with AI use — but her online course is rampant with it.
The jury has been in on this for a long time now — online classes are exceptionally prone to cheating. They just are.
And what, according to Mashable, does Professor McKisson do about this challenge? She tells her students:
"First of all, you signed up for the class," McKisson said. "Second of all, you're paying for the class. And third of all, this is stuff that you're actually going to need to know to be able to do a job. If you're just outsourcing the work, what is the value to you?"
Absolutely, completely useless. Seriously.
There’s also this:
While AI detectors exist, they are unreliable, leaving professors with few tools to definitively identify AI-generated writing.
The technology is new, which means the detectors are new, too, and we don't have much research available on their efficacy. That said, one paper in the International Journal for Educational Integrity shows that "the tools exhibited inconsistencies, producing false positives and uncertain classifications." And, as with most tech, the results change depending on so many variables. For instance, a study in Computation and Language noted in the University of Kansas' Center for Teaching Excellence shows that AI detectors are more likely to flag the work of non-native English speakers than the work of native speakers. The authors argued "against the use of GPT detectors in evaluative or educational settings, particularly when assessing the work of non-native English speakers."
That’s wrong. Nearly entirely. But it’s Mashable, so I don’t expect too much.
Moreover, I did not check, but I am pretty sure we covered these studies here as well. Either way, I can tell we have the common problem of a few bad apples standing in for the entire set. Any time coverage lumps every detector together as “the tools,” that’s what I expect. I’ll check and do my best to let you know.
The story quotes someone from Truely, which is a company I do not know. The article says Truely is an AI-detection system for online or remote exams. Anyway:
Paul Vann, the cofounder of Truely, told Mashable that "resoundingly, people are worried" about AI and cheating. "People don't know how to deal with this type of thing because it's so new, it's built to be hidden, and frankly, it does do a good job at hiding itself." Truely, he claims, catches it.
This is in there too:
As AI gets better, detection may always be a step behind — the real answer might lie in rethinking how we produce assessments, not just the kind of surveillance we have to put on students.
Thank you, Mashable. No citation. No quote. Just “the real answer” may be “rethinking how we produce assessments.” And not surveillance. Like I said, it’s Mashable.
Speaking of, Mashable also serves up:
But because the use of generative AI in school is so new, it's also hard to know what counts as "cheating."
No, it’s not.
There’s this as well, from Professor McKisson:
"There was a whole discussion about rubrics, and I was like, 'Oh my gosh! That's it. That's the way to curb some of this, is to use the rubric to give people [who use AI] zeros,'" she said. "[Students are] going to keep doing it unless there's a negative consequence."
For cheating, a zero is not necessarily a negative consequence. But at least it’s better than telling students they will need these skills for their jobs, or reminding them that they paid for the course. Whatever — I don’t want to lose the point. She is right. Students will keep doing it unless there is a consequence.
Mashable has also decided that student pressure is why students use AI to cheat:
You have to maintain your GPA or you'll go on academic probation. There aren't enough hours in the week to both succeed and sleep, but generative AI could write your three essays, take that online test, and make flashcards for your multiple-choice final faster than you could make dinner. And you know your professors can't catch you because there's no simple way to prove ChatGPT wrote your essay.
For students facing academic and financial pressure, AI can seem more like a productivity tool than cheating. And, of course, everyone else is using it.
Professors can catch it. If they want to. The article itself quotes two professors who say they can spot it pretty easily.
Mashable goes on to trip over the tropes of this area — the key is to redesign your course, make your assessments AI-proof, don’t just punish students, the technology is here to stay, and so forth.
The article does have a few good bits in it, however. Seeing teachers recognize the threat and take steps to mitigate it is good. We need to see more of that.
Guardian Letter: Some Schools Don’t Care About AI Problems
The Guardian has published a letter that is worth a few moments of your time.
It’s in response to recent coverage in the paper about the surge in AI misuse by students — using AI to attempt academic fraud (see Issue 370). The writer is Dr. Craig Reeves, who teaches at Birkbeck, University of London. He writes:
I commend your reporting of the AI scandal in UK universities (Revealed: Thousands of UK university students caught cheating using AI, 15 June), but “tip of the iceberg” is an understatement. While freedom of information requests inform about the universities that are catching AI cheating, the universities that are not doing so are the real problem.
That’s what I, and others, said.
Reeves continues:
In 2023, a widely used assessment platform, Turnitin, released an AI indicator, reporting high reliability from huge-sample tests. However, many universities opted out of this indicator, without testing it. Noise about high “false positives” circulated, but independent research has debunked these concerns (Weber-Wulff et al 2023; Walters 2023; Perkins et al, 2024).
I do not know Dr. Reeves at all, but I think I have a crush.
Many universities have indeed turned off their AI warning systems, which boggles the mind. He calls the false-positive discourse noise, which it is. He says research has debunked this, which is also true.
Although, in fairness, these studies have proven high reliability rates and low error rates for a few excellent systems. I left the links in, should you care to check. I think we’ve covered a few of these, maybe all three.
He goes on:
The real motivation may be that institutions relying on high-fee-paying international cohorts would rather not know; the motto is “see no cheating, hear no cheating, lose no revenue”. The political economy of higher education is driving a scandal of unreliable degree-awarding and the deskilling of graduates on a mass scale.
Absolute fire, as the kids used to say. I’m outright stealing the “see no cheating, hear no cheating, lose no revenue” thing. I do think revenue protection is a real consideration within higher education.
His letter ends:
A sector sea change is under way, with some institutions publicly adopting proper exams (maligned as old-fashioned, rote-learning, unrealistic etc) that test what students can actually do themselves. Institutions that are resistant to ripping off the plaster of convenient yet compromised assessments will, I’ll wager, have to some day explain themselves to a public inquiry.
I don’t know if there will ever be a public inquiry regarding schools that know about massive academic fraud and do nothing. I hope so. But I prefer that they address the problem, which would be best for everyone.
Finally, I don’t know whether Dr. Reeves wrote the subheadline above his letter, or whether it was supplied by an editor. But it reads, in part:
some institutions don’t seem interested in dealing with the problem of AI use by students and are resisting in-person assessments
Can confirm. And expand. Some institutions do not seem interested in dealing with this problem, because they are not interested in dealing with this problem. Fixing it is hard, expensive, and painful. Embracing it, as so many schools are being urged to do, is easy, free, and trendy.
And, here we are.
Anyway, great letter. Important.
AI Detection Company Pangram Raises $4 Million in Funding
As reported by Reuters, AI detection company Pangram has raised about $4 million in early seed funding.
Here is the first paragraph:
Pangram, a startup founded by former Tesla and Google employees, has raised about $4 million in seed funding to expand its tools that detect AI-generated text as schools and businesses grapple with the surging use of applications such as ChatGPT.
Pangram has consistently proven to be among the best available detection technologies (see Issue 357 or Issue 367).
The Reuters coverage also says:
Pangram, whose customers include question-and-answer website Quora and trustworthiness rating service NewsGuard, is betting its active learning algorithm will give it an edge over dominant industry players such as Turnitin.