337: Maybe We Should Be a Little Worried
A recommended podcast. Plus, a Queen's University department issues a new AI policy. Plus, class notes.
Issue 337
Subscribe below to join 4,283 (+5) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 46 (+1) outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month or $80 a year, and corporate or institutional subscriptions are $240 a year. Thank you!
Podcast: Maybe We Should Be a Little Worried About AI + Cheating?
Justin Reich has a podcast out of MIT's Teaching Systems Lab, and his most recent episode, released in December, was about AI and cheating. It's about 35 minutes long and well worth a listen.
As an aside, Reich also has an absolutely fantastic book about education technology, which I highly recommend.
In this episode, Reich speaks with Jesse Dukes, who reported the featured segment and conducted the interviews. And I have to apologize: to my ears, Reich and Dukes sound similar, so it was a challenge to be sure who was speaking at any given time. Below, I may simply attribute a quote to them or to the speakers broadly.
A second aside: if you run or produce a podcast, please provide a transcript somewhere. It saves by-hand transcription and may help in pinpointing the speakers. But that's my problem.
The pair spend a good bit of time discussing the study from scholars at Stanford University which claimed to show that the arrival of generative AI has not increased cheating overall. Like me, they have issues with it, albeit different issues (see Issue 261). They also focus on how the study was reported and how that coverage has caused people to misread or misapply it. The press about the study was, as I pointed out at the time, really bad (also see Issue 261).
If you don’t remember the study, it showed that, based on surveys in high schools, overall rates of cheating did not change when ChatGPT became available. Admitted rates of cheating in high school were 70% before generative AI, and they were reported to be 70% after it. The research team used the static data to dismiss “panic” over AI cheating, and The New York Times grotesquely amplified it.
The podcast sandwiches discussion of this Stanford study around audio clips from two students. For ease of sharing, I’ll do the student clips first.
Samantha, a high school student in California:
I’m not going to out the teacher, but there was a certain comment I heard about someone’s class where they were like, ‘oh, that guy’s so easy, you just use ChatGPT for all his stuff’ and I was like ehhh – that is a bad look, I think
Samantha is right: that is a bad look. And when students know, the game is over.
Asked about how good teachers are at spotting ChatGPT, Samantha is skeptical, adding:
I’ve heard a lot of students talk about using it, and no one says that they’ve gotten caught
Woody is an eighth-grade student in the northeast who, one of the hosts says, sits in the back of class watching students cheat on in-class assignments. Woody, says the host, is watching the pace of the class accelerate because the teacher is getting a “mirage” that suggests students are learning faster than they are. Woody said he's struggling to keep up and does not want to use ChatGPT, but, well, you know.
This is the first time I've seen the idea that chronic use of ChatGPT is speeding up the pace of a class, and I am intrigued by it. If true, add it to the list of problems.
The interviewer, I think it's Dukes, asks Woody how many students in a class of 20 are using ChatGPT to cheat. Woody says:
OK, so I’d say there’s ten people in that class using it for everything, like cheating on, the whole paper is AI. I’d say there’s another five that probably half of it’s written by AI, but they do actually read it through and go ‘gee, maybe I don’t wanna include the part that says “as a large language model,”’ but they like read it through and copy parts and splice bits and do whatever, then I’d say you’ve got five remaining, I’d say probably four of that five do the paper legitimately. Then there’s another one that’s going and – I don’t know, it’s kind of a mix. They plagiarize stuff, but it’s like a paragraph, not an entire thing.
Eighth grade. But don’t panic.
Returning to the Stanford study referenced previously, one of the hosts says:
The argument was that AI did not make the numbers go up. What I have noticed is that there are school leaders, there’s people you’ll see in professional development conferences, presenters, experts, who use this study to say ‘we really don’t need to worry that much about AI and cheating.’ That said, as I talk to teachers and I talk to students, I am starting to see data points that suggest there may be more cheating going on than school leaders realize, than teachers realize. It may be more of a problem than is widely acknowledged.
So, yes.
That study is absolutely used — has been used — to shrug off cheating or to nudge concern away from AI-facilitated academic fraud. And, yes — this study does not accurately reflect what is happening. Imagine me raising my voice — as I said in December of 2023.
This is why I harp so much on bad media coverage of academic integrity: people do not read the actual research or ask skeptical questions. Not even the reporters, even though it's their job. In this case, The New York Times wrote that “Cheating Fears Over Chatbots Were Overblown, New Research Suggests.” And even though that is wrong, it seeps into the collective wisdom of, quoting the podcast, “school leaders, there’s people you’ll see in professional development conferences, presenters, experts” as fact.
On the study itself, the hosts do describe it as “a good study” and “pretty rigorous,” but then go on to pretty much eviscerate it. They say:
A good early-stage study, no one should consider it a rigorous examination giving us representative data about what was happening in 2023 in schools across the country – they had surveys from three schools – one private school, one public school, one charter school. There are 130,000 schools in the country.
I did not see the part about three schools and, honestly, I do not think that is right. Nonetheless, it's clear the pair are not fans of the findings.
The most common reporting of this study was that concerns about AI were overblown. But even if you take it as completely representative, I think there are findings within the study that do indicate there are reasons to be pretty seriously concerned about pretty high percentages of students – higher than in past years – that admit to cheating behaviors that are not in the gray area, not in the margins, but are actually like really serious concerns for the learning process.
Again, the reporting of the study has made it real. And again, there are reasons to be “pretty seriously concerned” about what is actually happening.
They make this point again:
I do think sometimes school leaders, department leaders, technologists do point to this study to say, ‘well, you know, cheating is probably not that much of a problem and we don’t need to think about it’ and I think they might need to look at the study a little bit more closely and pay a little bit more attention to the possibility that there is a problem here before we can make that claim.
And:
The framing of the study itself pretty much aligns with the ‘nothing to see here’ reporting but my sense is that if you look closely at some of the answers that they got, this study says that there is something to look at here, some things that are pretty concerning going on
The pair point out that, in the research paper, the share of students who reported outsourcing their work entirely was about 5% before ChatGPT. After ChatGPT, it was 20%.
The pair do not make my core point about the Stanford data, which is that a consistent finding of 70% of high school students admitting to misconduct is effectively a finding of 100%, given what we know about the under-disclosure of cheating behavior. Two different papers credibly put actual cheating rates at 2.5x and 3x the rates in self-survey data. Apply either multiplier to the Stanford paper's self-reported 70% and you land well past 100%, which means effectively everyone. And if the true rate was already at the ceiling, the rate of cheating cannot go up, even though the cheating itself can get worse.
It’s a bit surprising that the pair of hosts, informed as they are, miss the reality of self-survey data.
Even so, they are not wrong about that underlying study. It missed the mark, badly. And the bad reporting has made it worse — to the point where people actually think AI cheating is not a big problem. As the hosts suggest, those who think so:
might need to look at the study a little bit more closely and pay a little bit more attention to the possibility that there is a problem here before we can make that claim
Too late.
But, yeah. Maybe we should be a little worried.
Welcome to The Cheat Sheet. Have we met?
New AI Policy for Political Studies Department at Queen's University
According to coverage at Queen's University (Canada), the political studies department has developed and shared a new policy on AI use.
The coverage describes the policy as a crackdown, but that's probably a bit far. From the coverage:
their new policy on the use of Artificial Intelligence (AI), which elaborates on existing guidelines set by the University on AI and academic integrity. The policy doesn’t differ from the University wide policy used by many other departments, but rather elaborates on it.
In more detail:
The policy outlines seven key points. The first three clarify it builds on a University-wide policy, specifying students may not use AI or delegate tasks unless explicitly permitted. The fourth point warns that students suspected of cheating will face a formal investigation by the course instructor or supervisor, followed by a report filed by the Faculty of Arts and Science if the investigation determines a breach from academic integrity.
The fifth point notes Large Language Models (LLMs) like ChatGPT can be unreliable for tasks and should not be used for research. The last point urges students to read the policy in full, emphasizing that ignorance will not be accepted as a defense for using AI tools.
Good for the Queen's Department of Political Studies. At worst, this will do no harm. Getting students talking about AI use and academic integrity will help curb misconduct, as will being clear about sanctions.
Professor Larin drafted the policy and, from the coverage:
“There has been a dramatic increase in academic integrity violations over the past two years, not just in our own Department or University, but at all higher education institutions around the world,” Larin explained.
In his own experience, Larin said he caught more than 22 per cent of the students in the second-year courses he instructs using AI on one or more assignments in the last year, which caused him to work extra hours to address the departures from academic integrity.
“The majority of the students in the class didn’t cheat, but the scale of the problem and the disrespect and foolishness that it demonstrated genuinely disgusted me,” Larin said.
Not much to add.
You’ve read this a thousand times: dramatic increase, more than one in five students caught using AI, more work for faculty, the scale of the problem and the disrespect are disgusting.
Welcome to The Cheat Sheet. Have we met?
Class Notes
We earned a new paid subscriber this week, a supporter from Australia. Thank you.
A reminder that I’ll be speaking at the annual conference of the ICAI, the International Center for Academic Integrity, in March, in Chicago. If you have not registered and made plans to attend, there is still time. But not much. Details are here. Here also, by request, is the link to ICAI award nominations.
Was very struck by Woody's comment about "I don't want to use AI, but...". My 21-year-old is an engineering student at a prestigious Ontario university. One of his finals last term was announced as being online, as the instructor would not be in the country when the exam was scheduled. When my son asked if the exam would be proctored, he was first brushed off, and eventually the instructor announced that the exam would not be proctored. According to my son, that virtually guaranteed that the majority of the class would be using an LLM to do their exam. He is frustrated because he's working to be at the top of his class. He would prefer not to use AI, but....