NPR Says College Cheating Reports "Soar"
Plus, the University of Texas and its “not sufficient” proctoring. Plus, even more that Inside Higher Ed got wrong.
Issue 50
NPR Reports on Spike in College Cheating
National Public Radio has joined the ranks of other national outlets such as the Wall Street Journal, CBS News, NBC News and (if I may) the Washington Post in reporting on what has become an obvious and significant increase in cheating cases at American colleges since the pandemic. NPR used the word “soar” in its headline.
Largely, the news coverage is repetitive - cheating cases are up, experts say it’s probably related to the increase in forced online and remote learning that started in early 2020. And though that correlation is strong, no one can say for sure what’s driving it.
Where the NPR story is particularly helpful is in speaking with actual teachers and adding new data to our view on the increase. Here’s what NPR reported:
At Virginia Commonwealth University, there were 1,077 cases of “academic misconduct” in 2020/2021 - more than three times the previous year.
At the University of Georgia, cases more than doubled, from 228 in the fall of 2019 to more than 600 last fall.
At The Ohio State University, reported incidents of cheating were up more than 50% over the year before.
At St. Mary's College of California, misconduct reports doubled last fall over the previous year.
At Middle Tennessee State, reports of cheating jumped by more than 79% from the fall of 2019 to the spring of 2021.
We knew about the Ohio State numbers (See Issue 19 and Issue 41), but the others are new. [Note: I keep a spreadsheet on the institutions that have released public cheating numbers. If you’d like it, please e-mail me.]
The reporting also includes incidents at UCLA, though without numbers. And it says that the University of Texas at Austin “found that reports of academic misconduct cases actually declined during the pandemic.”
Like others before it, the NPR story is probably most notable for giving cheating a big public stage.
At the same time, the opening of the story is pretty good too. It starts with an assistant professor at Columbia University who, NPR says, “tried everything” to stop cheating in her cognitive neuroscience class. She gives her students a week to complete open-book tests. She has her students sign an honor code. NPR says:
But they're still cheating.
What’s remarkable is the professor’s reaction. According to NPR, she says the cheating is “awkward” and “painful” but that,
it's really hard to blame them for it.
Speaking of the University of Texas, Its Report Seems to Contradict Its Public Statements
In the NPR story out last week and referenced above, the University of Texas at Austin was noted as an outlier:
And at least one school, the University of Texas at Austin, found that reports of academic misconduct cases actually declined during the pandemic. Katie McGee, the executive director for student conduct and academic integrity there, explains that before the pandemic, UT-Austin had toughened its ability, through software, to detect cheating.
That’s deeply suspect. Not just because it would make UT a bright anomaly but also because, just a week before giving that NPR quote, the school released a report more or less encouraging faculty not to use some of those capabilities:
[complaints have caused this] committee to recommend that faculty and instructors find alternatives to high-stakes testing proctored with AI-based tools
Even more interestingly, that committee report says that the proctoring tools resulted in just 27 referred cases of suspected cheating, only 13 of which were upheld. Given that, the committee reported:
To be clear, the small number of students whose cases of academic integrity were upheld does not mean that there were not more widespread instances of inappropriate behavior on quizzes and tests in the online environment last year—only that the methods we currently use to proctor online quizzes and exams are not sufficient to detect academic integrity violations.
Jump up a line and read that again. The school said, “the methods we currently use to proctor online quizzes and exams are not sufficient to detect academic integrity violations.”
So, here’s a school telling NPR that cheating is down because of its anti-cheating tools while the school’s “Academic Integrity Committee about Online Testing and Assessment” says that those tools are not good, not sufficient and should not be used.
Yes, remote proctoring of online assessments is just one anti-cheating tool. And AI-only proctoring is just one variety of remote proctoring. And aside from mixing up which companies do and do not offer AI-only proctoring (see Issue 28), the committee is generally right that AI-only proctoring isn’t very good and schools would do well to use other, better proctoring methods.
One more quick point about that UT report. It also recommends that assessments be more frequent and lower stakes:
instructors are encouraged to consider a testing plan where there are many short tests throughout the semester and where each test has a relatively low impact on the final grade. Dropping the lowest quiz or exam grade is another way to further decrease test anxiety. Reducing the stakes for quizzes and tests does not guarantee ethical behavior on these assessments.
In dealing with the increase in cheating during the pandemic, lowering the stakes of assessments has been common advice. The problem is that plenty of pre-pandemic research supported the idea that lower-stakes assessments actually encourage cheating by lowering the penalty for being caught while also reducing the perceived value of the assessment. Both contribute to cheating temptation.
For example, in their book “Cheating in College,” noted researchers McCabe, Butterfield and Trevino find that students who engage in cheating behavior routinely cite “the assignment counted as a very small portion of my overall grade” as justification for cheating. In fact, many students offer that rationale to argue that what they’re doing isn’t actually cheating at all.
In other words, assessing more often and with lower consequences may not limit cheating; it may actually encourage it.
Either way, given the UT committee’s report, it’s doubtful that cheating reports were down because of strong detection software. More likely, the school was, as the committee itself suggested, just not very good at catching it.
A Final Note or Two on that Inside Higher Ed Discussion on Academic Integrity
In the last “The Cheat Sheet,” I was pretty hard on that two-man online event on academic integrity held by Inside Higher Ed (IHE) a few weeks ago. In my defense, the event was wrong quite often.
IHE didn’t get everything wrong though. They did correctly say that “online courses have greater potential for cheating.” And though they said nary a word about cheating provider Course Hero, which has been a paid sponsor of IHE, they do get credit for calling out Chegg by name. They even showed Chegg’s website in a presentation slide.
But then they immediately undid that, as IHE editor Scott Jaschik said,
[Chegg said] that there was no evidence presented - and Chegg is absolutely correct - there was no concrete evidence presented that the students were necessarily cheating.
Maybe it’s just me, but when you’re discussing academic integrity and actually say out loud that “Chegg is absolutely correct,” you lose me.
The last thing I’ll share from the IHE event is the editors’ assertion that punishing cheaters does not work. “It doesn’t work,” they said flatly.
As an aside, I am not sure how it’s possible - in the same conversation - to imply that cheating is not increasing but also to say that schools that focus on enforcing consequences have seen cheating increase. Yet that’s where IHE seemed to land.
Even so, on the core question of whether cheating penalties work, the research shows the exact opposite of what IHE said - punishments absolutely work in that they deter cheating.
I could share at least a dozen research papers on this. Here’s just one - a quote from research by Patricia A. Goedl and Ganesh B. Malla from July 2020:
Deterrence theory states that deterrence will inhibit misconduct when participants feel that they will be caught and penalties will be imposed for the misconduct (Gibbs, 1975). Deterrence has been demonstrated to have an inverse relationship to academic misconduct (Buckley, Wiese, & Harvey, 1998).
In the next “The Cheat Sheet” - First, can you believe this is Issue 50? Fifty. You can review or search past issues here:
And really in the next “The Cheat Sheet” - I promise to get to that research on the impacts of laws against cheating. Plus, more cheating. To share and subscribe, the links are here: