More Research Shows that Cheating is More Common in Online Environments
Plus, I wrote on assessment value and misconduct. Plus, more than six million academic papers contained at least 80% AI-generated text.
Issue 287
Subscribe below to join 3,818 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 18 amazing people who are chipping in a few bucks via Patreon. Or joining the 37 outstanding citizens who are now paid subscribers. Thank you!
Research: Cheating More Common in Online Classes
By now, the finding that cheating is more common in online courses should be anything but news. But I am going to keep writing about it until we stop debating it, which means I will likely keep writing about it, because some folks refuse to believe it.
So, here we are. Again. Another research paper has found that students cheat more when their courses, or programs, or assessments are online.
The paper is by Jia G. Liang, of Kansas State University, George R. Watson, of Marshall University, James Sottile, of Missouri State University, and Bonni A. Behrend, of Missouri State University.
The authors are noteworthy here because Watson and Sottile are the authors of a 2010 paper that is frequently misquoted to support the idea that cheating is not more common in online courses (See Issue 23). That is not what the duo found in 2010, and it’s not what they found this time either.
The paper is a survey of students and concludes:
In comparing cheating in live classes to online classes during the 2020-2021 academic year, our survey study results showed a stronger prevalence of cheating in online classes, and more than half (53.2%) of the students reported knowing a classmate who cheated during the pandemic.
The 53% mark is interesting because the “I know someone who did it” framing can be a more accurate indicator of misconduct than the more direct “I did it” version of the question. That’s because people tend not to admit to illicit behavior, even anonymously.
But the part of the finding I wish to underscore is, “Our survey study results showed a stronger prevalence of cheating in online classes.”
From the report:
When asked, “Have you cheated more or less since the COVID-19 pandemic (February 2020) than before the pandemic?”, almost four times as many participants said they had cheated “More” as said “Less.”
And:
The survey responses revealed that, for almost every behavior listed, students in online courses scored higher for academic dishonesty, with 42.3% admitting that they have cheated on an assignment, quiz, or test, which is an almost 50% higher score than that of live courses (28.3%).
Almost 50% higher.
And:
the volume or number of times participants claimed to have engaged in these cheating behaviors tended to be higher for online classes, particularly when it relates to using electronic devices during quizzes and tests
Again, this is not an outlier finding. Many, many research inquiries have found similar results. From the report again:
The findings of this study confirm what the existing literature has indicated, that is, cheating in online courses is more prevalent than cheating in live courses
While you’re here, I want to share two other pieces of the new report. Though, like the findings above, they are unlikely to be considered news. If you follow the concerns of academic integrity and credential fraud, you may know these things already.
One, cheating overall is increasing. The authors share (citations removed):
While the issue of academic dishonesty is a long-standing issue, the overall results of the current study show, not only the continuity in such a trend, but also indications of the growing severity of the problem. The study’s findings reveal that 28.7% of students cheated more since the pandemic started than prior to it, and the results speak to the compounding effects of more opportunities for students to cheat and to easily do so in online courses.
And that’s because, the research team argues, cheating is pretty easy and seldom caught:
As a result of increasing access to digital based tools that students have and are capable of mastering and the lag-behind institutional mechanism (i.e., faculty technology competency, online program/course structures, resources, and policies in preventing and addressing online cheating), the opportunities to cheat, without being caught, have grown substantially within the last few years.
More on that point:
Also, it is clear from the findings of this study that few students were caught cheating, which reduces student anxiety about being caught. Only 1.7% of students responded they have been caught cheating in a live course, even though 28.3% admit to cheating. The numbers are also concerning in online courses, with only 3 of the 186 (about 1.6%) students that admitted to cheating indicated that they got caught. This could suggest two things: there is a lack of consequences associated with engaging in academic cheating, and the prevention and discovery of cheating is challenging for faculty and institutions.
I think the data suggest both things: a lack of consequences, and that prevention and discovery are challenging. Though, personally, I’m not sure much effort is being applied to the latter.
The authors suggest more immediate and certain consequences when misconduct is discovered, as well as changing test questions, lockdown browsers, and proctored assessments.
But to beat this dead horse one more time, cheating is more common in online courses than in in-person varieties. There are, in my view, many reasons this is true. It means, also in my view, that institutions or educators with online classes have to take integrity threats more seriously than average, and spend more time and money addressing them. But, for many different reasons, that is rarely the case.
Me, On Lowering Assessment Stakes
The good folks at VICTVS recently invited me to contribute to their blog, an invite I was pleased to accept. Since they let me choose the topic, I picked assessment stakes — how the value of an assessment influences misconduct.
It’s a topic I’ve been following for some time, at least since the pandemic. From the outset, I was confused by what seemed to be a common suggestion that lowering the stakes of an assessment could, or would, mitigate cheating.
As outlined in the piece at VICTVS, I am not sure there’s clear research on the correlation between assessment value and misconduct. But what we do know about the dynamic implies that the relationship runs counter to conventional wisdom: cheating is more common when an assessment is of little value, and less common when the stakes are high.
I won’t repeat the article. But if the topic interests you, I encourage you to jump over and give it a read.
Turnitin: More Than Six Million Papers Showed at Least 80% AI Text Last Year
This week, Turnitin released more data on the results of its AI detection system, which scores how much of a given text was likely generated by AI. The release coincides with the first anniversary of the detection technology’s launch.
The short version is that Turnitin says its AI scanners reviewed more than 200 million papers last year and that, of those, about 3% — more than six million papers — were likely at least 80% generated by AI.
Six Million.
As I say and write often, having covered academic integrity for many years now, I am not surprised in any way and yet completely shocked. What part of students submitting more than six million papers with at least 80% AI-generated content does anyone not understand?
Turnitin also reported that more than 22 million papers, about 11% of the total, showed AI-creation fingerprints in at least 20% of their content.
I’m not going to get too twisted about 20%, though that’s not nothing. And a rate of more than one in every ten papers hitting that mark ought to be alarming to someone, somewhere.
But to me, the headline is six million. Just this year.
There are two additional things to note.
Turnitin is, as far as I know, the market leader, but it does not represent the entire market. Some schools, programs, or individual professors may use different tools. And some schools have, unbelievably, chosen to stay ignorant of how much of their students’ academic work involves AI. In other words, the six million figure does not capture nearly all of them. The actual number is higher, perhaps much higher.
Six million papers at 80% or more does not mean six million incidents of misconduct. We do not know how many of these papers cited the use of AI, or how many were for assignments in which using AI was part of the task. Many, we may presume, used AI writing completely within the bounds of what their instructional leaders permitted.
Still — and I cannot believe I am going to quote Wired Magazine here — the finding is:
Students Are Likely Writing Millions of Papers With AI
Ah, yes. They are. However, there is not much evidence so far that many people care.
Before I drop this, let me say good for Turnitin for sharing their data. We need more information about what’s going on, not less. I really hope more companies do the same.