Are Rates of Misconduct Still Going Up? Probably.
Plus, PBS in Virginia interviews two school leaders on AI and cheating. It's a perfect snapshot of where American schools are, and are not, on these issues. Actually, that's all the same story.
Issue 249
To join the 3,660 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. New Issues every Tuesday and Thursday. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 14 amazing people who are chipping in a few bucks via Patreon. Or joining the 24 outstanding citizens who are now paid subscribers. Thank you!
PBS Interview: Misconduct Rates at Virginia Tech are Still Increasing
Blue Ridge PBS did a segment recently, a nice, televised, sit-down interview with two higher education leaders in Virginia: Dr. Kara Latopolski, Director for Academic Integrity at Virginia Tech, and Dr. Laura McLary, Provost at Hollins University. The topic was the twin AIs — academic integrity and artificial intelligence.
First, big credit to the local PBS for hosting this and raising this issue.
The host, Bob Denton, starts by asking the interviewees whether the challenges of academic misconduct are still increasing, even post-Covid.
It’s an issue I’ve been trying to get a grip on, as I think most people expected the pace and frequency of incidents to go down once instruction and assessment returned to normal classroom settings. I was not so sure and, based on some data I have seen, there is zero indication that misconduct is in retreat. In fact, the contrary is more likely.
So, to me, it’s news that Latopolski of Virginia Tech said:
I would say that we’ve seen a noticeable increase in academic dishonesty at Virginia Tech post-Covid. This is really due to the increase of available resources for students related to things like ChatGPT and other forms of generative artificial intelligence, Chegg, as well as other different online repositories that students are becoming more and more aware of.
A noticeable increase. Wow.
I’m impressed by her candor and fearlessness in name-checking Chegg. As I say all the time, for a company that claims it’s not the world’s leading cheating provider, they sure are mentioned an awful lot by academic integrity professionals.
But that’s not the main point. Latopolski isn’t the only person who’s said the misconduct trajectory is still upwards. I don’t think it’s universal, as it was during Covid, but I think it’s not universally in retreat either. I think Pandora’s box may be not only open but torn asunder.
The PBS interview is also interesting because, for all the candor that Latopolski shows, McLary is just the opposite. Sadly, it’s McLary’s obfuscation word salad that’s more common in American higher education, where school leaders not only can’t admit cheating is a problem, but they can’t even say the words. In McLary’s response to the same question about whether cheating was still going up after Covid, she says, in part:
I would not necessarily say that’s the case
Adding that it’s possible that:
students might feel pressured into ways of completing assignments that we might not consider to be optimal.
Might not consider to be optimal?
Based on that answer alone, I’d wager that Hollins University can’t say whether cheating is on the rise because they have no idea.
My hunch is buttressed a bit later when McLary says also:
Rather than thinking about how are we going to prevent cheating, it’s more along the lines of how are we going to create classroom environments where students don’t feel motivated to cheat.
That answer is so far from credible that it’s depressing.
For context, consider if an administrator for highway safety said something like, “rather than thinking about how are we going to prevent speeding, we’re looking at ways we can create an environment where people don’t feel motivated to speed.”
I’m not equating cheating with speeding; I think there are tangible victims of academic misconduct every single time. Instead, I’m highlighting how bizarre it is for a leader of any institution to say they are not really interested in preventing something that is against public policy and clearly bad. That their solution is to get people to not want to do bad things. I don’t want to lock the bank at night, I want to create a culture where people don’t feel motivated to steal.
Come on.
And don’t think I did not catch that she said what they were doing was “rather than thinking about how are we going to prevent cheating.” Thinking about. And note that she said prevent, not catch. Let me repeat — they do not want to even think about how to prevent cheating. A university provost actually said out loud, on TV, that something was preferable to even thinking about preventing cheating.
I’m whatever is two levels beyond dumbfounded. And based on my experience, this is a pretty accurate reflection of where the majority of US colleges and universities are — they simply don’t want to even think about students doing work, getting degrees in ways “we might not consider to be optimal.”
It should be parody. But it’s real.
Although I am sure if I asked Hollins University, they’d tell me how seriously they take academic integrity.
A final note: since the interview was about AI and integrity, the topic of detection came up, about which Latopolski said, in part:
With generative AI … I know that Turnitin has developed something where they’re able to check, you know. At this point, my opinion is that it’s not reliable. We are not a place yet where we can rely on that as a sole form of identifying things that have been generated by generative artificial intelligence.
You probably think I’m going to take exception to that, but not at all.
First, she clearly says it’s her opinion. And it is an opinion. And since she’s an active practitioner in the space, I respect it. Reasonable people can view the reliability of these detection systems differently. That’s fair.
Latopolski is right to say that we’re not ready to use them as “a sole form of identifying.” No doubt — they should never be used that way. No one suggests otherwise. At most, at best, they are useful corroborators or insightful alerts. To me, teacher suspicion and an AI detection indication are probably enough to trigger a student conversation or a re-do of the assessment under different conditions. Or perhaps even initiate a formal inquiry. But the first bit for sure, at a minimum.
Anyway, it’s a great interview segment and, in places, represents the deep and damaging disconnect among education leaders — one says cheating is a problem and talks about Chegg, while the other positions cheating as something they might possibly, maybe, consider less than perfect, though they’d prefer not to think about it.
Nevada Professor Answers Questions About Cheating and Integrity
Professor Robert Ives of the University of Nevada, Reno answered some questions about academic integrity in the student paper recently.
I love that the student newspaper did this and I am happy to say that Professor Ives is mostly right about everything, which could easily have not been the case. The professor says, for example:
We know that cheating (including plagiarism) happens more often than we would like and more often than most of us would guess. On anonymous surveys, more than half of university students in developed countries acknowledge some kind of cheating.
Yup.
We know that personal characteristics are weak predictors of which students are most likely to cheat. Studies looking at characteristics like age, major, whether or not students are on scholarships, sex, race and ethnicity, and year in school typically find weak relationships between these characteristics and rates of cheating or no relationships at all.
Mostly. We know that men tend to try to cheat more than women. But otherwise, true. And we do know, or are starting to learn, that personality traits are highly related to cheating behaviors. So, at least that personal characteristic is a good predictor.
He continues:
We don’t know what works to reduce cheating. The quantity and quality of research on reducing cheating in higher education have been criticized. Most studies are quite small and are based on relatively weak research methods.
Fair point about the research, though personally, I don’t think it’s true that we don’t know what works to reduce cheating. We do.
But I love how Ives correctly lumps these related things together:
We don’t know how Artificial Intelligence (AI) will affect cheating in higher education. For example, ChatGPT was just released in November of 2022. Students have already been using, and have been encouraged to use, other digital writing tools, such as Grammarly, Google Translate, and others, for their academic work. There is no consensus about when the use of these digital tools is considered inappropriate, and most universities have not yet developed clear guidelines about the use of these tools.
This is important because I suspect many educators have not absorbed that tools such as Grammarly, Google Translate and Quillbot are AI-powered tools that alter student writing. That is to say, they write for students. They, and ChatGPT, are different fruit from the same tree. And he’s right that most schools have not yet even considered policies for which of these tools, in what circumstances, should be disclosed or disallowed.
It’s a good piece and I wish more schools would do it.
Moody’s: Cheating with AI May Be “Credit Risk” for Colleges and Universities
Back in May, the credit rating agency Moody’s issued a report — a sector comment, actually — about the use of AI in higher education. The title is:
Artificial intelligence poses academic integrity risks, but provides opportunity for innovation and efficiencies
I did not link to it because you have to be a Moody’s customer to get it. It’s taken me since August to get a copy.
Anyway, in the very shallow document, Moody’s analysts say that AI can help colleges become more efficient and make better decisions. They say AI can:
present substantial opportunities for innovation and improvement in educational offerings
But more interesting to me is that they start by saying that AI products:
have the potential to compromise academic integrity at global colleges and universities, a credit risk.
They add:
Students will be able to use AI models to help with homework answers and draft academic or admissions essays, raising questions about cheating and plagiarism and resulting reputational damage. Awkward phrasing or incorrect grammar from existing chatbots may be easy to detect now, but determining which text is human- or AI-created will become more difficult as the technology improves in the months and years ahead, potentially increasing time and costs to monitor integrity.
True, though it assumes that schools are interested in, or invested in, monitoring integrity in the first place. Many, as we know, are not — preferring to pay the price of “reputational damage” instead.
In any case, Moody’s has advised investors and education providers of the risks associated with AI-powered cheating. So there’s that.