Big News from Fast Company: 49 People on Reddit are Upset
Plus, another important integrity demo on October 18. Plus, most professors think generative AI makes cheating easier, 60% want help.
Issue 242
To join the 3,654 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. New Issues every Tuesday and Thursday. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 14 amazing people who are chipping in a few bucks via Patreon. Or joining the 23 outstanding citizens who are now paid subscribers. Thank you!
Fast Company, Others Report on Reddit Rants
Over the past several years, Fast Company has generally reported quite poorly on issues of academic integrity. They’ve given awards to cheating providers, butchered actual information-based journalism, and never missed an opportunity to elevate the questionable and sensational over the actual.
But this recent article probably takes the cake. The headline is:
Study: 49 students on the ‘harrowing’ threat of being accused of ChatGPT cheating
Harrowing threat.
Mind you, it does not say that the threat is of being improperly or inaccurately accused of misconduct - just accused.
They report that, as ChatGPT and related tools become more popular, cheating increases and so, in turn, do the number of misconduct accusations and cases. Makes sense. There’s nothing wrong with that. As cheating increases, you’d both want and expect the number of cheating citations or inquiries to escalate correspondingly.
But at Fast Co, that’s somehow an issue - not that cheating is up, but that the accusations are up. And that some people being accused of cheating are upset about it.
The big problem here is that Fast Company, and others, took to writing this based on a “study” of misconduct allegations from - wait for it - Reddit. Yes, what people typed on the anonymous, back-room chat-board of the Internet is news to Fast Company, courtesy of someone writing about it and some journal publishing it.
It would be a joke if the topic were not serious and the consequences real. But an assistant teaching professor at Drexel University wrote a paper reporting:
Data comprising 49 Reddit posts and discussions between December 2022 and June 2023 were collected. Students shared their experiences, often asserting false accusations, and discussed strategies to navigate these situations.
That’s from the public abstract because I could not rationalize paying the $50 for the entire paper.
But, again - freaking Reddit. We are sourcing research on misconduct from Reddit, courteously recycled and amplified by Fast Company. And those who popped off on Reddit were “often asserting false accusations,” which means exactly nothing. That people do not admit to cheating does not mean they were not cheating.
And as far as I can tell, the key finding of the paper is that people who are accused of cheating do not like it. I am not sure that’s news. Moreover, I am not sure what you expect to find on Reddit. People don’t go to Reddit to share how well things are going.
Two more points here from the Fast Company story about people being upset on Reddit.
One, Fast Co writes:
Given that professors essentially have to guess whether they think an assignment sounds like AI or not, the feedback—most notably, anger and frustration—seems called for.
First, that is simply not true. Professors do not “essentially have to guess” about these things. Even setting aside detection technologies, in the best of circumstances, educators know what their students know. They know when AI invents facts or citations, or writes without conviction or authentic voice. Those are not guesses; they are informed judgments based on expertise and experience. That is the reason we pay teachers - because they know things. Saying that teachers have to guess about the performance of their students is ignorant and insulting.
Also, thanks Fast Company for saying that anger and frustration seem called for. Journalistic objectivity it is not.
And finally, the article quotes the author of the study, from Drexel. He says:
“Some discussions went into great detail about how students could collect evidence to prove that they had written an essay without AI, including tracking draft versions and using screen recording software. Others suggested running a detector on their own writing until it came back without being incorrectly flagged.”
Tracking drafts, screen recording - sure. But people suggesting - and our professor repeating - that students run AI detectors on their own work and revise until it is no longer flagged is a big reveal. Let me just say that studiously avoiding the security cameras is not evidence of innocent intent. Personally, I think it implies precisely the opposite.
Integrity Technology Demo - October 18
Cursive, an integrity solution I know a little bit about, and think highly of, is hosting an open demo of their solutions on October 18.
You can sign up here. Or at:
https://cursivetechnologyinc-901.my.webex.com/weblink/register/r6699960664b5243a4bc0a5f14a24e597
Please check out what they’re doing, if you can.
Reminder: academic integrity and assessment platform Examind is also hosting an online demo on October 18 (see Issue 241).
Phys.org: “Large Majority” of Professors Believe Generative AI Makes Cheating Easier
Phys.org recently published a story based on a survey from education publisher Wiley and other sources.
The gist is that, according to Wiley and Phys.org, a majority of professors say their students are already using generative AI in their classes. Sixty percent say they are “very familiar” with these AI tools.
The article also reports:
The large majority believe it will allow students to cheat more easily and make it harder to detect cheating. More than six in ten are looking for new ideas and solutions to address these concerns.
It’s a pattern we’ve seen before - the more educators know about generative AI, the more likely they are to worry about its misuse (see Issue 221).