Survey Finds 'Degree Apathy' a Major Driver in AI Use
Plus, Baltimore mom still fighting daughter's academic misconduct citation. Plus, sometimes, I just can't.
Issue 305
Subscribe below to join 3,882 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 18 amazing people who are chipping in a few bucks via Patreon. Or joining the 39 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
Survey: Nearly a Third of College Students Willing to Use AI on Coursework, Disengagement a Major Driver
I have not read the paper itself yet; I am behind in my reading.
But the coverage of a recent survey of 160 undergraduate students by scholars at Swansea University says:
Thirty-two percent of respondents expressed a willingness to use AI tools such as ChatGPT for their assignments, and fifteen percent admitted they had already employed such tools in the past.
That feels pretty right. In Issue 264, I averaged a dozen or so surveys that asked this question; the average was 37%. Though I think most of those results measured actual use of AI, not willingness to use it. And I do think students are reluctant to tell their school they used AI on schoolwork, even in an anonymous survey. Keep in mind as well that surveys about illicit or disfavored activities reliably undercount actual rates of occurrence.
Anyway, a third and 15%.
Probably more interesting is that the study found student apathy toward their degree program to be strongly linked to a willingness to use AI tools. From the coverage:
Lead author Dr. David Playfoot explained, "The influence of the Big Five personality traits is typically significant in behavioral studies. However, much to our surprise, participants' level of apathy towards their degree program overrode all of them. Our study revealed that students scoring higher on our degree apathy scale, indicating a lack of interest or engagement with their degree program, were more inclined to express a readiness to use AI tools for assignments.
"It doesn't matter if someone is generally conscientious, if they're disengaged from their degree program, they're still more likely to use AI tools for their assignments."
That makes sense to me. If a student does not care, shortcuts are rational. If you don't care about the journey, the fastest, easiest path is the best choice.
Also from the reporting:
The study also explored the impact of risk and consequences on the likelihood of cheating with ChatGPT. Results showed that students were less likely to cheat when the risk of detection was high or the punishment for cheating was severe. However, those with higher degree apathy were still more likely to engage in academic misconduct even under increased risk.
You don’t say.
It is genuinely strange that students are “less likely to cheat when the risk of detection was high or the punishment for cheating was severe.”
I hope you’re picking up on my sarcasm. Because I’m laying it on pretty thick.
The article quotes study co-author Dr. Andrew G. Thomas:
"It also points to an element of diminishing returns when it comes to deterrent, suggesting that scorched earth policies aren't necessary for discouraging students from misusing AI—though those deterrents may need to be stronger for those disengaged from their studies.
I'm not sure who, aside perhaps from occasionally me, is suggesting a scorched-earth policy. But it is true that no matter the risk of detection, no matter the magnitude of the penalty, some people will still cheat. The impact of these policies diminishes at a certain point, because you'll never get to zero. But that's true of every integrity intervention, including boosting student engagement. Some super-engaged students are still going to cheat.
Still, I call your attention to the part that goes “deterrents may need to be stronger for those disengaged from their studies.” And since I’d wager that academic disengagement is ridiculously common, deterrents should probably just be stronger across the board.
Yes, investing in student engagement will help as well. But why can’t we do both? Let me rephrase — why don’t we do both?
When I get to the actual paper on the survey, I’ll let you know.
Baltimore Mom Still Fighting Daughter’s AI Cheating Determination
In Issue 294 we briefly reviewed a situation in Baltimore involving a high school student who was flagged for using AI on a writing assignment. She received a zero on the assessment and a notation for academic misconduct, which went on her academic record. The student’s mother has disputed the finding and appealed. So far, the school system has upheld the finding, the process, and the outcome.
Nonetheless, mom continues to argue her daughter's innocence, this time in a TV interview with the local FOX station. As an aside, you really should watch the clip, if only for the reporter's over-the-top TV announcer voice. It's absurd. No one talks like that.
But, as I said in Issue 294, saying you did not cheat does not mean you did not cheat.
In this most recent round of publicity, the TV station does us a favor by briefly showing parts of the letters the school district wrote to the parent during her appeals. From them, we get a little more information about what went down.
To start, and as included previously, the school and teacher used GPTZero to check the assignment, which no school should do. Ever. It’s terrible.
And although the district's letters do not say so explicitly, they imply that the teacher suspected something was off with the assignment before using an AI detector. Letters from the school say the teacher felt the work was inconsistent with the student's previous work in syntax, style, and sentence structure. Suspicious, the teacher ran the assignment through GPTZero, which, according to the reporting, returned a verdict of 90% likely to be AI-generated.
As I have also said before, that should be the process we want. Teacher first, technology second. When both think there is a problem, there probably is a problem.
A letter from the district also says:
Revision history shows 12 [unreadable] & paste corresponding to suspicious passages and those identified as AI
It’s not clear what software recorded the revision history, but still — pretty damning. It’s little wonder the school district is holding its ground.
The school did offer to remove the citation of misconduct from the student’s record, provided she had no further integrity incidents. But in the interview, the mom says, “It just doesn’t seem fair.”
Yes, fair. Mom is complaining about fairness.
Maybe the coverage of this case, and the rumors in school halls, will chill cheating a little by showing that it's possible to be caught and sanctioned, and that the school system will hold fast. Maybe. But it's just as likely that this case will be cited as more "evidence" (full air quotes on that one) that AI detectors are punishing the innocent.
Sometimes, I Just Can’t
A friend of The Cheat Sheet sent this image along, which is, in my view, classic:
In case you cannot see it or read it, the image appears to be an e-mail from a student to their professor, turning in an assignment.
Problem is, the e-mail is a forward from an essay mill, “Unemployed Professors.” The essay mill tells the student:
We are glad to inform you that your plagiarism-free assignment is completed.
The attached PDF document is even titled "Essay-Client1983."
In the image, the professor asks what mark she should give the student. I'm sure it's rhetorical, but I will answer anyway. Were this somehow my decision, I'd expel the student. Gone. It's clear they do not respect the teacher, their classmates, the school, or the educational process. As such, they should not be there. It's a waste of everyone's time and money.
And please, someone tell me again how using a plagiarism detector or proctoring a remote exam violates the sanctity of trust between teachers and students. Go ahead.
Still, pretty funny.
Class Note:
I used ChatGPT to proofread this. I made four corrections as a result.