ASU and OpenAI
Plus, US News on college cheating. And the University of Regina investigates cheating among nursing students.
Issue 270
To join the 3,744 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. New Issues every Tuesday and Thursday. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 32 outstanding citizens who are now paid subscribers. Thank you!
Arizona State University Announces it Will “Collaborate” with OpenAI/ChatGPT
You likely saw the news this past week that Arizona State University announced a partnership with OpenAI, the maker of ChatGPT, the best-known generative AI text bot.
Subsequent reports have clarified that the deal does not provide students with access to ChatGPT, only professors and research professionals. That reporting also says input from those at ASU will be kept confidential and not used in the so-called training of the AI.
Even though the deal does not appear to extend to access for students, it will have significant academic integrity implications.
To start, if you know anything at all about misconduct, you know there’s an ocean of difference between using a tool and using a tool that’s endorsed by your school. Like most schools, ASU has left questions of appropriate AI use to individual educators and departments. But by collaborating with OpenAI, the lines of approved and appropriate use are blurrier than they were last week.
Further, as is always the case when educators flirt with or endorse misconduct enablers, I feel for the ASU folks who may have to deal with AI-related integrity cases. Explaining how using ChatGPT was bad while the school itself uses and praises it is — let’s say it’s tricky.
I say they may have to deal with those cases because the OpenAI announcement got me to pull up the ASU integrity policy and it’s a gigantic disaster, making it clear that ASU is very, very unlikely to pursue, or even know about, misconduct with generative AI.
It says:
The use of Generative AI/ChatGPT falls within ASU's Academic Integrity policies and processes.
Cool. That’s good. But it also says:
The accuracy of AI detection tools is not reliable; any results from these tools should be used for nothing more than a starting point for a conversation between faculty and the student whose work is in question. Suspected use of Generative AI in coursework is not sufficient evidence to begin a formal Academic Integrity investigation. Instead, we recommend faculty document their expectations early and often, and have open dialogs with students about the implications and responsible use of Generative AI in coursework and academia.
First, that’s just embarrassing. Not only are good AI detectors reliable, but they should always be used to start conversations.
But if you read closely, ASU goes further by adding that AI detection should be used for “nothing more” than conversation, which is jaw-dropping. Adding that “suspected use of Generative AI in coursework is not sufficient evidence to begin a formal Academic Integrity investigation” is literally telling students that they cannot possibly be formally sanctioned for outright cheating with AI.
In other words, ASU is saying, in writing, that if a student uses AI/ChatGPT to fraudulently submit academic work, there is zero chance they will be subject to formal inquiry because “suspected use” is not enough and the school blanket dismisses AI detection. All a professor can do at ASU, it seems, is talk to their students. That’s it.
And let me tell you how that conversation goes.
Teacher: Jordan, this writing does not feel like your work. It’s bland and cites a few things that aren’t true or accurate. And the AI detector says it’s 100% likely to be written by AI.
Jordan: Thanks for the note, professor. I did not use AI.
Conversation over.
Since “suspected use” is not enough, what then? I will tell you what then — absolutely nothing. If that’s the policy at ASU, which it is, why would a professor bother?
It’s rhetorical.
So, now we have ASU officially working with OpenAI and telling its students that they should cite AI when they use it. But if they don’t, well, their teacher may talk to them.
Openly refusing to enforce any institutional guidelines for misuse of AI when more than one-third of students admit to using it for school work (see Issue 264) is indefensible. If you close your eyes and take a slow breath, you can actually feel the downward pull of ASU academic standards.
I mean seriously, what’s the point?
But before I exhale and move on, I’ll point out that ASU’s policy on academic integrity and AI also says:
Students and faculty should also ensure any AI-generated citations are correct, as generative AI tools are notorious for listing nonsensical citations.
Meet ASU’s new collaboration partner. It’s “notorious for listing nonsensical citations.” Good stuff. Thumbs up.
US News Writes on College Cheating
US News ran a long and high-level article on cheating in college this week.
It’s decent and I’m always thankful for high visibility on the issue. I’ll only share a few bits from it, such as:
Cheating is a multibillion-dollar business, with some educational technology companies making money off students who use their products to break or bend academic integrity rules and others earning revenue from colleges trying to prevent academic dishonesty.
True. They mean Chegg. And Course Hero. And others. And anti-cheating companies do make money too. But choosing between the two models — one illicit and destructive and one centered on integrity — there’s really no choice to be made. Still, good for US News for saying cheating is big business. It is.
Also, a retired professor described ChatGPT and Google Bard as:
the future of cheating
And a current professor, Rebecca Hamlin, at the University of Massachusetts, Amherst is quoted as saying she:
has seen cases of students caught cheating with ChatGPT. She caught 12 in her own classes during the spring 2023 semester.
And I love this quote:
Most instructors underestimate just how rampant the issue is, says Eric Anderman, a professor at The Ohio State University and interim dean at its Mansfield campus. "We think we're underestimating it because people don't want to admit to it."
Love.
The last bit I’ll share is this, which I hate:
Regardless of the cheating method, students are only harming themselves and their learning process, experts say.
Experts do say that. But I hate it because it’s depressingly incomplete. Academic cheaters hurt their teachers. They hurt the schools they attend. They hurt alumni. They hurt the general public. And their future employers. And they really hurt honest students.
Anyway, the piece is good and worth your time.
University of Regina Investigating Misconduct by Nursing Students
According to local reporting, the University of Regina (CAN) is investigating about 50 cases of academic misconduct by nursing students.
Nurses.
When I want to get people to pay attention and understand cheating, I often say that the three subjects with the most frequent cheating are business, engineering, and nursing.
Maybe it’s me, but I’m terrified.
The misconduct cases arose from exams taken this past December and a school spokesperson said:
We can confirm that some of the investigations have determined that academic misconduct did occur, while others resulted in a determination that there was inadequate evidence to support a finding of misconduct
And we all know that “inadequate evidence” does not mean not cheating.
In any case, good for The University of Regina for sharing their information. I have far more confidence in a school that catches cheating than one that pretends it does not happen.