Duke Student: We Cheat So We Can Spend More Time Camping Out for Basketball Games
Plus, oh QuillBot. Plus, a professor writes about the trade-offs that must be made to stop cheating.
Issue 277
Duke Student: Students Will Use ChatGPT to Cheat No Matter What, Because
Last month, a student at Duke University wrote an article in the student newspaper about cheating with ChatGPT.
It’s an important read and an insightful masterclass in entitled rationalization, which is an enormous driver of misconduct (see Issue 64). The study in Issue 64 calls the ability to rationalize cheating “the critical piece” in moving from opportunity to action.
So, this piece is important because it centers on the rationalization of cheating.
Before jumping in fully — a little housekeeping. Our student, who seems to be in the computer/STEM fields, erroneously says that:
Some schools have chosen to ban the use of large language models (LLMs) like ChatGPT completely
That comes from misreporting, or a less-than-close reading, of news that some schools blocked access to ChatGPT on their own servers and networks, which is different from banning its use. I am not aware of any school that has banned the use of ChatGPT as a matter of policy.
Though that is not a big error, it’s where our writer pitches his argument tent, saying:
allow me to let you in on a little secret: Regardless of professor policies on AI use, some (arguably many) students are going to use ChatGPT to aid or fully complete their assignments.
I do not think this is a secret.
In Issue 260, we shared survey data showing that two-thirds of students (68%) said they would continue to use generative AI such as ChatGPT even if their teachers prohibited it. Given how self-reporting works, 68% is a floor, not a ceiling.
Why will students continue to use ChatGPT despite any prohibition? Here is the important part. He continues:
This isn’t because students want to cheat, but rather because their classes aren’t serving their learning interests. Goal misalignment between students and Duke’s curriculum is causing the rampant use of ChatGPT on assignments.
And there it is.
Cheating is the school’s fault.
Students with twelve minutes of life experience and zero minutes of career experience know what their courses and colleges should be doing. And since they think they’re not getting whatever that is, they cheat.
Here’s another, slightly longer example:
I know way too many students with 4.0 GPAs in computer science that can’t create an API, much less call one. This is fundamental to the role of software engineers (the career outcome most computer science students strive for). Instead, students spend 40 hours in a single week designing a basic computer for one class — something less relevant to the role most strive for. So, it starts to make sense that students cheat, using LLMs to get work done faster. Our classes feel like a distraction from our goals.
Cheating “makes sense” because students think classes are not consistent with whatever they believe their goals are.
Here’s another blatant rationalization:
And using LLMs to cheat is actually preparing students for careers. A recent survey found 92% of professional software engineers use generative AI while they code. So this is actually preparing them for the tools they will use in the workforce.
Cheating is preparation. Got it.
It also never seems to have dawned on our wayward scholar that, if AI can code for you, there’s no reason anyone should pay you to do it. At least not as a career.
Want another? You want another. I aim to please. Here is another:
Duke selects ambitious and exceptional undergraduates. Wouldn’t it be more concerning if students didn’t try to orient their time towards their goals?
It should concern Duke if their students do not cheat. Makes perfect sense.
These programs aren’t meeting our interests and goals, so we’re going to cheat. We have to cheat in our careers, so we cheat. You should want people who are ambitious. So, we cheat.
Maybe it’s me, but I can already hear him explaining all that to the judge at his sentencing for white-collar fraud. Fine, it’s me. But I do.
And let’s drill down a bit on those “interests” that our student says his courses aren’t meeting. He writes that students:
use ChatGPT to give themselves more time to focus on job preparation and socializing. In computer science, this means using AI to quickly complete assignments to enable working on Leetcode interview questions, side projects or tenting in KVille.
KVille, for the uninformed, is the practice of undergraduate students camping outside the basketball stadium so they can get into the games. So, students cheat with ChatGPT to get more time for “socializing” and for sleeping in tents before basketball games.
And it’s the school that is letting him down — not the other way around.
I mean, with as little snark as possible, if you’re cheating so you can spend more time at basketball campsites and parties, why are you in college? You already know what you need in your career. You already know how to use ChatGPT to do the work you think will be required. You think your courses aren’t helping you. So, why are you paying tuition? Quit. Then you’ll have way more time for socializing and preparing for job interviews. Right? Isn’t that the goal?
Finally, our future Duke alum returns to the core excuses for academic fraud, blaming the school:
There are a million other ways to cheat. If classes continue to fail to provide the value students hope to derive from college, it won’t matter if AI is banned or not. Students will always find a way to cheat.
They will. They will also, almost always, find a way to rationalize it.
Please go read this. This kind of thinking is the dark heart of this dark problem.
A Professor Writes About the Trade-offs That Must Be Made to Stop Cheating
This article by Rafael Perez, a professor at the University of Rochester, ran in the Orange County Register in December. But it juxtaposes really well with the story above.
The headline is:
Generative AI is a big problem in college campuses
I’ll take “statements of fact” for $200.
On generative AI, Perez writes:
Speaking to instructors, I hear many different strategies for combating this form of cheating – some want to fight it and others are choosing to embrace it.
I think it’s likely an accident of phrasing, but the idea that some instructors are choosing to embrace “this form of cheating” feels exceptionally right. Though, in fairness, I think a substantial portion of them aren’t embracing it so much as surrendering to it. I’ve written many times about professors who do not think it is part of their job to care about cheating.
Either way, that’s not the important part of the article.
The professor writes:
The first general strategy is to make it difficult to use AI in the first place. An instructor may try to develop course material in a way that would make it less likely that ChatGPT would be able to answer questions about it.
I’ve tried this method myself but it has severe limitations. ChatGPT is capable of answering specific questions about highly tailored material. For example, I asked ChatGPT to explain a problem that “Professor Perez” presented in class against a particular theory. It was able to accurately guess, partly because there are only so many problems that have been presented against a given view. In cases where the chatbot gets the answer wrong, it’s often still good enough to warrant partial credit.
This feels entirely reasonable. Frankly, I am exhausted by the glib solution offered by some that, to deal with generative AI, professors should simply change their courses or assessments. That approach is, at best, limited, and in some cases, downright impractical.
He continues:
Another strategy that has gained traction is incorporating AI into the learning and assessment experience. This can be done by having students critically engage with the answers ChatGPT gives to course questions or by having students use the chatbot to create an essay outline and then write the paper themselves. I assume that the thought here is that we’re instilling in students that it’s perfectly fine to use AI as a supplementary resource. The problem is that there is no reason for students to refrain from using this resource to just write the rest of the paper for them.
That also feels true. The line between “good for this” and “disallowed for that” is difficult to draw and hella hard to enforce.
Our professor also says some teachers are asking students to write in class, on paper — where access to AI is highly limited. Though, he says, here too there are trade-offs.
And, further:
It’s actually not terribly difficult to discern when a student has used AI in an assignment. GPT simply doesn’t write like an undergrad and it will often use bizarre reasoning or just plain falsehoods. The problem is that unless the student confesses, it can be near impossible to prove that they cheated.
Some instructors have turned to using online AI detectors to catch instances of cheating. Unfortunately, these detectors often classify completely original work as AI generated and so they aren’t much help in proving that an assignment is not the student’s own.
Here, I agree and disagree. I imagine that, if you are paying attention and care, and get to know your students, it is easy to spot AI-created text. And to me, the informed assessment of an experienced, trained, and compensated professional ought to be enough. When a doctor tells me I have the flu, I don’t make her prove it.
Also, I think there is an array of options to address suspected cheating that do not require proof, starting with a conversation about the suspicion. An oral re-test or hand-written re-assessment seems appropriate and not entirely complicated — if class sizes are manageable enough.
Moreover, you must know I am going to disagree with the idea that AI detectors “often” misclassify original work as AI-generated. I don’t know for sure how this notion infected academic thinking, but it’s just untrue (see Issue 250), at least as applied to good, purpose-built systems. No one should use the others.
Our professor also toys with the idea that we should not care about cheating because what someone learns in college is largely useless anyway. But, he writes:
college is more than just about gaining particular pieces of knowledge. That bachelor’s diploma also indicates to possible employers that you can follow directions and sit in one spot for hours to complete a task. A student that cheated their way through college is not likely to be in a good position to become a valuable long-term employee.
Sitting for hours to complete a task sounds awful, which I write while seated for hours, completing this newsletter. But still, I wish more people understood that the textbook or required reading is only a portion of what you learn in college. And I wish I could sky-write this sentence:
A student that cheated their way through college is not likely to be in a good position to become a valuable long-term employee.
Probably should be who. But I don’t care — amen.
Finally, the piece ends with this, which I wish I could also sky-write and I beg every single teacher to read:
Many of us teaching at the college level have a sense of optimism that if only we talk to our students about the importance of learning and enriching their lives, we can help them see that they are only hurting their future selves by cheating. But this might be overly optimistic. It would still be entirely in the hands of students whether they submit their own work.
We simply cannot allow this to be the case. Students will cheat if left to their own devices – even after a fiery speech about the value of knowledge. Therefore, to protect students from themselves, the onus is on the colleges and on the instructors to limit the opportunities that students have to cheat, even if it entails unfortunate trade-offs.
Yes, colleges and professors must invest in limiting cheating if they are to have any value whatsoever. Conversations and teacher/student relationships will only get you so far, and there is no arguing it: so far is not nearly far enough.
Some trade-offs are necessary. It’s way past time to start making them. The alternative, in a word: irrelevancy.
Oh, QuillBot. Why Are You How You Are?
In Issue 266, we shared the brazen and bizarre development that cheating engine QuillBot, which is owned by the cheating empire Course Hero, had started something called QuillBot University.
Now, QuillBot has topped even itself in blithe self-parody.
This image, from QuillBot University, was sent in by a reader:
Yup, they misspelled university in their own name.
And now QuillBot/Course Hero has gone and ruined a perfectly useful joke about the people who cannot even spell university. Thanks a lot, QuillBot.