UPenn Professor: I Think Everybody is Cheating
Plus, an opinion piece on ChatGPT and cheating from a Cornell student. Plus, two potential cops are dismissed for cheating. But wait. Plus, Chegg and SxSW Edu.
Issue 187
If you enjoy “The Cheat Sheet,” please consider joining the 11 amazing people who are chipping in a few bucks a month via Patreon. And to those who are, thank you!
University of Pennsylvania Professor: "I Think Everybody is Cheating ... I Mean, It's Happening.”
A few days ago, NPR ran a story on - what else? - cheating and ChatGPT.
The story is more or less an interview with Ethan Mollick, a professor at the Wharton School of Business at the University of Pennsylvania, which you probably know is an Ivy League school. It plays off the news that ChatGPT is reported to have been able to pass exams at the venerable business school.
In the article, Mollick discusses the equilibrium that teachers will have to find between using AI as education tools and blocking their encroachment on academic integrity. Fine. Fair.
But it’s this quote that stands out not only for me, but for NPR - as they made it the headline:
"I think everybody is cheating ... I mean, it's happening. So what I'm asking students to do is just be honest with me," he said. "Tell me what they use ChatGPT for, tell me what they used as prompts to get it to do what they want, and that's all I'm asking from them. We're in a world where this is happening, but now it's just going to be at an even grander scale."
Mollick is right, of course. Everybody is cheating. It is happening. Though I do think we should dig a little bit into the logic of asking people actively engaged in fraud and deception to, “just be honest.”
Mollick says he’s not only allowing the use of ChatGPT in his classes, he’s requiring it. And I am in no way saying that’s wrong.
Earlier in the story, he says:
"The truth is, I probably couldn't have stopped them even if I didn't require it,"
I’m not sure that’s true, but I highlight both quotes to surface what I see as a sobering reality - that instructors at some of the best-regarded institutions in the world have a defeatist attitude toward cheating. They know it’s happening at massive scale and feel powerless to stop it.
It’s an attitude I think I’d share if I were in a classroom today.
Teachers certainly aren’t getting much support on this issue from their schools. Half the time their colleagues use the prevalence of misconduct as a backhanded way to criticize their teaching - just change your assessments, they say. If a computer can pass your test, you have a bad test, they say. Accreditors are nowhere to be seen on this issue. Chegg and Course Hero are collecting billions of dollars to aid cheating while critics blithely and blindly obsess over the tools that help preserve integrity. It’s more than enough to foster a sense of futility. I get it.
Anyway, an Ivy League professor is embracing ChatGPT because he thinks he can’t stop the cheating even if he tried. His solution is to ask for honesty. That really is where we are.
Right On Cue: A Student Editorial from Cornell
Right behind the piece from UPenn, I’m sharing this opinion article from a student at Cornell University.
There is plenty to untangle, though I recommend reading it.
The student opposes banning ChatGPT, as he says several Cornell professors already have. He opines:
I object to ChatGPT bans not because I love the technology but because there’s no good way to enforce them. Even the most zealous GPTZero-ers are hedging their bets on underdeveloped watermarking technology that most 9th graders could find a way around. ChatGPT prohibition punishes honest students and rewards unscrupulous ones — the same complaint an upstanding Cornell student during the spring 2020 semester would have made about take-home closed-note exams.
I’ve read that last sentence five or six times now and I still don’t get it. I don’t understand how a ban on ChatGPT punishes honest students and rewards cheaters.
Nonetheless, our student seems to hang his hat on AI-text watermarking, which does not exist yet, and on GPTZero - the detection software developed by a student at Princeton - which, as I’ve pointed out a few times, simply is not very good (see Issue 180). But GPTZero is not now, and will not be, the only option.
It is true that determined cheaters can hack their way around GPT detection, as they can to some extent right now with other fraud-enabling moves. But it’s absolutely not true that “there’s no good way to enforce” a ban on ChatGPT - like I said, some of the detection tools already in the market are pretty darn good. And they will get better.
Still, the student is, in my view, on to a meaty point when he shares the following two items:
Prof. Lionel Levine, mathematics, foresaw this quagmire while preparing for his spring classes. Levine was trying to use ChatGPT to solve his class’s problem sets when he had a realization: By the time a student could give ChatGPT enough context to get full credit, she would have done more work than if she had simply solved the problem by hand.
And:
I found Levine’s sentiment to be true while trying to do my homework with ChatGPT this week (for research purposes, of course). I put an essay prompt into ChatGPT, and it spit out a cogent essay within seconds. I was amazed!
But as I read the essay over, I noticed the piece was redundant and contained several factual errors. By the time I reworked the paper to my liking, I hadn’t saved myself much time at all. If ChatGPT’s work merits a B+, perhaps we ought to have higher standards. Harder humanities classes could put ChatGPT out of business.
As I’ve also said before (see Issue 170), ChatGPT simply is not very good at academic work. It’s wrong often. It makes things up. Its writing is formulaic. A student who simply turns in what GPT writes is highly likely to be caught. That is, if their professor invests the time to find it. Which means, of course, that it is entirely possible to enforce a ban on ChatGPT - all anyone has to do is do it.
Still, revising the AI’s output, editing it, checking it, is probably a valuable way to engage academic material. Perhaps not the same as finding, analyzing, synthesizing and communicating source material personally - but nonetheless not useless.
It’s a good point.
Our anti-ban student continues:
I’m hopeful that the specter of students using ChatGPT to complete assignments can get rid of pointless homework. In my four years on the hill, particularly in larger classes, I’ve written dozens of thoughtless two-page reflections that were returned after a few days with an arbitrary grade and no other feedback. ChatGPT could be the cure for this and a litmus test for homework: if a robot can complete an assignment, it’s not worth assigning.
OK, two things about this that I hate. Like, actually hate.
One, it’s just otherworldly to me when students decide what assignments are “pointless.” As I have analogized often, I’ve never once had the urge to tell my doctor which tests and procedures are useless. Or to pull an airline pilot aside to tell her which safety checks and routes are less suited for my flight. I bought my ticket; I’m paying for the expertise of these professionals.
Two, here again we see the canard that “if a robot can complete an assignment, it’s not worth assigning.” Really? Did we not just say that this robot’s work “contained several factual errors”? That’s rhetorical. We did just say that. Clearly, the robot cannot do the assignment, ergo….
And just maybe the learning objective of assigning those “thoughtless two-page reflections” isn’t in what they say. Maybe it’s in the practice of having to do them - like going to the gym. No one complains that running on a treadmill is pointless because you don’t go anywhere. Geographic change is not the objective. Training is. Habit is. Effort is.
Our author later acknowledges this possibility - that the learning is in the doing, not always within the four corners of the paper - though he dismisses it.
To which I say: Nonsense.
His confidence is noted. His authority is not.
I’ve already gone on too long about this piece. But that’s not going to stop me from sharing two more points. And it really is an insightful, if misguided, read.
One, our Ivy scribe says that another reason he would not ban ChatGPT is that:
If professors give up on banning ChatGPT and instead decide to live with it, chatbots could keep teachers accountable in making sure a Cornell education is … “the genuine awakening of a human being.”
I’m not entirely sure that I get this point either. But the idea of ChatGPT holding teaching accountable is interesting. And noteworthy.
If teachers are, as our author says, “return[ing papers] after a few days with an arbitrary grade and no other feedback” - and I think we all know that some are - that’s not good. As such, if ChatGPT incentivizes some of them to read work more closely, give better feedback, and more clearly structure and share their learning goals - that’s good.
Finally, our Big Red writer writes:
I still wouldn’t bother banning ChatGPT. If students can cheat without getting caught — and they can — then someone saying “please stop cheating” probably won’t be the thing to stop them.
Here, he’s right. Students can cheat without getting caught. And saying “please stop cheating” isn’t likely to deter them.
But he misses that stopping cheating is not the only reason professors or schools may have to ban ChatGPT and its ilk. Making clear that original academic work has real value seems like a fine one to me. Valuing the process of learning is good too. Making clear what conduct will and will not be accepted is another good reason.
That cheaters can still cheat does not obliterate the need to make guidelines clear. People steal and get away with it. That does not mean we should stop telling people not to do it, or just learn to live with it.
Two Nebraska Police Cadets Are Dismissed for Exam Cheating
Police cheating on exams is not a new story (see Issue 17).
Accordingly, it’s not huge news that two would-be officers in Lincoln, Nebraska were kicked out of their police academy for cheating on exams. At least it probably should not be.
The pair of recruits who were removed saw a copy of an exam before taking it. Five other cadets, the reporting says, observed at least part of the cheating but failed to report it. Those five were not disciplined.
First, I am glad that allegations of misconduct were made and investigated and that substantiated cases resulted in dismissal. That’s good. Of all the professions in which we should not want cheating, law enforcement should rank rather highly.
But before we feel too good about that, there is this - from the news report:
[the dismissed cadet] said he was grateful for his time in the academy and that he intends to pursue a career in law enforcement elsewhere in Nebraska.
So, that feels right. Even when we take cheating seriously, we don’t. Not really.
Chegg Sponsors SxSW Edu
You may remember that last August I asked readers of “The Cheat Sheet” to vote for and against some proposed panels for the upcoming SxSW Edu conference (see Issue 144).
Then in November (see Issue 167) I noted that SxSW had approved a panel featuring cheating provider Chegg, which was disappointing. Especially considering that none of the pro-integrity panels were approved. Disappointing, but not surprising.
Then, more recently, Chegg was added as a sponsor of SxSW Edu.
Maybe it’s a coincidence that no anti-cheating discussions will be on the stages at SxSW Edu while Chegg is writing them checks. But it is frustrating that institutions and publications and events that supposedly know about education, and supposedly care about it, keep taking Chegg’s tainted money while pretending not to know what the company does.
If you have the stomach for it, jump over and tell me what it’s worth for Chegg to share that sponsorship space with those schools and organizations and media outlets - without a word of reality about the services they actually sell.
I do understand how hard it is to turn down money. But it’s important to remember that the opposite side of the Chegg equation has the word “integrity” in it.