(377) Cheating with AI is "A Good Thing"
Plus, an Australian TV show smacks down a Chegg advisor, and the paper that ran his op-ed. And, summer break.
Issue 377
Subscribe below to join 4,760 (-6) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
AI Cheating is a “Good Thing”
The Free Press, the new counter-narrative intellectual publication founded by Bari Weiss, ran an article (subscription required) on AI, cheating, and higher education, the latter being a frequent target.
I realize that the outlet’s self-created reason to exist is to make contrarian takes and say outlandish things. It’s no coincidence that controversy and provocation build an audience.
The headline of this piece is:
Everyone’s Using AI To Cheat at School. That’s a Good Thing.
This is, on its face, an indefensible thing to say.
Cheating is theft. It’s fraud, deception, and in many places, even illegal. It has actual victims, beyond the cheater. Putting “cheat” and “good thing” together is disqualifying from polite, or honest, conversation.
And I do understand that the author is making a different point than a straightforward defense of fraud. But it’s still irresponsible. At best. Any decent editor or publisher should have reined it in.
The author is Tyler Cowen, a professor at George Mason University, in Virginia. I mention this because the subheadline of the article is:
It’s not just the college students. I’m a professor—and these AI models offer keener, smarter, and more thorough suggestions than I do.
That’s sad. And I wonder why anyone would take to public spaces to share, in writing, that they are not as smart as a word-picking bot.
I give Cowen credit, however, for starting his article with this truth:
There’s an epidemic of cheating in American education right now.
I mean — yes, there is.
In what sounds like an ad for generative AI, the piece continues:
These [AI bots] are capable of doing extraordinary things: writing your persuasive essay in under a minute; knowing virtually all of history; and performing first-rate synthetic analyses of complicated questions.
Neato.
On cheating, it goes on:
Accurate data is hard to come by, but one estimate suggests that up to 90 percent of college students have used ChatGPT to do their homework.
And:
As current norms weaken further, more students learn about AI, and the competitive pressures get tougher, I expect the practice to spread to virtually everyone.
And that cheating:
is wrecking a lot of our educational standards.
OK.
And that:
Unlike many people who believe this spells the end of quality American education, I think this crisis is ultimately good news.
Ah, there it is. Wrecking educational standards is good news. Again — this is not, in my view, a credible position worthy of even the cheapest of digital ink. Cowen says this nearly universal destruction of educational standards is good because:
I believe American education was already in a profound crisis—the result of ideological capture, political monoculture, and extreme conformism
Blah, blah, blah. I don’t like it, so I cheer for the flames that will burn it down. Sounds logical and mature.
I don’t want to give crazy any more oxygen but will do one more, so as to buttress the point. Our author says of the LLM-powered AI bots:
These models are such great cheating aids because they are also such great teachers. Often they are better than the human teachers we put before our kids, and they are far cheaper at that. They will not unionize or attend pro-Hamas protests.
Whatever, man. I don’t like the liberal approaches of many of my colleagues, so I want to see the entire system collapse. Nice.
Moreover, if the bots are “such great teachers,” and “cheaper,” and “smarter,” and “more thorough” — resign. Put your money where your byline is. March into your Dean’s office and quit. Stop wasting everyone’s time and money.
After barking at the moon over Hamas, unions, and monoculture, Cowen writes:
The first problem the LLMs expose is that our evaluation systems are broken, inefficient at sorting, and also unfair. If one student gets an A and the other a B, do we know that reflects anything other than a differential willingness to use LLMs?
OK — two things.
One, he’s assuming — I guess — that the student getting the A used a generative AI tool to write, or maybe improve, their work. And if that’s the case, that’s sad. Again.
How many professors have derided the quality of bot-speak, using words such as junk, garbage, gibberish, and worse? Anyone who knows anything about the quality of what a chatbot spits out knows it’s adequate, at best. If this professor thinks junk gibberish is the obvious A, I think that says plenty.
Two, he asks “do we know” whether the product — and in his mind, the grade — is the result of using AI? I don’t understand this question, honestly. I mean, we can know. If we want to — it’s not hard to find out.
But I sense Professor Cowen does not want to go too far down that trail because it destroys the point he’s struggling to make.
Cowen goes on:
The second problem is that the current proposed solutions will make things worse. For instance, I commonly hear the following as potential remedies: Enforce anti-AI rules through the honor code; grade based only on proctored, closed-book, in-class exams; and give oral exams.
But if the current AI can cheat effectively for you, the current AI can also write better than you. In other words, our universities are not teaching our citizens sufficiently valuable skills; rather we are teaching them that which can be cloned at low cost. The AIs are already very good at those tasks, and they will only get better at a rapid pace.
This is word stew. It’s borderline incomprehensible, to say nothing of outright illogical.
If I understand, enforcing anti-AI policies through the honor code, or altering assessment modes so as to limit the opportunity to use AI, will make things worse. Trying to stop the cheating is a less favored outcome somehow. How is that? Because, I’m guessing, it can write better than you (can). Huh?
Let’s assume that’s true, which I definitely do not think is the case. But assuming that a bot can exceed your abilities to write, how does this mean we should not try to stop cheating? One thing has nothing to do with the other.
Further, Cowen’s thought seems to be that trying to teach students to write well is a waste of time because bots can do it better, and cheaper. Kind of like what he said about teaching, only in this case, Cowen outright calls learning to write a not sufficiently valuable skill. Since he feels similarly about teaching, I say again — quit.
He goes on:
Whatever you think of the intrinsic merits of the proposed solutions—can a tougher honor code really work?—they are missing the point.
Well, can the honor code — i.e., enforcing academic integrity rules — work? I think so. I’d at least like to really try that before we decide that writing and teaching are a waste of time.
For Cowen, the question misses the point because his point is not about cheating at all. It’s about how much he despises academia. It’s a game he gives away in the next sentence, saying that the “problem” with these solutions is that:
they implicitly insist that we must do everything possible to keep wasteful instruction in place.
Dude, you’re an instructor. Quit. I’m really not sure he’s good at it anyway, since he writes of AI-generated comments an OpenAI chatbot produced on a paper from a PhD student:
Maybe they are not all on-target—how would I know?!
That’s top-shelf teaching right there. Quality stuff.
One more:
Suddenly we are realizing that the skills we trained our faculty for are also, to some degree, obsolete. Would it be so crazy to put [an AI bot], or some other advanced AI model, on the dissertation committee, in lieu of the traditional “outside reader”?
My man — resign. But he won’t. I’ve got a fiver that says he likes his college job and his big college paycheck just fine.
There’s more to the article. Unfortunately. But I’m done with it. It’s not about cheating, or even AI. It’s about how teaching — and college specifically — is bad because unions. Or something. Either way, it’s not serious.
Finally, I was curious, since the author is so big on AI’s ability to write better than you/we can. I dropped the piece in a good AI checker and it came back 100% human. Eighteen hundred words about how teaching students to write is a waste of time and it seems he wrote it himself. Interesting.
Chegg Advisor Lectures Universities, Gets Ripped on TV
In May, Michael Burgess, a “senior advisor” to Chegg, wrote a piece (subscription required) that ran in Australia’s Financial Review.
You’ve seen these thoughts a million times. Colleges are stuck. Slow. They’re not embracing technology fast enough. They’re not embracing AI. They’re scared. They don’t partner with business enough. Google is better. Colleges are falling behind.
Tedious. Boring. Torn right from the news of 2004.
I am sure every college and university leader in Australia was excited to hear how an advisor to one of the largest cheating companies thinks they should run their schools. Brilliant.
The Financial Review does identify Burgess as a Chegg advisor, but nothing more.
And it was this lack of context regarding who the author is, and what Chegg is, that drew a throw-down from a national TV show in Australia called Media Watch. You can see it at about the 12:25 mark in this video, sent in by a reader.
Burgess’s condescending and lecturing tone — telling schools what to do — clearly did not help much either.
The clip is only about four minutes long and worth the time, I think. Probably because it’s the kind of thing I try to do from time to time — calling out media for being too quick, too casual with Chegg and companies of similar illicit ilk. Illicit may be too kind; their services are likely illegal in some places.
The TV response deals with Chegg and cheating extensively.
There’s also a little call out to The Cheat Sheet at about the 14:08 mark, which is nice.
I’m not sure that’s how Chegg, or Burgess, saw that playing out — a national audience reminded of Chegg’s cheating model and related legal troubles. But I am here for it.
Class Note: Summer Break Breaks
As threatened, this is the last Issue of The Cheat Sheet that I will personally write for a few weeks. I will start again on Tuesday, August 18. Do not say you were not warned.
To fill this gap, I’ve asked for submitted announcements, articles, thoughts and news — anything related to academic integrity. With the first not-by-me Issue looming this Thursday — day after tomorrow — I am nervous, having received exactly zero anything.
And I confess, I do not know what to do. With nothing to share and really favoring a break, The Cheat Sheet could just go dark for a month. I’d hate that, since our relative consistency is a point of pride — 375 issues and whatnot. But maybe that’s what will happen.
As such, I ask again. If you have anything to share with our nearly 5,000 subscribers, please send it along. Anything 1,000 words or fewer ought to be good.
My e-mail is Derek (at) NovemberGroup (dot) net.
Thank you.