OpenAI/ChatGPT Does Not Care About Integrity
Plus, a class note. Plus, an HR publication writes on AI cheating cases, very poorly.
Issue 308
Subscribe below to join 3,950 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 18 amazing people who are chipping in a few bucks via Patreon. Or joining the 38 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
Class Notes
You may have noticed that I’ve been unable to maintain the advertised cadence of two issues of The Cheat Sheet per week.
Which feels like a decent place to mention that it’s just me. I have no staff, assistants, or interns. Two or three very generous people volunteer to proofread the newsletter from time to time, for which I am very grateful. But that’s it.
Mostly, the absence has been due to my investment in my new company, which I can now share publicly.
It’s called AdStorm (www.MyAdStorm.com). It’s been in development for seven months and has involved three patent applications, sustained travel, the faith of investors, and countless hours of persistence. AdStorm opened to the public last week, and I really do think it’s going to change things in a very stale, insular, heavily guarded, though massively important niche — political TV advertising.
Since political candidates began advertising on TV more than 50 years ago, the process has barely evolved at all. Candidates and issue committees court wealthy donors and spend their money on political consultants who write TV ads, and on TV time to run them. The chain of people who decide which TV ads get made, and how they are shared, has been restricted to rich people and invested insiders.
Worse, to afford TV ads, candidates have had to barrage all of us with pleas for money. But the money that regular outsiders give to candidates rarely actually goes on TV. And what little does goes to air ads written by the same insiders, talking to the people they want to talk to about the things they want to talk about.
It’s a terrible, exclusive, wasteful system.
AdStorm breaks that system.
Now, and for the first time, anyone can decide what ads go on TV, as well as where and when they air. After signing up for a free account, any AdStorm user can browse TV ads, pick one with the message they care about, and pay to put that ad on TV in selected, important markets — cutting out the big donors and consultants who would prefer to use our money and make those decisions for us.
Just as important, by making TV ads accessible to anyone, AdStorm will also break the lock on ad creation. Now, distributing good ads — getting them seen by voters — is no longer an impediment. Soon, anyone will be able to create political TV ads and get them on TV, if other people want to air them.
This has zero to do with academic integrity. But I am very proud of what we’re doing and delighted to share it with you. If you’d like to support the effort, or make a real difference in these crucial American elections, I invite you to go to www.MyAdStorm.com, sign up for free, and put an ad on TV. Or several ads. You can also follow AdStorm on social media. The links are on the homepage.
Whether you do or not, I appreciate your indulgence.
Now back to our somewhat regular programming.
OpenAI, Maker of ChatGPT, Is Not Interested in Integrity
It’s been a year since I started pointing out that OpenAI, the maker of ChatGPT, is not interested in academic integrity and honesty, or perhaps any other kind.
It should have been obvious why. And now it is.
Like Chegg and many others before it, integrity is bad for OpenAI’s business. It’s that simple. The company has admitted it.
First, a quick review and reminder of some key mileposts. For about two minutes, OpenAI made a tool that could detect AI-generated text. They closed it and lied about the effectiveness of AI detectors.
Earlier than that, OpenAI said they could, and would, watermark the text their chatbot spits out, making it easy to spot. Now, it turns out that OpenAI not only explored watermarking, they built a tool to find it — a tool that recent reporting by the Wall Street Journal says is 99.9% accurate. They have had this tool for a year.
But the company won’t release it. Because allowing people to spot AI garbage will undercut their ability to sell AI garbage. Or more accurately, it will undercut the ability of their customers to pass off AI garbage as something of value — to lie, in other words. And, it also turns out that allowing people to deceive others with AI is a big part of OpenAI’s business, a business they’d prefer not to lose.
That’s not conjecture.
Consider this headline from, of all places, The Verge:
OpenAI won’t watermark ChatGPT text because its users could get caught
Or this one over at Business Insider:
Yup, AI is basically just a homework-cheating machine
Or this, from Gizmodo:
OpenAI Reportedly Hesitant to Release ChatGPT Detection Tool That Might Piss Off Cheaters
To which I say, welcome to the conversation, WSJ, Verge, Gizmodo, Business Insider and others.
The Business Insider piece reports:
Two new pieces of information point us toward a conclusion we probably all knew in our hearts: Chatbots and generative AI are a bonanza for students looking for writing help.
I mean, fair. But this conclusion did not need new information. Some of us have been saying it for quite some time now.
The piece cites reporting from the Washington Post showing that about 18% of ChatGPT’s users enlist it for “homework help.”
Cough.
On top of which is the new news that OpenAI can make its text clearly detectable, but won’t. Citing the WSJ reporting, Business Insider writes:
But here's the other part: The report said, "OpenAI surveyed ChatGPT users and found 69% believe cheating detection technology would lead to false accusations of using AI. Nearly 30% said they would use ChatGPT less if it deployed watermarks and a rival didn't." (Emphasis mine.)
To be clear, the bold and parenthetical are from Business Insider, not me.
Many, many people have invested considerably in the idea that AI detection is a threat because of “false accusations.” Aided by well-meaning but misguided academics and some press, the biggest investors have been the people who sell AI.
Cough-rammerly, to name one.
Anyway, the real deal is that OpenAI knows that allowing people to detect their product will cost it users, and not just a few. Let me rephrase — OpenAI knows a big chunk of its users are using their products for fraud, and they want that to continue.
Again, if you’ve been following this at all, this is no revelation.
The reporting in The Verge says of OpenAI’s holding back its AI detection:
the company is divided internally over whether to release it. On one hand, it seems like the responsible thing to do; on the other, it could hurt its bottom line.
And there it is. OpenAI could do “the responsible thing.” Or it could make money. And the company can’t decide.
But before the prosecution rests, here’s this, from the coverage in Gizmodo:
[OpenAI] claimed that while [the detector is] good against local tampering, it can be circumvented by translating it and retranslating with something like Google Translate or rewording it using another AI generator. OpenAI also said those wishing to circumvent the tool could “insert a special character in between every word and then deleting that character.”
So, yes. Not only has OpenAI actively sat on a tool that could curtail academic and other kinds of fraud, but when news of the tool became public, the company put out a statement telling everyone how to beat it.
Let me repeat for those who may need to hear it again — when it comes to academic integrity, the value of learning, the worth of effort, and the entire proposition of academic credentials, OpenAI is not your friend. They are on the other side. And they have been since day one.
HR News Writes on UK Universities and Penalties for Using AI
A publication called HR News has done up a story on the penalties that UK universities have been handing out for academic misconduct related to AI use.
It shows that Birmingham City University (402) and Leeds Beckett University (395) have handed down the most sanctions for misusing AI between 2022 and 2024.
I share it here because it’s the kind of reporting that is rather dangerous, in my view. It says, for example:
Students at Birmingham City University have turned to AI the most to cheat, with 402 total instances in the last two academic years. The university suffered the most when artificial intelligence first boomed it seems, with 307 of those instances taking place in 2022/2023.
In comparison, Birmingham Newman University reported zero offenses.
Maybe it’s just a lack of critical thinking showing up in the writing, or perhaps a deeper misunderstanding of how academic integrity enforcement works, but the way HR News reports that is simply wrong. It should be obvious that handing down the most sanctions does not mean students “turned to AI the most to cheat.”
For one, population sizes matter, which means rates are probably more informative than raw counts. But more importantly, at 402 citations, I’d wager Birmingham City is doing it right. Birmingham Newman, with zero citations, is more than likely a disaster. In other words, a lack of cases and citations does not mean a lack of cheating. It means no one is catching it.
HR News says that joining Birmingham Newman with zero cases are:
On the other end of the scale, UK universities that reported zero penalties for students using AI to cheat include University of Cambridge, Royal Conservatoire of Scotland, University of London, University of Gloucestershire and Royal College of Art.
So, again, zero cases is a strong indicator that things are off the rails.
I mean, I like that an HR publication is sharing misconduct information, because schools won’t have much motivation to take cheating seriously unless and until it starts to impact the perceived quality of their graduates and programs. If HR managers begin to pass over graduates from schools where cheating is essentially allowed, those schools may take action.
Unfortunately, here, HR News has it completely backwards and has the incentives upside down. The last thing we want — the last thing HR departments want — is for schools to stop looking for, stop reporting on, and stop sanctioning academic misconduct. And although it does not come right out and say it, that’s exactly what this piece encourages.
If you’re hiring, you want graduates from the schools with high rates of misconduct inquiries and sanctions, because if your job candidate is from such a school and has no record of misconduct, that’s the best indication that they personally did not cheat. I’m not confident that hiring leaders get that. And that’s the story that HR News should have written.