OpenAI/ChatGPT is Selling Tools to Sidestep AI Detection
Plus, a company hawking AI detection bypass tools probably wrote its press release with AI, and an AI detector promptly caught it. Also, International Quick Bites.
Issue 268
To join the 3,741 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. New Issues every Tuesday and Thursday. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 15 amazing people who are chipping in a few bucks via Patreon. Or joining the 31 outstanding citizens who are now paid subscribers. Thank you!
OpenAI and the GPT Store
You may be aware that OpenAI, the maker of ChatGPT, has launched a store in which people can make and sell GPT-based apps.
To be fair, I have spent exactly zero seconds on this GPT store. And I do not expect that number to increase.
But I found it not surprising in the least that, according to this post on LinkedIn, the store is already infested with apps that will help people evade AI detection. From the post:
The "Writing " category of the ChatGPT Store has the following description "Enhance your writing with tools for creation, editing, and style refinement." However, the #1 tool, Humanize AI, is from a company called plagiarism-remover.com. Another one of the featured tools is called "Humanizer Pro" which claims that it "Writes text like a human, avoiding AI detection."
Yup. And I’ve seen other posts and comments along the same lines.
As I have written and maintained for some time now, OpenAI/ChatGPT is no friend of academic integrity — it is designed to plagiarize and deceive. The company has falsely and actively dismissed, discredited, and avoided efforts to detect AI-generated text (see Issue 241). It has promised watermarking but done nothing to produce it. It has actively partnered with cheating companies (see Issue 203).
But I point this out mostly because OpenAI is now selling, and de facto endorsing, tools specifically marketed to bypass AI detection, even though it has said AI detection does not work. Something, as they say, is amiss.
It means that either OpenAI is selling a product it knows does not work, or it was lying about detection. You know where I am on that question.
The poster of the LinkedIn content above added:
Should the GPT Store host and feature tools that are designed to trick people into thinking that AI generated content is real?
Obviously, no. But — again — tricking people is OpenAI’s raison d'être. It’s why it refuses to watermark its text. Why it says AI detection can’t be done. Why it sells tools that defeat detection. Why it partners with Chegg.
Let me say again — ChatGPT and OpenAI are no friends of integrity. And I don’t understand why people don’t understand that.
While We’re Here — Press Release from AI Bypass Company Does Not Bypass Basic AI Detection
Joseph Thibault, founder of academic integrity company Cursive Technology and friend of The Cheat Sheet, has the best and funniest LinkedIn post I’ve seen in some time.
The gist is that yet another company announced that its technology could fool AI detection. This one is called AI Bypass. Anyway, the company put out a press release touting, “a monumental day in the realm of AI technology,” saying:
this cutting-edge tool is designed to flawlessly bypass major AI detectors
Joseph ran the press release through an AI detector and — you guessed it — the release was flagged as 100% likely AI generated.
First, of course it was. Second, that’s not good business sense over there at AI Bypass, though it is probably to be expected from a company that sells deception and manipulation.
But I love it because it’s funny and because it allows me to ask once again — if AI detection does not work, what are these people selling?
Obviously, AI detection works. It works well enough, it seems, to even flag text from a company that claims its tools bypass detection.
Amazing.
International Quick Bites
In Kuwait, education leaders report that, “Despite Crackdown Measures, Exam Cheating Remains a Challenge,” and that 552 were sanctioned for cheating in recent national secondary-level tests.
South Africa is dealing with cheating issues in two of its provinces, according to news reports. Leaders are, “looking at almost 1,000 cases” of organized copying. The same news story also features news about the arrest of 11 people for “selling fake certificates.”
In Kenya, the government has withheld exam results from 600 students in a single province over suspicion of exam “malpractice.” Students and parents deny there was cheating.
In India, a man was arrested for impersonating a girl during a career certification exam. “He reportedly sported a wig, red bangles, a bindi, and a Punjabi suit,” according to the coverage. He was caught when his fingerprints did not match those on record. They checked fingerprints for a career exam.
Also from India, this TV story, though not in English, shows multiple examples of students openly cheating on a university exam by copying and by using mobile phones, notes, and other means.
Class Note
I will be traveling all day Thursday, so it’s entirely possible that I won’t get the next Issue out that day. If I can’t, we will pick back up, right here, next Tuesday.
As always, if you have news, tips, comments, or invitations to fancy parties — you can reach me at derek@novembergroup.net