Student Newspaper Promotes Cheating Services for Cash
Plus, a high school student uses AI, gets punished, his parents sue. Plus, a writer tests AI detection again. Spoiler alert: they work.
Issue 317
Subscribe below to join 4,152 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 41 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
Ball State University Student Paper Advises “Bypass AI Detectors”
The Daily, the student newspaper at Ball State University in Indiana, ran an article recently with this headline:
Best Way to Remove AI Plagiarism from Text: Bypass AI Detectors
So, that’s pretty bad. There’s no real justification that I can imagine for advising students on how not to get caught committing academic fraud. But here we are.
The article has no byline, of course. And it comes with the standard disclaimer that papers and publishers probably believe absolves them of responsibility:
This post is provided by a third party who may receive compensation from the products or services they mention.
Translated, this means that some company, probably a soulless, astroturf digital content and placement agent, was paid by a cheating provider to place their dubious content and improve their SEO results. The agent, in turn, pays the newspaper for the “post” to appear on their pages, under their masthead. The paper, in turn, gets to play the ridiculous and tiring game of — “that’s not us.”
We covered similar antics before, in Issue 204.
Did not mean to rhyme. Though, I do it all the time.
Anyway, seeing cheating services pitched in a student newspaper feels new, and doubly problematic: it not only boosts the digital credibility of companies that sell deception and misconduct, it may actually reach their target customers. It’s not ideal.
I did e-mail The Daily to ask about the article/advertisement and where they thought their duties lay with regard to integrity and fraud. They have not replied, and the article is still up.
The article itself is about what you would expect. It starts:
Is your text AI-generated? If so, it may need to be revised to remove AI plagiarism.
You may need to remove the plagiarism — not actually do the work, by the way — because, they say, submitting “AI-plagiarized content” in assignments and research papers is, and I quote, “not advisable.”
Do tell, how do you use AI to generate text and remove the plagiarism? The Ball State paper is happy to share. Always run your paper through an AI detector, they advise. Then, “it should be converted to human-like content.”
The article continues:
Dozens of AI humanizing tools are available to bypass AI detectors and produce 100% human-like content.
And, being super helpful, the article lists and links to several of them. But first, in what I can just barely describe as English, the article includes:
If the text is generated or paraphrased with AI models are most likely that AI plagiarised.
If you write the content using custom LLMs with advanced prompts are less liked AI-generated.
When you copied word-to-word content from other AI writers.
Trying to humanize AI content with cheap Humanizer tools leads to AI plagiarism.
Ah, what’s that again?
Following that, the piece offers step-by-step advice for removing AI content: run the text through AI detectors, paste the flagged content into a different tool, and:
Click the “Humanize” button.
The suggested software, the article says:
produces human content for you.
First, way creepy. Second, there is zero chance that’s not academic cheating. Covering your tracks is not clever, it’s an admission of intent to deceive.
And, the article goes on:
If you successfully removed AI-generated content with [company redacted], you can use it.
Go ahead, use it. But let’s also reflect on the obvious — using AI content to replace AI content is in no way removing AI content.
Surprising absolutely no one, the article also suggests using QuillBot, which is owned by cheating titan Course Hero (now Learneo), and backed by several education investors (see Issue 80).
Continuing:
Quillbot can accurately rewrite any AI-generated content into human-like content
Yes, the company that education investors have backed is being marketed as a way to sneak AI-created academic work past AI detection systems. It’s being marketed that way because that is exactly what it does. These investors, so far as I can tell, seem not the least bit bothered by the fact that one of their companies is polluting and destroying the teaching and learning value proposition they claim to support.
As long as the checks keep coming - amiright?
After listing other step-by-step ways to get around AI detectors, the article says:
If you use a good service, you can definitely transform AI-generated content into human-like content.
By that, they mean not getting caught cheating.
None of this should really surprise anyone. Where there’s a dollar to be made by peddling unethical shortcuts, someone will do it because people will pay.
Before moving on, let me point out once again the paradox — if AI detectors do not work, as some people mistakenly claim, why are companies paying for articles such as this one to sell services to bypass them? If AI detection were useless, there would be no market at all for these fingerprint-erasure services.
Student Uses AI on Assignment, Gets Bad Grade, Parents Sue
There’s so much wrong with this, I am in knots about where to start.
For ease and brevity, I will skip the part about parents suing a school over a child’s grade. Although I do think the increasing litigiousness around academic misconduct — cheat, get caught, get sanctioned, sue — is a real, and really serious problem.
Moving on, several outlets have the story.
According to the coverage, a student in an AP History course submitted an assignment partly created, or at least aided, by generative AI. How much, and for what, is an open question. The student’s parents say the AI was used for research and an outline. Though, if that were true, it’s unlikely the school or teacher would have known. Absent an outright admission by the student, the most likely way this was discovered is that AI-generated content was in the submitted assignment itself. We can’t know because, as is common in these situations, the school is not commenting.
The student who used AI was assigned a grade of 65 and a Saturday detention. The parents say the low grade has blemished his transcript and lowered the likelihood that he will be admitted to an elite college. They also say that the school had no policy on AI use and lacked the authority to create one, since the state had not issued one. Although the reporting does say that:
the school's handbook does forbid the use of "unauthorized technology" and "unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one's work" to complete assignments.
I’m obviously not the judge who will decide this case, but “unauthorized technology” seems to cover it. If the teacher did not authorize generative AI, even for research, which the D grade and detention definitely imply, then I do not know what there is to debate.
It’s also curious to me that the coverage of the legal challenge, at least, does not say the parents are claiming the AI detection system did not work. It’s true that we don’t even know whether one was used here, though, again, I don’t know how else the teacher would have known. The claim, it seems, is that the policy was not clear and specific enough.
However this particular case plays out, it ought to be a colossal warning to schools at all levels to review and tighten their AI use policies related to misconduct. Because surely, more of this kind of thing is coming (see Issue 305 and Issue 279).
A Writer Tests AI Detectors. Again. Says They Are Getting “Dramatically Better”
A writer over at ZDNet decided to test AI detectors. Again.
I’m not going to spend too much time on it, for a few reasons, but here are the headline and subheader:
I tested 7 AI content detectors - they're getting dramatically better at identifying plagiarism
Three of the seven AI detectors I tested correctly identified AI-generated content 100% of the time. This is up from zero during my last round of tests.
I am not putting much stock in that “up from zero,” because that unscientific test was run in January 2023, just weeks after ChatGPT launched. Detection technology was sparse and weak — the big girls and boys had yet to join the race. Further, that first test checked only three systems, none of which I know well at all.
In October of last year, the writer added three more detectors, including a few that are well-known to be awful. Then, for this most recent test, he added a few more:
I'm adding QuillBot and a commercial service, Originality.ai, to the mix.
Adding QuillBot as a detection tool, and testing it as one, costs this exercise significant credibility. Also, as we see all the time in these things, the best-in-class tools are not tested.
Anyway, in testing human-written and AI-written text, the story’s author found good results on the human content. He says “5 of 7 correct.” I actually count six, but whatever. The clear failure was:
QuillBot: 45% of text is likely AI-generated
Yikes. Look at QuillBot firing up false positives over there. But even with junk systems, and aside from QuillBot, the results are decent.
For the AI text, the results were passing, too. You can go check them if you like. The story says:
In our previous runs, none of the tests got everything right. This time, three of the seven services tested got the results correct 100% of the time.
Again — not great tools. But even here, three systems were right across the board. AI detection works, even though so many people believe exactly the opposite.
Our writer also offers:
While the overall results have improved dramatically, I would not be comfortable relying solely on these tools to validate a student's content.
No one should ever rely solely on AI detection tools, and no one suggests doing so. In fact, the universal recommendation is exactly the opposite.
Finally, I’ll share this bit where the writer asked ChatGPT to list three “plagiarism checkers” that can detect ChatGPT. ChatGPT started its response with:
It is worth noting that there is currently no plagiarism checker that is specifically designed to detect text generated by ChatGPT or other language models. While some plagiarism checkers may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, there is no guarantee that they will be able to definitively determine whether a piece of text was generated by a language model.
Again, we see that OpenAI/ChatGPT is invested in the idea that its generated text cannot be reliably or accurately detected. “There is no guarantee that they will be able to definitively determine,” they say. It’s a talk track they have been on for some time (see Issue 241). It’s also easy to understand why: if it became known that ChatGPT text is easy to detect, OpenAI’s value proposition would take a major hit.