(359) K12 Dive Misses the Mark on Teens and Cheating with AI
Plus, a PhD research position for work on essay mills. Plus, a look at just one AI humanizer.
Issue 359
Subscribe below to join 4,647 (+11) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
K12 Dive Really Fumbles a Story on AI and Cheating
K12 Dive, a home in a large neighborhood of industry-focused publications from the same company, ran a story not too long ago on a survey of teen students and AI use.
Man-oh-man did they blow it.
Here is the headline:
Teens are embracing AI — but largely not for cheating, survey finds
This caught my eye, because it’s contrary to the accepted reality that students at every level are using AI to skip the actual work of academics. The evidence is beyond the threshold of substantial. A finding such as this, in other words, would be news.
The story is based on a survey of AI use among teens by a pair of researchers at the University of California, Irvine — Gillian Hayes and Candice Odgers. The problem is that the survey and underlying report don’t mention cheating. The words cheating, misconduct and integrity do not appear in the press release or report even once.
If you dig into the K12 Dive story a bit, you do get this:
Hayes emphasized that one of the most surprising findings from talking to teens in focus groups was that, for the most part, teens are not using these tools to cheat.
So, error number one — the idea that younger students are not using AI to cheat is from a focus group, not from a quantitative survey, as the headline says. Or at least implies.
I spoke with Gillian Hayes, one of the researchers, and that’s true — the information about cheating with AI came from focus groups, not from the survey data. But that’s a relatively minor error.
The what-were-you-thinking? error is in taking an assertion and reporting it as fact. Kids — young teens — told a researcher they were not cheating with AI. This in no way means they are not cheating with AI. Yet K12 Dive does not even report this as evidence; it reports it as conclusion. Teens embrace AI, “but largely not for cheating” is the headline.
Now, when I took journalism, this was entry-level stuff. You simply do not report things people tell you as fact, especially when the source is unreliable, biased, or motivated away from honesty.
Tell you what, be a reporter for 15 seconds.
I am actually an alien and the reincarnation of Nero, the fifth emperor of Rome. Do you report, “Alien reincarnation of Roman Emperor Lives in Maryland” or “Crazy Man in Maryland Says He is Alien”? If you report the latter, which you should, 80% of your story should be about how this is not true — you know, in service of reality and accuracy.
This is not complicated.
The story has not one single data point about AI cheating rates, misuse rates, or cheating of any kind. They interviewed not one expert on the topic, which is strange for a story with “cheating” in the headline. Kids said they are not cheating so — they are not cheating. That’s it. Case closed.
The lazy butchery of the K12 Dive coverage aside, let’s take two minutes to consider what it means that teens in a focus group said they were not using AI to cheat.
For one, we know conclusively that people lie about their unethical or disapproved conduct. They rationalize. They hide and deceive (see Issue 276). For obvious psychological reasons.
Hayes, the researcher I spoke to, tends to give her survey subjects the benefit of the doubt on the cheating question. She points to the teens saying that AI has not been a source of conflict between children and parents, something she’d expect to see if the underlying AI use was inappropriate. “Teens and parents are in conflict always, but we see very little conflict around AI tools specifically,” she told me.
She thinks teens are still developing morally, and AI is just another space in which that plays out. “AI falls into a broader moral development world – what is fair, what even is cheating?” she said. But she also told me that parents are pretty clear about where these lines are. “Parents say AI outside is OK, in school it is not OK – there is an ick factor about it – when you are graded on it, when you have to turn it in, it’s ick. For fun is OK,” she said.
I buy most of that. Though I think I am considerably more skeptical about teens being in a figure-it-out mode about AI use and ethics. I think they know. But that is just a hunch. I’m also inclined to think that teens are using AI to cheat, but they are doing it secretly, knowing that such use is disallowed or in conflict with teachers and parents. To me, that’s why the teens don’t report AI as a source of conflict, because they hide it (see Issue 349). And you can’t say it’s a source of conflict while also saying you’re not doing anything wrong.
With that context, I found it insightful when Hayes said about her focus group subjects, “They want to do the right thing, they want to do it in a way that does not lead to social shaming or having parents or teachers upset with them.” I think that’s exactly right. If the choices are a) do not use AI on your work and put in the time and work yourself, b) take the shortcut and accept the shame and conflict, or c) take the shortcut and say you didn’t — well, among teens, place your bets.
Also, for the record, Hayes said her focus groups were online, without the parents in the online room, and that the teens in the group did not know one another. I did not ask if she knew whether parents were in the actual room with their teens when they were participating.
Even so, given the peer and social shaming — as Hayes put it — I’d find it remarkable if any teen told a focus group leader that they were cheating, especially in front of unknown peers. Again, these teens may want to do the right thing, but they also want to avoid shame and conflict and to remain competitive with their peers for grades and academic advancement. If they are using AI inappropriately, expecting them to be honest about it is probably a bridge too far. I mean, I’ve interviewed dozens of college students about cheating. Not one has admitted it. They always say everyone else is cheating — just not them.
I also find it interesting that in the quantitative survey of young students:
72% of teens reported using these tools for entertainment, 63% for homework and only 40% for classwork.
Sixty-three percent are using AI for homework — self-reported. And we think none of that — or very little of that — is cheating? That it’s all by the book? That teens are asking AI for help and not using the answers directly? Maybe I am the only one, but I find that incredibly hard to believe. Like how people bought Playboy for the articles.
That quote above with the percentages was from the K12 Dive coverage, by the way. They reported that 63% of students said they used AI for homework, but that they’re not cheating. I just don’t get it. That’s awful work, to be honest.
In closing, I’ll share two more things from Hayes, who I found to be very thoughtful and candid. She said that she thought that the teens in her survey and focus groups were not like college students. “College students are different in this way – they tend to say that whatever tools I need to use are fine,” she said. Could be. I simply found it interesting.
Hayes also said that she found AI use by teens for academic work to be helpful for them in that it was a way for students to seek help without being embarrassed. The chatbot won’t judge, in other words. “Teens are getting help from AI as a way to not be embarrassed they did not know something,” she said. That makes sense, and I think that’s great. Figuring out where the non-judgmental learning is happening and where the cut-and-paste answer shortcuts are happening, though, is going to be difficult. It already is difficult.
PhD Position to Study Essay Mills
There is a research PhD position focusing on “paper mills,” offered by the good people at:
The Centre for Science and Technology Studies (CWTS) at Leiden University, in collaboration with the University of Sheffield
They mean essay mills. In the United States, paper mills make paper. Anyway, they want the candidate to:
carry out research on paper mills and related forms of systematic manipulation in research and publishing.
More:
The PhD candidate will produce original research that gains a deeper understanding of the scale and operation of paper mills, as well as of the complex interplay between the superstructure of incentives and varying research norms in scholarship that enable them.
Sounds like a great job and important work.
A Look at One AI Spinner
They are everywhere, like gnats after a spring rain — companies explicitly and directly offering to help students cheat by changing text composed by AI into text that will fool AI detection systems. They are called humanizers, or spinners. But they are academic fences — helping the dishonest launder and clean their stolen goods.
I cannot recall where I saw this one — Phrasly.ai
I share it because I am not sure how much time people, even highly interested and motivated people, spend in this world of intentional, designed deceit.
Like most others, it starts with a free AI detector, front and center:
The Phrasly AI checker boasts the highest accuracy rate in the industry. Paste your content below to instantly check for AI-generated text from ChatGPT, Claude, and more.
I say this all the time — a front-facing AI checker is a big tell. Like, I did not murder anyone in my kitchen this weekend but I’m going to spray luminol everywhere and scrub with bleach. You know, to be safe. So no one can accuse me of it. Like the old-school radar detectors people used to use. I’m totally not speeding, but I sure would like to know if anyone is watching my speed. Just to be safe.
Sure.
It never occurs to anyone that a company such as this one — one that sells a service to help you beat AI detection — may lie to you about whether AI can be detected in your writing. Like the guy who comes to your house with a fancy gadget to check for radon leaks and — OH MY GOODNESS, DANGER! — you have a major radon leak. Thank goodness that the same guy can fix it, with a package savings plan this week. Lucky you.
Phrasly.ai is having a “spring special” right now — 45% off.
What a joke.
But I’d prefer to call your attention to the outright brazen approach of AI humanizers. Phrasly.ai says, right on the homepage:
Need to bypass AI detection? Automatically humanize your content with Phrasly!
Use Phrasly's AI rewriter to eliminate robotic phrasing so your content appears human-written. With Phrasly, you can easily bypass Turnitin and make your content undetectable by AI checkers.
They are not being sly or subtle. They sell a service to bypass detection and make your content appear human.
They convince you that AI checkers are the problem:
But AI checkers often flag content for review by instructors and discerning clients—even when it’s human-written.
I love, by the way, how they use “discerning clients.” How dare someone paying for your work be discerning.
But also note how they use “often” to describe errors. I simply cannot overstate how much damage the “false-positive” paranoia has done to honest work. Picking apart detection with shoddy research and invented human tragedy has enabled these bypass pirates by selling safe passage across waters that were never that dangerous.
Also, like every other cheating provider and cheating enabler, this one is somehow anti-cheating. In the FAQ about whether the service is legal, the company says:
Yes, Phrasly is a legal and ethical AI writing tool. We do not condone or support plagiarism, and our tool is designed to help students improve their writing skills, not to cheat.
Legal and ethical. Wow.
Not only is Phrasly not legal everywhere, it’s ethical nowhere. It’s also unclear how getting AI to rewrite AI writing for the purpose of bypassing detection helps anyone write better. And they don’t condone it, they just want you to get away with it. We don’t condone murder, but if you need fast clean-up, call us. They literally say they are:
The #1 Solution to Bypassing AI Detectors
They even have a blog with such helpful topics as:
How to Write an Informative Essay and Avoid AI Detection: 8 Simple Steps for Success
And:
7 Ways to Craft Better AI-Generated Responses That Pass AI Detection
And:
Can Turnitin Detect ChatGPT?
Yes, they say. But not if you use their services. And:
Quillbot vs. AI Detectors – Does it Bypass AI Detectors?
I just love how they pick on other cheating providers like Quillbot.
Anyway, it goes on and on. The company even has an affiliate program, where you can earn passive income by helping people get away with fraud. How nice.
Again, my aim is to share how forward and obnoxious these companies are and to remind you that this is just one of them.