(349) The Wall St. Journal: There’s a Good Chance Your Kid Uses AI to Cheat
Plus, Indiana Fire Chief loses his licenses for exam cheating ring. Plus, a photo you must see, from India. Plus, more on cheating providers.
Issue 349
Subscribe below to join 4,512 (+34) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, though patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year.
The Wall St. Journal Digs In on AI and Academic Cheating
The Wall St. Journal recently ran a major piece on AI and academic cheating (subscription required).
It’s a very solid piece, and keeping the issue of academic misconduct in the public eye is a good thing. The headline and subheader are:
There’s a Good Chance Your Kid Uses AI to Cheat
More students are hiding their secret weapon from parents and teachers, racking up top grades while AI chatbots do the work
I mean — true. There is a good chance that any given student is cheating with AI, taking credit for work that is not their own, and hiding it.
Disclosure: one of the writers of this piece interviewed me several months ago. I was not quoted, which happens.
The piece starts:
A high-school senior from New Jersey doesn’t want the world to know that she cheated her way through English, math and history classes last year.
Again — true. My only add is that I’ll bet this senior is headed to college somewhere. And I’ll open the floor — anyone want to guess what she will do when she gets there?
Anyway, the piece continues that practices like this student’s are:
allowing a generation of students to outsource their schoolwork to software with access to the world’s knowledge.
Let’s be clear what “outsource their schoolwork” means. It means not doing the work of learning. It means, in all likelihood, not learning. It means academic fraud and credential theft — taking something you did not earn.
A generation of students, the Journal says.
WSJ writes:
teachers and parents are left on their own to figure out how to stop students from using the technology to short-circuit learning. Companies providing AI tools offer little help.
I’ll go further. Companies providing AI tools do not want to help because they do not want to stop the cheating. Cheating is their use case, their customer base. But my disagreement is one of degree, not of fact. Again — true.
Here’s a longer section:
The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments.
The student turned to OpenAI’s ChatGPT and Google’s Gemini, to help spawn ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection.
The student was caught only once.
Work was boring or hard. She wanted a good grade.
Work is hard. I want money. So, I’m just going to take it.
Also, quickly, it’s academic malpractice to give a take-home test. I just …
Just say you don’t care.
Moving on:
Around 400 million people use ChatGPT every week, OpenAI said. Students are the most common users.
Again, yes. ChatGPT’s core users are students. Open floor again — any guesses as to why that is?
More:
Of students who reported using AI, nearly 40% of those in middle and high schools said they employed it without teachers’ permission to complete assignments, according to a survey last year by Impact Research. Among college students who use AI, the figure was nearly half. An internal analysis published by OpenAI said ChatGPT was frequently used by college students to help write papers.
Is anyone paying attention? Is this thing on?
The Journal quotes OpenAI, maker of ChatGPT:
AI companies play down the idea that academic dishonesty is their problem. “OpenAI did not invent cheating,” said Siya Raj Purohit, who belongs to the education team at the company. “People who want to cheat will find a way.”
I cannot adequately articulate how much this quote enrages me.
No one said OpenAI invented cheating, you absolute child.
Moreover, this does not excuse intentional malicious conduct. This is a weapons manufacturer saying, “we didn’t invent war, people will find a way.” No, they just sell the guns and bullets. This is Purdue Pharma saying, “we did not invent opioid addiction, people will find a way to get high.” Can’t blame us for making billions of dollars by flooding the market with cheap, accessible narcotics.
It is simply stunning.
Especially since it is well documented that OpenAI can limit cheating, or other illicit use of its products, but it has chosen not to (see Issue 308). Or worse, outright lied about the ability to do so (see Issue 241).
Example, from this story:
OpenAI has developed a tool that can reliably detect ChatGPT-generated writing but has declined to release it, the Journal found. An internal survey found that nearly 30% of users would use ChatGPT less if OpenAI rolled out the feature.
Exactly. OpenAI would risk losing nearly 30% of its users if the text it produced were easy to find. Open floor again — anyone want to raise their hand on this one?
Let me be as clear as I know how to be. OpenAI is doing something bad, by choice.
Also, before I let this go, Purohit, the OpenAI spokesperson, is cited as saying:
Perhaps critical thinking and communication skills should be measured by the ability to use AI well, she said. “What is the value of an essay?” she asked, rhetorically.
This person is on OpenAI’s education team.
Putting a cherry on it, the article also reports:
[OpenAI’s] vice president of education, Leah Belsky, suggested that schools combat cheating by welcoming AI into the classroom.
Combat, by opening the gates. Somewhere, George Orwell is laughing.
Meanwhile, the WSJ also quotes John B. King Jr., “chancellor of the State University of New York system and the former education secretary,” who was on a panel with OpenAI’s Purohit. King says:
“There are probably lots of students, K-12 and higher ed, who used ChatGPT to do their homework last night without learning anything,”
King continued:
“That’s scary.”
First, what in Anne of Green Gables is the SUNY Chancellor doing on stage with OpenAI?
But also — “that’s scary”?
If only he was in a position to do something about it, like if he was a Chancellor or something. That would be awesome.
With respect, this is not a serious thing to say. Gee whiz, this is probably happening and it’s scary. Fiddlesticks.
Just imagine, as a thought experiment, the head of the FDA or the DEA on stage with executives from Purdue Pharma saying something like, “There are probably lots of people who used OxyContin to get high tonight, without a medical need. That’s scary.”
Lifting the line from the movie Office Space:
(breathe)
More from WSJ:
Jacob Moon, a high-school English teacher in Coosa County, Ala., said that he rarely used to see evidence of cheating in his class. So far this school year, though, Moon has caught roughly two dozen students using AI for assignments, including essays, he said.
“What scares me as a teacher the most,” Moon said, is “what happens when they go into college and the workforce?”
Chris Prowell, a sophomore at the school, said classmates use AI to complete assignments all the time. But he doesn’t, fearing he would be ill-prepared for college. Rampant AI cheating, he said, “discredits people that actually work hard on something.”
Another great example, from the Journal:
Some educators are skeptical that students can use AI responsibly while doing work at home. Joshua Allard-Howells, a high-school English teacher in Sonoma County, Calif., said AI cheating spread quickly among his students last year.
He now requires them to write their first drafts in class by hand, with computers and phones prohibited and out of reach. The change has produced an unexpected benefit, he said: Students take more time on their work, and their writing is more authentic.
The downside: he can’t assign homework. “If I do, it just gets cheated on,” he said.
Here’s another:
Patricia Webb, an English professor at Arizona State University, typically bars AI use in her classes. Yet on some writing assignments, she suspects 20% to 40% of her students use it anyway, based on the writing styles she observes.
But without definitive evidence, she said, she rarely confronts those students. “If you can’t prove it, you can’t make the charge,” Webb said. That means she often awards passing grades for work she is almost certain was generated by AI.
This is great work from the Journal.
I’m glad as well that the article touches on “humanizers,” or as the Journal puts it:
Dozens of companies advertise apps they claim can write essays and complete homework with AI software that can’t be detected.
The paper highlights one that “was valued by investors at nearly one billion dollars.”
If you can, jump to the article to read what WSJ reported about one such cheating company, Caktus AI. I’ll share only that the CEO of this company said:
“I can’t control the way the students interact with the platform.”
Sure, it’s not like he invented cheating or something.
And by the way, he can control the way students interact with his platform. It’s profitable not to; he’d rather have the money.
In wrapping up, the WSJ turns to AI detectors — two specifically, Turnitin and Pangram. And quite fairly, I think. It does not say that AI detection is perfect; it’s not. It also does not say that AI detectors are unreliable or inaccurate. The good ones are reliable and accurate. The article also avoids the nonsense about “false positives,” which is great.
The article ends:
Carter Wright, a high-school English teacher outside of Houston, Texas, said he has spent hours chasing AI plagiarism, using free trials of detector software and checking edit histories in students’ Google Docs. His students always seemed one step ahead.
“It’s almost impossible for me to stop all the cheating unless we were to completely get rid of the technology,” Wright said.
Follow-Up: Indiana Firefighter Has Licenses Revoked for Exam Cheating Ring
In Issue 205, we covered an Indiana fire chief who was accused of helping recruits and others cheat on their license and certification exams. That chief has now had his state firefighting certifications revoked by a review board.
From the coverage:
The Indiana Board of Firefighting Personnel Standards and Education voted Thursday morning to revoke all 26 firefighting certifications earned by Muncie Fire Capt. Troy Dulaney — a punishment that far exceeds recommendations by an administrative law judge and a decision that Dulaney and his attorneys call improper.
The revocation is because:
The Indiana Department of Homeland Security investigated and determined Dulaney had pressured new recruits to share test questions and answers with him after they took their state and national certification exams — a practice which violates state and national testing rules. According to investigators, Dulaney also coordinated a scheme to share actual test questions and answers with other firefighters and EMTs, which is another violation of state rules.
The chief’s lawyers say they will appeal the decision in court. Meanwhile:
The city of Muncie has filed formal misconduct charges against Dulaney, who now faces a separate disciplinary hearing before the Muncie Fire Merit Commission to determine whether he’ll be fired by the city.
Also, meanwhile:
He has been on administrative leave for nearly two years — collecting his full salary — while his appeals play out.
A Photo and Cheating — from India
These two things are true.
Sometimes, a photo is actually worth a thousand words, and academic cheating in India is next level.
With that, authorities in India busted another exam cheating syndicate, this one taking remote exams for students. The paper says the cheating is bad.
But you should click the story for the photo.
Class Notes, Kinda — On “Cheating Providers”
In the last Issue of The Cheat Sheet, I referred to OpenAI, Chegg, and Grammarly as “notable cheating providers.”
Since comments are open for The Cheat Sheet, a reader commented:
Lumping Chegg, OpenAI, and Grammarly together as “cheating providers” is absurd.
You can read the full comment, and my responses, if you’d like.
But the comment got me thinking about what makes a cheating provider and, in one of my responses, I wrote this — which I think is worth sharing:
To me, being a "cheating provider" is a simple, three-part test. A company or person is a cheating provider if:
1) Their products or services are used by students to cheat, to sidestep learning and obtain grades or credit based on work that does not represent their honest effort. If students use a company's products to cheat, in other words.
2) The company knows this. They are aware that their products and services are being used to cheat.
3) The company could take steps to stop, or at least limit, the use of its products for cheating, but does not.
If a company meets all three, they are knowingly selling or providing cheating services, allowing students to cheat, enabling the misconduct — usually, for money. That makes them a cheating provider.
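Since the test is just a conjunction of three yes-or-no conditions, here is a minimal sketch of the logic in code, purely as an illustration (the function and parameter names are mine, nothing official):

```python
def is_cheating_provider(used_to_cheat: bool,
                         company_knows: bool,
                         declines_to_stop: bool) -> bool:
    # All three elements must hold; a single "no" and the label does not apply.
    return used_to_cheat and company_knows and declines_to_stop

# A company whose products are used to cheat, but which genuinely
# could not detect or limit that use, fails the third element:
print(is_cheating_provider(True, True, False))  # False
print(is_cheating_provider(True, True, True))   # True
```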
Chegg, Grammarly, and OpenAI meet all three elements. Their products are used to cheat, they know it, they decline to stop it.
Disagree?