393: "The AI Cheating Epidemic"
Plus, a professor in Australia sounds the alarm. Again. And an anti-cheating math tutor app gains traction.
Issue 393
Subscribe below to join 4,870 (+4) other smart people who get “The Cheat Sheet.” New issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
The AI Cheating Epidemic
That’s the headline of this article in First Things, a religiously centered magazine. The piece is by Jeremy S. Adams, “an award-winning civics teacher and writer from Bakersfield, California.”
Here are the first two paragraphs:
Last spring, some of my graduating seniors felt obligated to take me aside before graduation, as if I were a naive child, and pronounce a dark truth in the era of widely available AI technology: You teachers can’t win. We will find a way to take the easy path. Every time.
At the graduation ceremony (which I did not attend), other students approached a colleague of mine, not in a spirit of guidance but of arrogance: We cheated the whole year, and you never caught us. They had escaped the noose of accountability. They weren’t relieved. Instead, they reveled, seeming to relish being scholastic frauds.
I wish I had encouraging words to add.
The next sentence:
We are in the era of pervasive, brazen student cheating.
Later, the piece asks an important question:
Journalists should be asking teachers a different question about student cheating. Not “Do they do it?” or “Is it becoming more widespread?” But: “Do your students feel bad about turning in work that is not their own? Do they feel any shame?”
I think I know this one.
The article continues:
Increasingly, alarmingly, the answer is simply no. When they get caught red-handed, they don’t blush. They certainly don’t apologize or offer any authentic excuses. There is no sense that what they are doing is immoral. At best it’s amoral. Their reaction to getting caught is best described as annoyance, the reaction one might have for being singled out and ticketed among a jaywalking mob.
One point for me.
Adams continues:
This is the more insidious harm embedded in the modus operandi of the American student. When a corrosive habit becomes so widespread that it feels normal, the few who resist are left not only disadvantaged, but resentful—punished, ironically, for their integrity.
This. All day.
Not the “insidious harm,” although I do agree. It’s the resentment of those who do the right thing that I want to underscore. To me, these are the most overlooked casualties of cheating.
When misconduct is widespread, brazen, and rewarded without consequence, those who follow the rules and put in the effort resent it — quickly and deeply. Why stand in line and pay the fare, when so many people are jumping the turnstiles right in front of the police? Seriously, what’s the point?
Also overlooked is that these “few who resist” don’t just resent the cheaters — they resent the entire system, including and especially those who are there to protect it, but do not. Quickly and inevitably, the entire exercise becomes a farce, for everyone. And, as our author says, I genuinely fear this is the road on which organized, credential-based education is driving — accelerating to irrelevance. Due to its unwillingness to enforce its own rules, to insist on protecting its own value.
But you’ve heard me say all this before. Adams continues:
Moral drift, normative desensitization—call it what you will—is now unfolding in our schools at such a scale and speed that it marks a titanic fork in the road for the future of American education.
More?
All this rhapsodizing about substantive learning is moot if our young people do not feel morally compelled to present their work as their own. The constant use of AI to perform the tasks they ought to perform for themselves makes a farce of the entire enterprise of education.
I wrote “farce” before I read that he had. I’m not going to change it. It’s the right word.
I also love this line:
Our students have forgotten that there is immeasurable value in the authenticity of failure.
It’s a strong piece, even if it is a little literally preachy — centered, as it is, on character and morality. Say what you like about that, it’s not wrong.
Australian Professor: AI is “Putting Australian Degrees in Peril”
A professor in Melbourne, Australia, writing in The Australian (subscription required) under a pseudonym, says:
a spectre is haunting our classrooms; the spectre of artificial intelligence.
Before that, the author says they have:
been a frontline teaching academic at the University of Melbourne for nearly 15 years. I’ve taught close to 2000 students and marked countless assessments.
They continue:
Students are turning to AI to write their essays and it has become the new norm, even when its use has been restricted or prohibited.
The professor goes on:
While we know AI cheating is happening, we don’t know how bad it is and we have no concrete way of finding out. Our first line of defence, AI detection software, has lost the arms race and no longer is a deterrent. Recently, I asked ChatGPT to write an essay based on an upcoming assessment brief and uploaded it to Turnitin, our detection tool. It returned a 0 per cent AI score. This is hardly surprising because we already knew the tool wasn’t working as students have been gaming the system.
I cannot verify this, of course. But that’s not good. It’s possible that Turnitin has not caught up to the latest ChatGPT outputs. Or that it just missed. I can’t say.
Still, turning it off — as some schools have unbelievably done — is worse. Having a bad door lock is better than telling the world you have no lock whatsoever. Even if Turnitin is frequently missing on straight AI detection, which I doubt, not checking for AI guarantees a lack of accountability while also demonstrating a lack of concern. Raising the risk of detection, even minimally, matters.
Anyway, back to our anonymous professor, who writes:
Prosecuting a case of academic misconduct is becoming increasingly difficult. Many cases are dismissed at the first stage because the AI detector returns a low score that doesn’t satisfy the threshold set by management. The logic seems to be that we should go for the worst offenders and deal with the rest another way. Even with this approach, each semester the academic integrity team is investigating a record-breaking number of cases.
My hunch is that something is lost in translation somewhere. I’ve never heard of any “management” setting an AI score threshold below which it will not act on a case of suspected misconduct. This feels like a bad idea. But again, I cannot verify this either.
And if this is true, this is news — at least to me:
To deal with the inundation of AI cheating, the University of Melbourne introduced a new process for “lower-risk” academic integrity issues. Lecturers were given discretionary powers to determine “poor academic practice”. Under this policy, essays that look as if they were written by AI but scored 0 per cent could be subject to grade revision. Problem solved, right? Not even close.
Tutors are our second line of defence. They are largely responsible for classroom teaching, mark assessments and flag suspicious papers. But a recent in-house survey found about half of tutors were “slightly” or “not at all” confident in identifying a paper written by AI. Others were only “marginally confident”. This is hardly their fault. They lack experience and, without proper training or detection tools, the university is demanding a lot from them.
Again, unverified. But about this, at least, I can ask. At first go, I think the discretion for “poor academic practice” makes sense. But expecting tutors to flag papers for suspected AI use — if that is what’s happening — does seem like a practice with plenty-o-problems.
Continuing:
We have an academic workforce that doesn’t know what it doesn’t know. Our defences are down and AI cheaters are walking through the gates on their way to earn degrees.
Soon we will see new cohorts of doctors, lawyers, engineers, teachers and policymakers graduating. When AI can ace assessments, employers and taxpayers have every right to question who was actually certified: the student or the machine?
Fair.
There’s also this:
The University of Melbourne is moving towards a model where at least 50 per cent of marks in a subject will have to come from assessments done in a secure way (such as supervised exams). The other 50 per cent will be open season for AI abuse.
Again — news to me. And while this is more a question of pedagogy than fraud, I’m not sure I love this policy, at least as far as it’s explained here. Questions, I got ‘em.
Continuing:
Australian universities have surrendered to the chatbots and effectively are permitting widespread contract cheating by another name. This seriously risks devaluing the purpose of a university degree. It jeopardises the reputation of Australian universities, our fourth largest export industry.
There is real danger that universities soon will become expensive credential factories for chatbots, run by other chatbots.
I don’t know what to tell you.
Passing this along as well:
What is to be done? The challenge of AI is not a uniquely Australian problem but it may require a uniquely Australian solution. First, universities should urgently abandon the integrated approach and redesign degrees that are genuinely AI-free. This may mean 100 per cent of marks are based on paper exams, debate, oral defences or tutorial activities. The essay, the staple of higher education for centuries, will have to return to the classroom or perish. Australian universities can then proudly advertise themselves as AI-free and encourage international and domestic talent to study here.
That’s one way to go.
And:
Second, as AI rips through the high school system, the tertiary sector should implement verifiable admission exams. We must ensure that those entering university have the skills required to undertake it.
AI aside, I feel as though this has been an issue for a long time. Still, it’s hard to argue against something like verifiable admission exams.
So, there it is. Another professor is warning about the prevalence and consequences of AI-based cheating.
Adding it to the list.
TechCrunch: New Anti-Cheating AI Math App Picks Up Users
TechCrunch is not known for its skepticism, or even curiosity, about business claims related to technology products.
So, take it as words on the page that TechCrunch reported that an AI-powered math app, one supposedly designed to resist outright cheating, is picking up steam. From the coverage:
As AI becomes more prevalent in the classroom — where students use it to complete assignments and teachers are uncertain about how to address it — an AI platform called MathGPT.ai launched last year with the goal of providing an “anti-cheating” tutor to college students and a teaching assistant to professors.
Following a successful pilot program at 30 colleges and universities in the U.S., MathGPT.ai is preparing to nearly double its availability this fall, with hundreds of instructors planning to incorporate the tool. Schools implementing MathGPT.ai in their classrooms include Penn State University, Tufts University, and Liberty University, among others.
Left the link in, should you want to check it out.
I’ll also contact them and see if I can get a demo. If I can, I’ll report back.
Continuing:
The most notable aspect of the platform is that its AI chatbot is trained to never directly give the answer, but instead ask students questions and provide support, much like a human tutor would.
Although:
It’s important to note that, like any chatbot, MathGPT.ai’s assistant still has the potential to produce inaccurate information. The chatbot has a disclosure at the bottom that warns the AI may make mistakes. Users can report the responses to the company if they believe the questions were answered incorrectly.
I’m not sure how a student would know the answers are wrong, especially when the whole point is that the bot is not really answering the question in the first place.
Anyway, good to see integrity-forward AI coming to market. We need more of it.