(376) Newsweek - I Just Can't
Plus, Huffington Post UK has a story on AI and cheating. Plus, summer break awaits.
Issue 376
Subscribe below to join 4,766 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
A Professor, an Example, My Exasperation
A few weeks ago, Newsweek ran an article from Annie K. Lamar, who is listed as, “Assistant Professor of Computational Classics and Linguistics, University of California, Santa Barbara.”
It’s another AI-evangelist piece dismissing cheating as irrelevant — a strain of dented thinking that’s embarrassingly common. For example, the headline is:
Higher Ed's AI Panic Is Missing the Point
Dismissing people’s concerns about deeply damaging things that are actually happening is incredibly arrogant. In this context, it’s most often done with condescending brush-off words such as “panic” (see Issue 374). It’s the modern academic equivalent of “just calm down, sweetheart.” And it’s offensive.
You may read that and think I’m angry. Normally, I would be. But I’m just exhausted by it. It’s become so predictable, so tedious, so common, that my mindset is exasperation. Did I mention I’m taking a month off from writing The Cheat Sheet?
Our Newsweek author says that colleges and universities are facing a crisis:
uncertainty over whether and how students should be allowed to use generative AI.
I think that’s a little misframed. The questions and answers are considerably more intricate than a yes-or-no construction allows. But fine.
She continues:
In the absence of clear institutional guidance, individual instructors are left to make their own calls, leading to inconsistent expectations and student confusion.
That’s right. And I am sure that no one — most notably teachers — likes this resolution. But given that the answer on AI use in academic settings is complicated and varied, a single institutional policy probably would not fit.
It’s also a touch ironic that Lamar bemoans that professors are left to set their own policies but is eager to share hers. But before we get to her views, the second paragraph of the article is:
This policy vacuum has triggered a reactive response across higher education. Institutions are rolling out detection software, cracking down on AI use in syllabi, and encouraging faculty to read student work like forensic linguists. But the reality is that we cannot reliably detect AI writing. And if we're being honest, we never could detect effort, authorship, or intent with any precision in the first place.
I deeply wish schools were rolling out detection software and cracking down. From my vantage point, a considerable number of schools are twisting themselves into obscene knots to avoid doing anything close to that. Many are downright bragging that they’re not even checking for AI use — appropriate or otherwise (see Issue 296 for one example). But whatever, if you’re all-in on AI, you may see something different than I do.
Above, I left the link in so you could check that our author, in saying that “we cannot reliably detect AI writing,” links to an article from 2023, which is the first problem. In AI and AI detection, two years ago is 20 years ago.
But I kid you not, the second paragraph of that 2023 article says:
Turnitin’s detection tool successfully spotted the AI-generated text, returning a result of 100 per cent AI-generated content. ChatGPT was asked to rewrite the movie critique to make it “more humanlike”. The resulting output still did not trick the Turnitin detection tool.
I give up.
Lamar lamented that teachers were being asked to read student work “like forensic linguists.” Maybe she should have read the article she linked to — like, at all. Maybe anyone at Newsweek should have. I will never understand how any rational human can take a source that includes “returning a result of 100 per cent AI-generated content” and describe it as unreliable.
I really, really don’t want to go too far down this dead end. But the article Lamar linked to did find that some of the tested text returned mixed detection results, but only when that text was itself mixed: rewritten with AI assistance, hand-edited to include intentional errors, or written by a human and then spruced up with AI. That’s exactly what you’d expect. When a sample is neither fully human nor fully AI, a good detector says it’s mixed, because it is.
In one example from the cited article, the authors asked ChatGPT to rewrite a movie review in the style of a 14-year-old:
“This flick is the bomb-diggity. You won’t want to miss it, bro!” – Turnitin was unable to detect the AI content, returning a 0 per cent AI content in the final test.
If you’re a college professor, say, of linguistics, to pick a random example, and a student turns in “bomb-diggity, bro” and you’re worried about the AI detector, I think you missed the point. Moreover, that test is two years old. Across the board, AI detection is better now.
Also for the record, the cited article tested a few other junk detection systems and pointed out that the tests yielded “wildly varying results.” No kidding. But even here — even here! — the article Lamar cited to say that detection was unreliable says of those “freely available online AI-detection tools”:
Some tools proved reasonably reliable.
Come on. This is not a serious conversation we’re having.
And I’m not sorry for this either. Up top, Lamar wrote, “And if we're being honest, we never could detect effort, authorship, or intent with any precision in the first place.”
There are several tools available that track the process of writing, including authorship and how and where AI was used.
But most importantly, if you cannot detect when your students are giving honest effort — I question whether you should be in a classroom. If education pivots on the friction of learning, and you cannot detect effort, what would you say your role is?
I know I should not, but I’m going to go one more. If your primary function as an educator is assigning reading and grading final product, AI can do that. Faster, cheaper, and pretty accurately. Just go home. Teaching has to be about more than that.
If you’re reading this and you think I’m wrong, tell me. Better yet, write something on it and send it in, for when I’m on my summer break.
Here, you’re going to think I set this up. But on my honor, I did not. After her lines about unreliable detection and the inability to detect effort and intent, Lamar writes:
That's why I've stopped trying. In my classroom, it doesn't matter whether a student used ChatGPT, the campus library, or help from a roommate. My policy is simple: You, the author, are responsible for everything you submit.
She has stopped trying. It doesn’t matter.
I am so shocked.
And I guess UC Santa Barbara is cool with grading and giving out degrees based on what a student’s roommate knows. Good to know.
Lest you think I have mischaracterized what our author is saying, she continues:
That's not the same as insisting on authorial originality, some imagined notion that students should produce prose entirely on their own, in a vacuum, untouched by outside influence. Instead, I teach authorial responsibility. You are responsible for ensuring that your work isn't plagiarized, for knowing what your sources are, and for the quality, accuracy, and ethics of the writing you turn in, no matter what tools you used to produce it.
No, it’s not the same thing as originality — as in some measure of what a student knows how to do, or what they themselves understand. You know, learning. You may have heard it mentioned a time or two.
Instead, Lamar believes we should grade and grant education credentials based entirely on what a student makes, regardless of how and where they got it or what they may understand about it. “No matter what tools you used to produce it,” she says. The idea that assessment should be based on what a student can demonstrate on their own, she calls an “imagined notion.”
Not too much later, Lamar says, citing a New York Times article:
we cannot grade effort; we can only grade outcome.
This has always been true, but AI has made it undeniable.
And:
Our ability to detect effort has always been flawed. Now, it's virtually meaningless.
All that matters is product. The learning, the process, the effort — all irrelevant. If there’s a better ad for AI, I can’t imagine it.
If you actually believe this, I see why it’s easy to dismiss cheating. If all that matters is outcome, it’s not cheating. It probably deserves the A. Lamar literally says:
That's why it doesn't matter if students use AI.
And:
We should be teaching students how to write with AI, not how to hide from it.
We knew that was her position long ago, from the headline use of “panic.” All that’s missing is the tag “sponsored by OpenAI.”
And fine — in addition to being exhausted, I am also a little angry. Especially about garbage like this:
There are already cracks in the AI-detection fantasy. Tools like GPTZero and Turnitin's AI checker routinely wrongly accuse multilingual students, disabled students, and those who write in non-standard dialects. In these systems, the less a student "sounds like a college student," the more likely they are to be accused of cheating. Meanwhile, many students, especially those who are first-generation, disabled, or from under-resourced schools, use AI tools to fill in gaps that the institution itself has failed to address. What looks like dishonesty is often an attempt to catch up.
Insisting on originality as a condition of academic integrity also ignores how students actually write. The myth of the lone writer drafting in isolation has always been a fiction. Students draw from templates, search engines, notes from peers, and yes, now from generative AI. If we treat all of these as violations, we risk criminalizing the ordinary practices of learning.
I don’t know how many times I have to point out that this is untrue nonsense (see Issue 216). However many times that is, it obviously won’t stop the AI-profits (intentional misspelling) from repeating it. Anything to get you to stop checking for AI misuse.
I am drained by having to repeat, once again, that AI checkers do not accuse students. That AI detection is not a “fantasy.” If it looks like dishonesty, it does not matter if the intent was to “catch up.”
By the way, I remember reading somewhere that we cannot really detect intent anyway, so I am not sure that their intent matters. Unless it’s to condone the use of AI — then intent matters.
And “insisting on originality as a condition of academic integrity” is wrong, I guess. The idea that you should be assessed based on your own actual work — nutty.
I am done. I seriously cannot take this anymore.
“I Love Normal Cheaters Now”
Huffington Post UK has a story about AI and cheating, and, for Professor Lamar of UCSB, I’m going to start with the story’s subheadline:
"It essentially outsources the learning process for students."
The top headline is:
'I Love Normal Cheaters Now' – Professors Share How AI Is Changing Student Assessment
The article opens:
A couple of months ago, Dr Jonathan Fine – a lecturer in German Studies – shared an X post that made me laugh, then wince.
“I love normal cheaters now,” the academic wrote. “A student admitted to getting help from a person on an assignment, and I didn’t even penalise him because I was just so happy it wasn’t AI.”
“Laugh, then wince” feels right.
The article quotes Dr. Fine:
he says, “I don’t allow students to use AI, and I tell them at the beginning of class how awkward the conversation is when they’re caught, but they use it anyway.”
It continues:
He assigns a lot of in-class writing, which means it’s clear when AI has been used by a student. The lecturer says it reads very differently from their usual work.
“When I catch a student using AI, I try to use it as a teaching moment,” he told HuffPost UK.
“I talk to the students about how my job is to help them improve, but I really can’t help a computer. If students were to reoffend, then I’d have to escalate the situation as a violation of academic policies.”
“I really can’t help a computer” is a line I like.
The piece also quotes Dr. Steven Buckley, a lecturer in Media Digital Sociology at City St. George’s, University of London:
who stressed he was speaking from a purely personal perspective, said that he’s seen the use of AI grow “rapidly” in the past two to three years.
Though his experience with AI has only been in the humanities (Dr Buckley said it may be different for STEM subjects), he says AI “has forced me and many of my colleagues to reconsider how we evaluate what students have learnt in our modules and certainly change the type of assessments we use.”
More from Buckley:
“I personally consider the use of AI to be a huge problem as it essentially outsources the learning process for students,” he added (a recent MIT study suggested that those who use generative AI may lose critical thinking skills).
“These days, many students simply do not do any of the reading of the actual academic material and instead rely on things like ChatGPT or Notebook LM to provide a summary. These summaries are often at best very shallow and at worst are totally incorrect.”
Nonetheless, he said, some students accept these summaries uncritically.
Of all the cases of potential academic misconduct flagged to him, he says, “at least 70% of cases involve the use of generative AI in some form.”
No comment.
The piece continues:
His university does not use AI detectors when checking student work, as they can be inaccurate.
Still, no comment.
OK, one comment. They can be inaccurate. But a school should only use a good one, and a good one should be incredibly accurate.
When reviewing written work for suspected unauthorized AI use, Buckley says:
He looks for red flags like hallucinated references. Sometimes when asked about their paper, students suspected of cheating using AI “do not know or understand” the theories and arguments in their essays.
″Plus,” the lecturer added, “they often read like shit.”
Class Note, Summer Break
This is probably the penultimate reminder that I shan’t be writing The Cheat Sheet between July 15 and August 15. I’ll write the Issue on Tuesday, July 15 and again on Tuesday, August 18, picking up the regular cadence thereafter.
In case it is not clear, I need a break.
In that time, I’m turning this space over to you. I’ll organize and publish issues, but this is your opportunity to fill them with whatever integrity-related ideas or items you’d like.
You can reply to the e-mail newsletter, or e-mail me directly at Derek (at) NovemberGroup (dot) net
Seriously, whatever you want. Within reason. The floor will be open.