341: A Critic of AI Detection Changes Course
Plus, Clemson, Albany State and Chegg. Plus, OpenAI tells students to give up: AI is inevitable.
Issue 341
Subscribe below to join 4,324 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 46 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month or $80 a year, and corporate or institutional subscriptions are $240 a year. Thank you!
Critic of AI Detection Changes Her Mind, Will Use It in Her Classes
In a strong LinkedIn post, Anna Mills, a professor, education observer and analyst, and long-time critic of AI detection in education, says she has changed her mind and will use the technology in her classes.
Mills starts her post with this:
I argued against use of AI detection in college classrooms for two years, but my perspective has shifted.
Good.
From the very onset of generative AI, and of the tools that can detect it, I’ve tried to be respectful of AI detection critics, even though the evidence never supported their position. Accordingly, it’s really nice to see a critic examine the evidence and adjust their words and actions. I think we can all agree that, regardless of the topic, it does not happen often.
She continues:
I ran into the limits of my current approaches last semester, when a first-year writing student persisted in submitting work that was clearly not his own, presenting document history that showed him typing the work (maybe he typed it and maybe he used an autotyper). He only admitted to the AI use and apologized for wasting my time when he realized that I was not going to give him credit and that if he initiated an appeals process, the college would run his writing through detection software.
Exactly. I love that this is a real-world example of what happens when we alter the student risk/reward calculus: the very possibility of outside detection triggered an admission. It’s possible, therefore, that had detection been a real possibility all along, this student would not have taken such shortcuts in the first place.
Mills goes on:
I'm also influenced by recent research that suggests detection is likely not biased against English language learners after all, that educators are not as good as we think we are at distinguishing AI from student writing on our own, and that some detection systems are pretty accurate when it comes to naive copy/paste AI use.
As the kids say, I feel seen.
I cannot say whether AI detection is biased against non-native speakers, though I have said that the research supporting such a position is complete junk (see Issue 216 or Issue 251). I have also often pointed out that teachers are really quite bad at spotting AI text, even when they are told to look for it (see Issue 325 or Issue 332). And I’ve said — too often for some, I am sure — that some AI detectors are quite accurate, as the research repeatedly shows (see Issue 250).
But this is not about me, I swear.
Mills says further:
As I teach composition online asynchronously this semester, I'm incorporating Turnitin alongside process tracking, writing process assignments, social annotation, lots of student choice, peer review and tutoring, video assignments, and clear messages about the purpose of each activity and the value of the writing process.
First, that’s a ton of work. But it’s a very solid approach and highlights that AI detection can play a key role in the learning process.
Continuing:
What about the risk that AI detection will lead to false accusations? It's real, and I let students know I'm aware that detection yields some false positives. I will never trust AI detection as firm evidence, and I am not punishing students.
This is entirely reasonable and is, I think, where most people are. As I and many others have said often, a score from an AI detector should never be the sole answer to questions of integrity or authenticity. It should be part of the picture, not the picture.
The most important part of Mills’s reconsideration and writing here may be this:
In most human endeavors, some accountability structures are important even when we design for intrinsic motivation.
I cannot, and did not, say it better. Systems without accountability invite human temptations.
I highlight this too, from Mills:
I know so many colleagues I respect are highly critical of AI detection, seeing it as signaling antagonism toward students. Christopher Ostro clarifies that his purpose is not to punish students but to provide some accountability that, in the end, encourages learning and shows that we care.
The link between using accountability tools and showing that a teacher cares is deeply fundamental and overlooked. The research has been there for a long time — if students think teachers don’t care enough to protect the process and evaluation, students won’t care enough to do it honestly (see Issue 13).
If you don’t lock your door, in other words, people assume that you don’t really care about what’s inside. And if you don’t care, no one else will.
To conclude, good for Mills for coming to a place where AI detection tools fit her teaching approach. I think they are essential to creating a culture of integrity and fairness. And I have never understood why any educator would actively want less information about student work. Hopefully, more teachers will move in that direction.
On Chegg, Clemson, and Albany State University
In Issue 339, we shared that Chegg had somehow convinced Albany State University (GA) and Clemson University (SC) to host on-campus events with company representatives as part of an effort around student mental health.
Last week I wrote to both schools asking if they wanted to comment on their collaboration with Chegg, which, since you read The Cheat Sheet, you know is probably the largest cheating provider on the planet.
Clemson did not respond — no surprise. Albany State did reply. But it was a reply, not a response, in that it did not address the issue in any way. The school simply said how important student mental health is, without addressing Chegg or integrity. I followed up. They did not respond. Obviously, that’s disappointing.
The exchange with Albany State spurred me to look up the school’s academic integrity policy, mostly to see if it named Chegg. Some policies do. All should.
Albany State’s honor policy does not name Chegg specifically. But there are other issues with it. For one, it appears the policy has not been updated since 2018, before generative AI was a thing. That’s not great. I asked the school whether that was true. They did not respond.
Worse, the policy has grammar errors or typos. For example:
To remind student of their responsibility to uphold the Academic Honor Code, the following statement will be included in each course syllabus – “It is understood that all students are required to abide by the Albany State University Academic Honor Code as stated in the Student Code of Conduct.”
To remind student.
Come on. It’s a paper-thin idea that putting this sentence in a syllabus will accomplish anything. Not to mention that the phrasing is odd. But the fact that it says “to remind student” and no one noticed or corrected it, probably since 2018, shows exactly how focused the school is on integrity. I mean, can we at least pretend to take this seriously?
It also says:
Any allegations of violation of academic integrity which is referred to the formal hearing process will be heard by the Academic Honor Code Committee
Any allegations which is. I mean…
You think I’m being too picky.
How about this, from the policy’s definition of “Academic Dishonesty”:
Academic Dishonest includes, but is not limited to cheating, plagiarism, and fabrication.
Yes, academic dishonest. In the definition. It’s not even funny.
But it’s impossible to believe that the school takes this issue seriously when its written policy includes such obvious, uncorrected errors. And that’s before they welcomed Chegg on campus.
If you were a student who happened to find your way to the “Academic Honor Code” at Albany State University, and you saw that it said “academic dishonest,” what would you think?
OpenAI Says, Essentially, That Students Should Just Surrender to AI
Business Insider has a story about executives at OpenAI (ChatGPT) meeting with students in Tokyo.
I first saw the article on Joseph Thibault’s LinkedIn, and am sharing it as a creepy milepost for where we really are in this hyper hype cycle of AI, especially in education.
The message is dystopian: you cannot resist, give up, embrace it.
These are the article’s first two sentences:
Sam Altman says it isn't a matter of if AI is going to outpace humans, but when.
"You will not outrun the AI on raw horsepower," Altman, the CEO of OpenAI, said in a Q&A session alongside OpenAI's chief product officer, Kevin Weil, earlier this month with students at the University of Tokyo. "That's over. That's probably over this year."
The italic is original to the story.
Nice, right?
According to the story, Weil:
said the sooner students start integrating AI into their daily lives, the better prepared they'll be when it crops up in future professions.
I highly doubt that. If AI use is ubiquitous, non-AI skills will be what you want, what everybody wants. Even generative AI company Anthropic/Claude tells people not to use AI when applying to work there (see Issue 338) because they want to hear from you, not their bot.
But that’s for a different newsletter.
Weil goes on:
"To me, the lesson in there, the thing to take away now, is just start using these tools," the CPO said. "Start incorporating them into the way that you work, into the way that you study. When you're doing something new, ask yourself, 'Is there a way that AI could help me do this faster?'"
Just start using them. Start. Go on.
I don’t want to repeat all of it; you should check the article out. But the tone is very defeatist, with things such as:
trying to best AI in terms of pure skill was like trying to "outrun the calculator" at arithmetic
Honestly, it turns my stomach.
Continuing:
Altman recommended that the students in attendance develop new skills to help them leverage AI to their advantage.
"The skills that you need in that world are figuring out what people want, sort of creative vision, quick adaptability, resilience as everything is changing around you, and the sort of learning how to work with these tools to do way more than people could without it," Altman said.
When students do enter the workforce, Weil said, they could benefit from keeping AI in mind. The best-positioned companies are those that see the technology as a potential boost rather than a competitor, he added.
Let’s be clear, since Altman is giving learning advice to students, that “figuring out what people want,” and “creative vision,” and “adaptability,” and “resilience,” are not new skills. They have been around forever. Most of those are bundled up in what we used to call the entrepreneurial mindset.
As for “learning how to work with these tools” — that’s not new either. Whatever tools were new in your lifetime, you had to learn to work with them. Or find a career in which they aren’t required.
But I’m off topic again. In the arena of integrity and authenticity, it was Business Insider, after all, that wrote back in 2023 that “ChatGPT is mainly a tool for cheating on homework” (see Issue 248) and linked high usage of ChatGPT to college exam periods.
While we are here, remember that OpenAI and Chegg announced a partnership (see Issue 203). And that OpenAI has the technology to watermark their text, making it easy to detect. But the company has chosen not to release it (see Issue 308) because it would hurt their business, which puts OpenAI in the business of deception.
OpenAI functions like a cheating company in that it allows students to astroturf academic products without putting in the academic work. It’s steroids — easy performance, zero effort.
As a thought experiment, replace the words “AI” or “ChatGPT” with steroids and see how it flows:
Altman recommended that the students in attendance develop new skills to help them leverage steroids to their advantage
it isn't a matter of if steroids are going to outpace humans, but when
You will not outrun steroids on raw horsepower
To me, the lesson in there, the thing to take away now, is just start using steroids. Start incorporating them into the way that you work, into the way that you study. When you're doing something new, ask yourself, 'Is there a way that steroids could help me do this faster?'
trying to best steroids in terms of pure skill was like trying to "outrun the calculator" at arithmetic
Kinda fits, doesn’t it?
My main gripe with the message from OpenAI execs to students is that they've downplayed the value of learning. I don't dispute the power of AI, but they have a megaphone right now that could be used to help students understand that teachers are working to get them to use their minds, not just setting up an obstacle course to showcase the agility of a super-powered AI.