Student: "You Have No Idea How Much We're Using ChatGPT"
Plus, a professor at Texas A&M goofs. The Washington Post goofs more. Plus, 200 barred from future exams in India.
To join the 3,280 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. New Issues every Tuesday and Thursday. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 13 amazing people who are chipping in a few bucks a month via Patreon. Or joining the 13 outstanding citizens who are now paid subscribers. Thank you!
Student: Academic Integrity Policies are “Laughably Naïve”
A student at Columbia University thought it a good idea to write an article for The Chronicle of Higher Education about students, original work, and ChatGPT.
There’s a bit to get to here but before we do, good work again by The Chronicle for giving space to the important issues of academic integrity. They, along with Times Higher Ed, seem to be the only education publications to take it seriously. I am glad they do.
Back to our student, there are a few big bullet points to share and unpack. The first is the headline to his piece, which he may not have written, but nonetheless says:
I’m a Student. You Have No Idea How Much We’re Using ChatGPT.
It’s attention-grabbing. And I know I’m not strictly the audience but I think I do know how much students are using ChatGPT. I think I’ve written about it and shared it often. The answer, both the student and I agree, is - more than any educator or school official would like to imagine. So, he’s right.
The subhead of the article, however, is:
No software or professor could ever pick up on it.
Here, I part company with our student writer, or at least with his headline writer.
That’s because, in the plain meaning of the sentence, that’s untrue. Software and professors do “pick up on” the use of ChatGPT - all the time. But as our student actually means it, as he writes in the body of his article, he is at least possibly right.
What he describes is using ChatGPT to outline papers, suggest topics, sustain arguments. Then, using ChatGPT’s input and guidance, the student writes the paper. Is that “using ChatGPT”? Yes. Is it submitting GPT text as your own? No. Is it cheating? That depends on what the professor expects from the course or assignment. Would such an approach bypass GPT-detection? Very likely. Will it fool professors? Maybe. Maybe not.
As an opening, the student writes:
Look at any student academic-integrity policy, and you’ll find the same message: Submit work that reflects your own thinking or face discipline. A year ago, this was just about the most common-sense rule on Earth. Today, it’s laughably naïve.
And here, I have a bevy of issues. Yes, a bevy.
My first is the idea that it’s “naïve” to want or expect that academic work reflect a student’s thinking. If what is being produced is not a student’s thinking, what’s the point? Grading what a computer “thinks” is a key argument from the Iliad (the example cited by the writer) is entirely pointless. Who cares if a computer can search through millions of lines of text and predict the order of words so they sound nice?
Should a student receive a grade, or credit, or a degree based on recycling arguments made by machines? I certainly think not. I mean, if you’re not doing your own thinking, why are you in college? I’ll go further, if you actually need a machine to find and defend a central theme in a piece of writing, maybe you should not be in college. Your seat may be put to better use by someone else - especially at Columbia.
Further, to suggest that policies - and by extension, schools - which expect genuine self-thought are “laughably naïve” is a call to war. If that’s true at all, I ask again - what’s the point of a degree? What’s the point of college at all?
But it does seem as though this is the point our wayward academic is trying to make as he writes later:
In reality, it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own. Once that becomes clear, it follows that massive structural change will be needed if our colleges are going to keep training students to think critically.
Note the phrase “work that looks like your own.”
Though, he is right that, if AI can fake your thinking, schools will have to reconsider how they teach thinking. Or do something about the faking.
Here’s one idea - don’t let students use AI to do their thinking. I know, crazy.
Still, this idea of necessary change is based on the theory that AI thinking with student writing is utterly undetectable, which is not true. Engaged professors know what students know. They know what arguments and critiques are informed or regenerative. They know what was - and what was not - taught in class or in the reading. Sidestepping the work and letting AI do your thinking is no foil to engaged teaching.
Moreover, even if this type of AI trickery were undetectable, the solutions aren’t complicated. Even our author suggests:
If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing this, and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.
Setting aside the “some new style of schoolwork” - whatever that means - in-class writing or follow-up conversations would quickly and easily unmask AI-users who think they’re being savvy.
I hate the analogy because it’s bogus, but consider the calculator. Sure, it can do the math for you, but when it’s your turn to work the problem at the proverbial chalkboard, knowing the math actually matters.
There’s so much more to say about this but I’ll wrap up where our student writes:
The problem isn’t with a lack of AI-catching technology — even if we could definitively tell whether any given word was produced by ChatGPT, we still couldn’t prevent cheating.
Of all of the most absurd anti-integrity arguments, this one is my favorite. We can’t stop cheating, so why bother? Silent alarms don’t stop bank robberies, so no point in having them. Speeding tickets don’t stop speeding, so let’s not do that either.
Seriously, if this is your argument - get a new argument.
Texas A&M, GPT, and the WaPo’s Weird Coverage of It
The Washington Post ran a story the other day about a class at Texas A&M at Commerce, ChatGPT, and cheating. It’s the WaPo and it’s cheating, so you should know about it.
It’s really two stories - academic and journalistic.
The academic one is simple. On the eve of graduation, a professor notified his entire class that they were receiving grades of incomplete due to what he believed was use of AI writing in their assignments, ChatGPT in particular. This potentially impacted graduation requirements.
The professor made several errors. First, according to the reporting, he checked the student work with ChatGPT itself - asking ChatGPT whether the papers were ChatGPT. That’s not going to work. For one, ChatGPT is not built for that, and, for another, the company’s own system that’s supposed to be able to detect AI-created content is flatly awful. It is, by far, the worst one on the market. What the professor did is the home improvement version of asking a carpenter to fix your toilet.
Second, our professor used the scores to make misconduct accusations. That’s always a bad idea. Even scores from the good AI-detection systems should not be used this way. They are flags, not jury verdicts. They should inform a decision, not be it.
As a result, the professor’s net was poorly cut and wildly thrown. That is user error. And a bad case of it.
Nonetheless, the WaPo reported that at least one student admitted to using AI on their assignment, in violation of class or school policy. And the professor did offer students a chance to submit new assignments, which, the WaPo notes, several students did.
On the journalism side, the story is awful. For one, the Post headline is:
A prof falsely accused his class of using ChatGPT. Their diplomas are in jeopardy.
It’s not clear at all that the accusations were false. Some, probably - he accused his whole class. But in at least one case, the accusation was true. That alone should negate the headline that he “falsely accused his class.”
Worse, the subheadline is:
AI-generated writing is almost impossible to detect and tensions erupting at a Texas university expose the difficulties facing educators
That’s just a factual error. AI-generated text is not “almost impossible to detect.” Somewhere on the order of a dozen companies are scanning and detecting millions of cases a day. That’s not just wrong, it’s invented.
Worse still, the Post story quotes Ian Linkletter as some kind of expert arbiter on AI detection. They never mention his long and lurid legal history of litigating with proctoring providers (see Issue 206) - which on its own should have disqualified him as a source in any respectable piece of journalism. But if WaPo wants to quote him, it’s inappropriate not to identify him as a frequent and hostile critic of academic integrity providers.
And, of course, Linkletter is wrong on issues of verifiable fact - though the Post doesn’t bother to correct him. As an example of why this is a problem, Linkletter, who as far as I know has zero formal training in AI or language models or even academic integrity, nonetheless tells Post readers:
He says AI detection will have a hard time keeping pace with the advances in large language models.
I genuinely don’t know what the Washington Post was thinking.
Indian State Bars 200 Examinees for Cheating
I’m way behind in my international coverage and it’s stacking up - sorry.
From April, an early inquiry into the use of “unfair means” during a government recruitment exam in India has put about 200 people on a blacklist - barring them from taking future government exams. Authorities said the number “could go up.”
Noteworthy is that police were leading the investigation and had compiled the list.
Despite state action and arrests, India continues to struggle with frequent and widespread cheating in its exams - at all levels of academia and government. Most countries do, the United States included. Except for the bits about state action. And the arrests. And the press coverage.
Otherwise, exactly the same.
There may or may not be an Issue of The Cheat Sheet this coming Tuesday, May 23. But there definitely will not be one on Thursday, the 25th. Regular Tuesday and Thursday dispatches will resume thereafter.
As always, if you have comments or tips or notes of your own, a reply e-mail reaches me. Thank you for reading.