(371) Inside Higher Ed On Handwritten Assignments to Counter AI Cheating
Plus, Time. Plus, NY-Who-Dis?
Issue 371
Subscribe below to join 4,743 (+11) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
Inside Higher Ed Looks at In-class Writing and Handwritten Work. We Look at Inside Higher Ed.
Inside Higher Ed (IHE) has published a story about cheating. Well, kinda.
IHE History
Its very existence merits a mention since IHE usually ignores the topic.
When they do cover it, their compromised relationship with cheating usually surfaces — an editorial or philosophical bias that dismisses cheating, treats cheating mitigation with hostility, and occasionally simply misrepresents things. It does not help that IHE has routinely run ads for cheating companies such as Chegg, Course Hero and Grammarly.
If you’re interested, you can track this pattern by starting with Issue 353 and working back. Or, lifting from my previous coverage in Issue 326:
I mean, IHE does it so often that it’s become cliche — see Issue 102, Issue 159, Issue 95, Issue 49, Issue 274, Issue 316, Issue 311, Issue 195, or Issue 167. And that’s not half of the examples.
And, true to that pattern, this new article does not disappoint.
That's not to say the article isn't interesting and worth a read; it is. And I'll share a bit from it in a moment.
But to stay on IHE for now, it’s interesting that the URL for the story is: https://www.insidehighered.com/news/faculty-issues/curriculum/2025/06/17/amid-ai-plagiarism-more-professors-turn-handwritten-work
"Amid AI plagiarism" is right there at the start of the URL. Yet, when IHE actually publishes the story, the headline is:
The Handwriting Revolution
Five semesters after ChatGPT changed education forever, some professors are taking their classes back to the pre-internet era.
Poof. AI plagiarism is gone. Now, ChatGPT has quite neutrally “changed education.” It’s not good or bad now, just changed.
Here's how many times the story uses the word plagiarism: zero. Uses of misconduct: zero. Uses of "integrity," outside of identifying someone who works in the field: none. And here's how many times the story uses the word "cheat" or "cheating," outside of quoting someone: once.
And you may say — OK, that’s OK. But here’s how that word gets used the one and only time:
As the technology has grown more ubiquitous, faculty have found it increasingly difficult to combat the rise of AI cheating; it’s hard to counter the argument that students will likely use AI in their future careers so professors should be leaning into ChatGPT—teaching how to use the tool, or at least allowing their students to experiment with its use in assignments—rather than shying away.
Let’s begin by marveling that this is all one sentence. And such an odd sentence, if you really look at how it’s put together. Even though they are connected by a semicolon, the two thoughts aren’t connected at all. So much so that I’d believe that the section after that semicolon was stuck on after the fact. It just doesn’t fit. And it’s really weird.
In this sentence, IHE gives us two things, sandwiched around the only mention of cheating. One is that it's "increasingly difficult to combat." Fine. But the other is that somehow it's hard to argue against using ChatGPT in school because jobs. In fact, IHE declares that "it's hard to counter the argument" and "professors should be leaning into ChatGPT," instead of "shying away."
IHE’s only mention of cheating is an ad for using ChatGPT.
And, by the way, it’s not hard at all to argue against the idea that students will use ChatGPT so schools ought to just “lean in.” I’ve done it. But we don’t even get this argument. IHE just says, essentially, that this argument is over. When it comes to cheating, it’s difficult to argue against leaning in, IHE says.
Not to mention, this is not what this article is about. It feels to me as though someone at IHE said, "We have an article on cheating; we have to tell people not to worry about it." So in went the jobs/lean-in nonsense — just plopped in there with a semicolon. Also, the second bit of this sentence is argument and editorial, not reporting, dropped into the middle of a factual, reported article.
Something is very strange.
Also very strange is this, from this story, which is about ways educators are addressing AI plagiarism:
At the same time, others have raised concerns about the return to handwriting. How do you accommodate students with disabilities that require them to use technology? How do you account for the fact that handwriting takes longer than typing? What if students refuse to handwrite assignments—or express their displeasure with the requirement in course evaluations? Most ominously, what if the technique doesn’t fix the problem and students simply transcribe responses produced by ChatGPT?
Others have raised concerns. About a way to prevent cheating. Because it would not be an IHE article unless preventing cheating was a problem.
These "others" are not mentioned. Or quoted. And whoever these mystery people are, they "raised concerns." This is not journalism.
Moreover, what? Accommodations for disabilities are a concern? Did IHE really just say that? Accommodations are not only common, they long predate ChatGPT, or even laptops. Accommodations were a thing when handwritten work was the only option.
Accommodations are also — wait for it — legally required. What do you mean, “how do you accommodate students with disabilities?” Seriously?
I mean, this is so obvious that later in this same article we get:
Those who need more time or other accommodations on their essays are allowed to take them in the disability services office with a proctor.
And how do you account for the fact that it takes longer? I can name nine ways to account for that. What if they refuse? Seriously? IHE is asking what happens when a student refuses to take an assessment.
What if they are upset and leave bad reviews? Again, are we serious? Let me think — bad review versus cheating. Tough one.
What if it does not solve the problem? Come again? Like no one has ever heard of a TA or a test center. This can’t be serious. Oh, it’s not. It’s Inside Higher Ed.
I don’t want to be a total conspiracy person, but this section feels stuck in after the fact too. It’s unsourced and asks questions the article itself answers. In other words, IHE reports on a really positive solution to cheating but throws in all these silly, unrelated, unsourced, even already answered questions about it. Again, very strange.
Even if these were not editorial additions to the core story, IHE has taken this cheating-is-not-a-problem stance so often, and for so long, that it's impossible to believe this framing is an accident.
I confess, I do not understand why IHE is this way. But they are.
The Actual Article
With all that aside, the article is good. And of interest.
It starts:
In Melissa Ryckman’s world history survey, an introductory class that consists mostly of non–history majors, she asks her students to complete a brief 100-word assignment every Friday based on what they learned over the previous week. The questions are not based on rote memorization but rather ask students to think critically about the material, exploring, for example, whether they would rather be hunter-gatherers or farmers. It’s an attempt to get students both to engage with the lessons—and to avoid ChatGPT.
Ryckman, an associate professor at the University of Tennessee Southern, said she figured students might actually want to share their own opinions rather than rely on the generative AI tool that has become the bane of many educators. But she found that students still submitted AI-written answers.
So, students are outsourcing a 100-word personal reflection assignment. Literally not one single person is surprised.
It continues:
Ryckman is one of many professors who are weighing a shift back to handwritten assignments in the hopes of preventing students from copying and pasting their work from ChatGPT and other generative AI tools, which students are increasingly using to complete their schoolwork.
I also like that the story quotes an actual expert in integrity, Tricia Bertram Gallant, who:
stressed the importance of finding secure solutions for evaluating student performance as ChatGPT’s popularity rises—including handwritten exams and assignments.
“We’re not just in the business of facilitating learning; we’re in the business of certifying learning. There has to be secure assessments to say to the world, ‘This is a person who has the knowledge and abilities we are promising they have,’” she said.
As usual, she nailed it.
Let me also say that I just love this reporting:
“We’re sometimes pushed to incorporate high-tech tools in our classroom, and this can be difficult to resist, especially if a professor is not tenured or they’re an adjunct,” said Sara Gallagher, a professor at Durham College in Ontario, Canada. “In my college that I’m teaching in right now, we had a whole seminar that was mandatory that we attended that was very pro generative AI, and I remember thinking after the session, ‘There’s no way for me as a professor to tell if someone’s cheating,’ because [the seminar] really didn’t answer that question.”
A college made educators go learn about AI, pushing the technology. But it did not address cheating. I’m horrified. And not the least surprised. All you have to do is lean in, amiright?
I think this too is important reporting:
But many of the professors now relying on handwritten work—much of it completed in class—say they’ve seen significant changes, not only in the quality of the students’ work, but also in their attitudes toward their education more broadly.
Monica Sain, who teaches English at Mission College, a community college in California, first began inching away from tech in her classroom shortly after ChatGPT came on the scene in late 2022. She soon asked students in her English composition courses to begin highlighting and annotating the assigned readings, just to prove they weren’t relying on ChatGPT summaries. Later, when she noticed students asking AI for help during in-class discussions, she prohibited laptops and phones from being used in her classroom. She calls it a “digital detox.”
Is it just me, or does that feel important?
There’s more to the experience of Professor Sain, which I encourage you to jump over and read.
Gallagher, the professor in Canada, also says:
“When students are using ChatGPT and using a lot of digital tools like ChatGPT to do the work, they become disengaged from the work and they just stop caring—which, I kind of can’t blame them,” she said. “One thing I’ve noticed [since using in-class or handwritten assignments] is that that engagement is back. The wanting to ask questions, the wanting to learn, even coming up to me after class and having a discussion about what we talked about. That’s something that disappeared in my classroom for up to a year, and I was worried—is it something I’m doing?”
Again, good reporting. And important.
There’s more:
But the handwritten work seems to have made an outsize impact. Not only are students more engaged with the material, but Gallagher also said they are talking and connecting with each other more than any of her other classes over the past several years.
This feels great.
Even though parts of it are odd and have a strange pro-cheating bias, there’s much to like in this article. I suggest giving it a read.
A Moment of (Your) Time
Time, the venerable nameplate of more than a century of American media, recently ran a piece on the challenge writers face in this era of AI-generated whatever-it-is, and on why the success and survival of human writing are important.
This is from it:
Well, I’m here to say that we can’t let machines write our novels. Why? Because unless someone told us we were reading a novel penned by AI, we might never know the difference.
I thoroughly agree about the pivotal and essential nature of preserving and promoting human writing. It’s among my core values and the foundation of the business I’m pushing to start, the one for which I’ve been begging help (see Issue 369).
In other words, I subscribe to all its major points.
It’s in The Cheat Sheet because the piece also includes this mind-numbing display of ignorance:
Research has shown that so-called AI detection software has a dismal success rate: As New York magazine recently reported, the program ZeroGPT identified a chunk of the Book Of Genesis “93% AI-generated.”
I left the link in so you can see that, in addition to the assertion being untrue, the link actually goes to a story in the Dallas Morning News about an author who interviewed an AI bot, or something. As such, it’s epic rant-bait for me about the abject failure of editorial oversight. I mean, how seriously are we, as readers, to take a story in which no one — not even the writer — bothered to even check the links?
It’s obvious that the Time piece is referring to this article in New York Magazine, which we covered in some depth in Issue 364. As such, it’s also obvious that the writer of the piece in Time did not read it. Or perhaps did not understand it.
The New York Magazine piece is amazing and right. And, on this point about ZeroGPT and the Book of Genesis, it does say that ZeroGPT tabbed the text as 93% AI. As I said at the time, this reflects that ZeroGPT is terrible, and also a failure to understand both how AI detection works and how it's used.
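For readers who wonder how a millennia-old text gets flagged as AI at all: many simple detectors score how predictable a passage is to a language model, on the theory that AI output is smoother and more predictable than human prose. ZeroGPT's actual method is not public, so what follows is only a minimal sketch of that perplexity-based approach; the model, threshold, and function names are my illustrative assumptions, not any real product's internals.

```python
# A minimal sketch of perplexity-based AI detection, the approach many
# simple detectors are believed to use. NOT ZeroGPT's actual method
# (which is proprietary); model choice and threshold are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Perplexity = exp(average negative log-likelihood per token);
    # lower means the model finds the text more predictable.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# A naive detector flags low-perplexity (highly predictable) text as "AI."
AI_THRESHOLD = 40.0  # illustrative cutoff, not any real product's setting

def naive_verdict(text: str) -> str:
    return "AI-like" if perplexity(text) < AI_THRESHOLD else "human-like"
```

The catch: famous passages like Genesis saturate a model's training data, so the model predicts them almost word for word, their perplexity is rock-bottom, and a naive threshold calls them "AI." That says something about crude detectors, not about AI detection as a category.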
But the annoying part of irresponsibly pulling from the New York Magazine article is that it says:
It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data.
But our Time author missed that, I guess — deciding to simply, and incorrectly, declare that “Research has shown that so-called AI detection software has a dismal success rate.” Irresponsible is the most generous I can be.
Accordingly, it seems I have to write this again: junk AI detectors are junk; they cannot stand in for the entire class of tools. Several AI detectors are excellent.
It’s deeply annoying.
Anyway, there’s this otherwise important article in Time that criminally misrepresents the state of AI detection.
NY-Who-Dis?
About a month ago, I was speaking with a professor at NYU who was somewhat frustrated that the school had turned off AI detection, which had been provided by Turnitin.
Anyway, before going off on the sheer insanity of that decision, I tried to confirm it. I e-mailed John Beckman, NYU's Senior Vice President for Public Affairs and Strategic Communications, seeking to confirm whether the school had, in fact, turned off Turnitin's AI detector. I e-mailed on May 20, again on May 22, and a third time on May 29. He never replied.
So, I have no idea if NYU is checking for AI or not. I wish I could tell you. But I’m sharing this to make the point that, nearly universally, schools do not want to talk about integrity, or cheating. They just won’t.
If you can’t see a thing, won’t talk about a thing, refuse to hear about a thing — must not be a thing. Right?