(374) Two Very Important Articles
One is a must-read. The other, a deep embarrassment. Plus, a kickstart.
Issue 374
Subscribe below to join 4,752 (-2) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free. Although, patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
The New Yorker Delivers a Must-Read on AI Text, Cheating, Writing
If you read one article on the state of higher education, the impact of generative AI, and cheating — read this one, in The New Yorker magazine.
Note: Some of the quotes in the story contain profanity.
I’m going to ask again: please click over and read it. The publisher and writer deserve the click and your attention.
The piece starts with a conversation with two students at NYU:
Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs.
This is one of my favorite truths of covering academic integrity. Students predictably say that everyone cheats — just not them personally. But if you probe, you find that the “everyone” really is everyone, including them.
The conversation with this student goes on:
“We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.
American education right now. Even in a “screen-free” class, AI is doing the work.
The article’s author asked to see an example of AI-created work the student had turned in. And:
Alex searched until he found a paper for an art-history class, about a museum exhibition. He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor’s instructions. “I’m trying to do the least work possible, because this is a class I’m not hella fucking with,” he said. After skimming the essay, he felt that the A.I. hadn’t sufficiently addressed the professor’s questions, so he refined the prompt and told it to try again. In the end, Alex’s submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper’s argument, but that if the professor had asked him for specifics he’d have been “so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t have made much of its generic tone, or of the precise, box-ticking quality of its critical observations.
The student doesn’t want to do the work. Had a professor asked, this would have unraveled. I think we can assume no inquiry was made. The box-ticking works both ways.
A second student, Eugene, was there as well. The story says he:
serious and somewhat solemn, had been listening with bemusement. “I would not cut and paste like he did, because I’m a lot more paranoid,” he said. He’s a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made easily noticed errors. “This passed the A.I. detector?” he asked Alex.
I think we can infer that the submission reviewed above was not checked by an AI detector. If for no other reason than that NYU has, it seems, unplugged theirs (see Issue 371). There are no guards in the towers. The doors are unlocked, open.
By choice.
The story continues:
Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was “hella old,” and therefore probably didn’t know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. “That’s better than I expected,” Eugene said.
Here, I want to make two points.
The first is what I usually say — that GPTZero and Originality are not good, not reliable. And no one who takes integrity seriously should ever work with Copyleaks — not ever (see Issue 208).
The second is a slow-clap for the author who did not take the inaccurate and cheap shot that AI detection does not work. It does. Instead, he reports the results, exposing the faults of these specific systems. I’m not sure, but I think they used to call that reporting.
So, a third thing. This paragraph unearths a hidden danger when schools turn off their AI alert systems — teachers will use them anyway, on their own. Problem is, they don’t return consistent results, for many reasons. This produces inaccurate, inconsistent, and unfair grading and review.
The reporter continues:
I asked if [the second student] thought what his friend had done was cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”
Of course.
One of the things that frustrates me the most about having covered academic integrity for about a decade now is how not complicated it really is. We all know what cheating is. We all know it’s wrong.
Continuing the conversation, our reporter writes that one of the students:
admitted to me and Eugene that he’d used ChatGPT to draft his application to N.Y.U.—our lunch might never have happened had it not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he said.
NYU must be beaming with pride.
Even so, we get more, from one of the students:
“It’s cheating, but I don’t think it’s, like, cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar.
It is cheating — but it’s OK because the students themselves have decided what is important and the school doesn’t care. There’s a phrase about inmates and an asylum that I’m not quite brave enough to type here.
Our outstanding author continues:
I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut.
I want that to be true. But my soul knows it’s not. On cue, we get:
But I recently began noticing that some students’ writing seemed out of synch with how they expressed themselves in the classroom.
Surprise, surprise.
Going on:
It’s easy to get hung up on stories of academic dishonesty. Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students.
I don’t know what to add — 59% is a substantial undercount, I agree.
I’m not sure what survey this was, but my best guess is it’s the one we covered in Issue 338, should you want context. If I am right, just 19% of leaders in that survey said cheating was unchanged — not better, just the same as before. That’s 59 to 19. He thinks the 59% is conservative. I think he’s right.
Sticking to the flow, the article explores what this era of AI cheating means for teaching, learning, and writing specifically. I liked this:
College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You’re being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.
I got nothing. This paragraph is everything. “The arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.” My powers are limited to repetition.
Continuing:
OpenAI recently released a report claiming that one in three college students uses its products. There’s good reason to believe that these are low estimates. If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn’t far-fetched to regard A.I. as just another productivity tool. “I see it as no different from Google,” Eugene said. “I use it for the same kind of purpose.”
Shifting focus again slightly, the rock-star article goes on:
Unable to keep pace, academic administrations largely stopped trying to control students’ use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I.
There’s a cliché that journalism is the first draft of history. Sometimes, good journalism also lets us see the future — and there it is. Schools stopped trying. Hopeful resignation.
When this chapter of history is written I hope it includes that schools could have taken action, but resignation was easier, cheaper. Choices were made. Have been made. Are being made.
The piece — I say again, it’s great — shifts to professors:
Corey Robin, a writer and a professor of political science at Brooklyn College, read the early stories about ChatGPT with skepticism. Then his daughter, a sophomore in high school at the time, used it to produce an essay that was about as good as those his undergraduates wrote after a semester of work. He decided to stop assigning take-home essays. For the first time in his thirty years of teaching, he administered in-class exams.
More:
Siva Vaidhyanathan, a professor of media studies at the University of Virginia, grew dispirited after some students submitted what he suspected was A.I.-generated work for an assignment on how the school’s honor code should view A.I.-generated work. He, too, has decided to return to blue books, and is pondering the logistics of oral exams. “Maybe we go all the way back to 450 B.C.,” he told me.
Flagging for you that students used AI to turn in papers on the school’s honor code.
The piece continues, with an interview with Dan Melzer, “the director of the first-year composition program at the University of California, Davis.” Included is:
Writing is hard, regardless of whether it’s a five-paragraph essay or a haiku, and it’s natural, especially when you’re a college student, to want to avoid hard work—this is why classes like Melzer’s are compulsory. “You can imagine that students really want to be there,” he joked.
If you’re reading this, I probably need not convince you that writing is hard. It is.
We move ahead:
Almost all the students I interviewed in the past few months described the same trajectory: from using A.I. to assist with organizing their thoughts to off-loading their thinking altogether. For some, it became something akin to social media, constantly open in the corner of the screen, a portal for distraction. This wasn’t like paying someone to write a paper for you—there was no social friction, no aura of illicit activity. Nor did it feel like sharing notes, or like passing off what you’d read in CliffsNotes or SparkNotes as your own analysis. There was no real time to reflect on questions of originality or honesty—the student basically became a project manager.
Please — I beg you — put these most recent few paragraphs of outstanding reporting together. Schools have stopped trying, adopting “hopeful resignation” while students are “off-loading their thinking altogether.”
And here we are — debating whether AI detectors “work,” humoring indulgent beard-scratching about what cheating “is” now. Meanwhile, Rome turns to cinder.
Back to the article. We return to students:
May, a sophomore at Georgetown, was initially resistant to using ChatGPT. “I don’t know if it was an ethics thing,” she said. “I just thought I could do the assignment better, and it wasn’t worth the time being saved.” But she began using it to proofread her essays, and then to generate cover letters, and now she uses it for “pretty much all” her classes.
And:
Kevin had worked as a teaching assistant for a mandatory course that first-year students take to acclimate to campus life. Writing assignments involved basic questions about students’ backgrounds, he told me, but they often used A.I. anyway. “I was very disturbed,” he said. He occasionally uses A.I. to help with translations for his advanced Arabic course, but he’s come to look down on those who rely heavily on it. “They almost forget that they have the ability to think,” he said.
No words.
And please read this:
But many students claim to be unbothered by A.I.’s mistakes. They appear nonchalant about the question of achievement, and even dissociated from their work, since it is only notionally theirs. Joseph, a Division I athlete at a Big Ten school, told me that he saw no issue with using ChatGPT for his classes, but he did make one exception: he wanted to experience his African-literature course “authentically,” because it involved his heritage. Alex, the N.Y.U. student, said that if one of his A.I. papers received a subpar grade his disappointment would be focussed on the fact that he’d spent twenty dollars on his subscription. August, a sophomore at Columbia studying computer science, told me about a class where she was required to compose a short lecture on a topic of her choosing. “It was a class where everyone was guaranteed an A, so I just put it in and I maybe edited like two words and submitted it,” she said. Her professor identified her essay as exemplary work, and she was asked to read from it to a class of two hundred students. “I was a little nervous,” she said. But then she realized, “If they don’t like it, it wasn’t me who wrote it, you know?”
This cannot be acceptable.
The piece continues with an interview of Barry Lam, who “teaches in the philosophy department at the University of California, Riverside.” Lam, the article says, finds AI use by his students:
a potential waste of everyone’s time. At the start of the semester, he has told students, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach.”
This is very reasonable. Meeting mockery and lack of effort with mockery and lack of effort. Except when you realize that students will earn a degree from the University of California for this beach time. When that sinks in, any rational person will have to ask what it is we’re doing here.
The article goes on to address the relative fiction that professors can design their way out of wholesale cheating:
Professors can reconceive of the classroom, but there is only so much we control. I lacked faith that educational institutions would ever regard new technologies as anything but inevitable. Colleges and universities, many of which had tried to curb A.I. use just a few semesters ago, rushed to partner with companies like OpenAI and Anthropic, deeming a product that didn’t exist four years ago essential to the future of school.
Like points before it, this first draft of history is accurate and damning.
Colleges and universities, which were supposed to be stalwarts of skepticism, beacons of rational inquiry, and places for gathering evidence rather than sprinting to evidence-free conclusions, did in fact go from trying to curb AI use to enthusiastically partnering with AI companies, at light speed. The main reason for this capitulation of values, this “hopeful resignation,” appears to be that the profit-seeking AI companies told them to — told them they’d be left behind if they continued to be anything but eager partners.
Anyway, the article concludes by revisiting the NYU students from the opening, some weeks after the initial sit-down conversations. One of the students:
just finished his finals, and estimated that he’d spent between thirty minutes and an hour composing two papers for his humanities classes. Without the assistance of Claude, it might have taken him around eight or nine hours. “I didn’t retain anything,” he wrote. “I couldn’t tell you the thesis for either paper hahhahaha.” He received an A-minus and a B-plus.
CalMatters Publishes an Embarrassment
I don’t understand what’s going on over at the publication CalMatters. And I’m not sure I need to.
What I do understand is that their recent piece on AI detection in higher education is so bad, so lacking in ethics and oversight, that it should be fully retracted and the publisher should apologize.
The new piece has the headline:
Costly and unreliable: AI and plagiarism detectors wreak havoc in higher ed
Before we get too deep, a bit of history and context.
In 2023, CalMatters ran another story on integrity and cheating. That piece was so bad, I named it the worst piece of integrity-related journalism of the year (see Issue 192 and Issue NY 23/24). It had several basic, easy-to-check factual errors and profoundly misrepresented the state of just about everything.
This most recent CalMatters piece is by the same writer who put together a terrible piece for an outlet called The Markup (see Issue 233). The headline on that piece was:
AI Detection Tools Falsely Accuse International Students of Cheating
Which is clickbait nonsense.
The same writer also wrote pieces at The Markup with these headlines: “He Wanted Privacy. His College Gave Him None.” and “Plagiarism Detection Tools Offer a False Sense of Accuracy.”
So, the author and publication come at this topic with some — cough — history of misrepresentation. Even so, what we got this time is beyond belief. If you care at all about honesty or journalism, it is heartbreaking.
I cannot go through every error, every misrepresentation, every bias. There is no room or time for that. I’ll highlight a few.
The first paragraph concedes that “many” students are using AI to cheat. Then there’s the second graph:
And as faculty members grapple with what this means for grading, tech companies have proved yet again that there’s money to be made from panic. Turnitin, a longtime leader in the plagiarism-detection market, released a new tool within six months of ChatGPT’s debut to identify AI-generated writing in students’ assignments.
Two things jump out to me from this early framing.
One, panic. Panic implies irrational overreaction, an unjustified response. With a sleight of hand, the piece pivots the problem away from cheating and toward the “panic” about it. Cheating — not as big a deal as we’re making it. Panic about cheating — the real problem.
Two, it’s fascinating that even if we concede that there is a “panic” over cheating, the ire of this piece is not aimed at the for-profit technology company that caused the panic. It’s instead aimed at the for-profit company trying to mitigate it. I mean, many people are robbing banks. But how dare a company profit by selling alarms and security cameras. It’s an outrage.
Unfortunately, the piece continues:
But the technology offers only a shadow of accurate detection: It highlights any matching text, whether properly cited or not; it flags everything that mirrors AI’s writing style, whether a student used AI inappropriately or not. And Turnitin licenses this technology to colleges while demanding “perpetual, irrevocable, non-exclusive, royalty-free, transferable and sublicensable” rights to student writing. The company has used these rights to build a massive database of student papers, which it uses to market the superiority of its existing products as well as build new ones, including the AI detector.
It should be clear from this that the author does not understand how these systems work or how teachers use them. I hate that I have to explain it.
If an AI detector flags a section of a paper that is properly cited, so what? The teacher or professor will see it, notice the correct citation, and move on. If the detector flags text that was AI but was used appropriately, same thing. The professor will see it, note it, and move on. Nothing happens. Nothing about any of that is a failure of the technology.
The second part, about Turnitin using student papers to build its detection archives and models, I have never liked. I get why it is, or was, important — to keep student B from copying the work of student A. But it is not good.
I think, however, that Turnitin allows professors and other end-users to turn that feature off, to keep papers out of Turnitin’s repository. If I am wrong, please correct me, and I will correct this. But if I am right, CalMatters didn’t mention it. They just went with “demanding” the rights.
Helpfully, CalMatters has finally nailed the real problem:
This investigation revealed institutions willing to renew Turnitin subscriptions year after year despite the cost, faulty technology and concerns about privacy and intellectual property raised by the company’s ever-expanding database of papers.
Schools are renewing Turnitin, even though it costs money and people have “concerns.” Got it.
But we go on:
Turnitin tries to make the gray area of academic dishonesty into something black-and-white, and many faculty members drive demand, searching for the promise of algorithmic accuracy.
Excuse me?
“Academic dishonesty” is a “gray area”? Someone needs to explain that. Or better yet, defend it.
They also need to explain how Turnitin is trying to make the issue black-and-white. I do not think, in the entire history of the company, Turnitin has ever declared anything to be honest or dishonest. Not once. This is entirely through the looking glass to the point of being incomprehensible.
The article also says this:
Turnitin’s default settings, once integrated, are to scan every assignment, not just those that professors suspect are plagiarized. Besides accelerating the growth of the company’s student-paper database, this puts the tool in front of faculty members who otherwise wouldn’t have used it.
So?
Teachers are incapable of accepting, processing, and using the information they want or need, I guess. If putting a tool in front of people who would not otherwise use it is a problem, let me introduce you to OpenAI.
Moving ahead:
When the pandemic shut down in-person instruction on campuses nationwide in 2020, fresh anxiety over academic integrity created another windfall for Turnitin.
Note the agent in this sentence is “anxiety,” not cheating.
So far, my issues with this article are about framing and tone, the way it casts panic as the problem instead of the cheating.
But these next parts are such bad journalism that they defy fitting criticism. The article quotes Jesse Stommel, whom it identifies as “an associate professor at the University of Denver.” It says, correctly, that he started criticizing Turnitin in 2011.
What it does not say about Stommel is that he has been critical of all types of assessment security, essentially forever. He opposed exam proctoring, plagiarism checking, and, of course, AI detection. Stommel is an advocate for “ungrading” — removing grades from education entirely. He’s appeared at multiple events with mega-cheating provider Course Hero, now Learneo (see Issue 29). In 2023, Stommel told Inside Higher Ed — who else? — that plagiarism detection was unreliable (see Issue 189). He said he:
agreed that plagiarism detection tools had been “plagued” by false positives and there was no reason that the same will not be true of AI detectors.
Before it even existed, in other words, Stommel was against it.
In that same article from 2023, Stommel also said it was a:
fact that “when students cheat, it’s usually unintentional or non-malicious”
So, let’s just say that this interview subject is not an unbiased observer. He’s the guy you go to if you want someone to criticize Turnitin. Which would be fine, I suppose, if CalMatters had quoted anyone who would defend the company on equal terms. They did not.
About Stommel, the CalMatters piece does say that his course syllabus includes:
“It is my commitment to you that I will not submit any of your work to Turnitin. Plagiarism-detection software like Turnitin monetizes student intellectual property and contributes to a culture of suspicion in education. I trust you. I trust that your work is your own.”
Again — he’s an anti-Turnitin, anti-assessment security advocate. And I’ll go ahead and predict that plagiarism rates in his classes are bumping up against the 100% mark.
The article goes on, sadly:
routine surveillance and violations of privacy are an unavoidable part of college life.
I was on a video camera when I used the checkout counter at Lowe’s this weekend. We are recorded nearly everywhere we exchange, or deal with, anything of even remote value — Uber, ATMs, gas stations, traffic cameras, fast-food joints all record everything. That’s not a part of college life; that’s a part of life. You don’t have to like it, but to single it out as a creepy feature of college in particular is, well, not true.
Entirely predictably, the piece veers into “false accusations,” a framing for which I am out of patience. It covers how Black students say they’re more likely to be falsely accused, and the “vulnerability of non-native English speakers subjected to these detectors.” Subjected to. Like it’s waterboarding.
Continuing:
Jasmine Ruys, who oversees student conduct cases at College of the Canyons, said she often sees students who think they were wrongly accused but unwittingly used AI.
“There are students who go to other colleges where that college purchases Grammarly, the pro version, installs it on their computers to help them and when they turn it in to our instructors, it gets caught by the AI detector,” Ruys said. “It’s hard because students think they’re doing the right thing and they’re getting caught up in it because [AI] is just embedded in everything.”
I shared that because if you’re using Grammarly, you are probably using AI. The detector is not wrong.
Ruys is also, by the way, the closest CalMatters gets to quoting an actual expert on academic integrity. She says that students think they were wrongly accused, not that they are.
The piece has an entire section with the title, “The illusion of a solution.” In it is this:
The [Turnitin] AI detector determines what portion of an assignment was probably generated by AI but leaves it to faculty members to puzzle through whether that AI-like text represents a violation of academic integrity.
Let me be calm.
Yes, that is exactly how AI detection works. It flags, a teacher decides. How is this a problem? What is the complaint here? No one wants the software to decide that any specific use of AI “represents a violation of academic integrity.”
Finally, and I promise I am nearly done, we return to Stommel:
Stommel, the University of Denver professor, said faculty members often believe cheating is on the rise and that detectors are one of the only ways to keep students honest. But he shares evidence that cheating rates have long remained largely flat, even post-ChatGPT. He also pans the use of Turnitin as a scare tactic. “We see this with the criminal-justice system,” he said. “Deterrence doesn’t actually work.” What does work, he argues, is building trusting relationships with students. “Turnitin immediately fractures that relationship with students.”
Cheating is on the rise, as much as it can be, considering it was already nearly universal. His evidence, which CalMatters just links to without comment, we covered in Issue 261.
The short version is that it was a survey of self-reported cheating among students at “more than 40 high schools.” And the survey did find that cheating rates did not go up after ChatGPT, probably because they were already at the ceiling:
some 60 to 70 percent of students said they had recently engaged in cheating
Self-reported. So, 60-70 is really what? 90-100? Cheating is flat because it has nowhere to grow. So, don’t panic.
And I don’t know how to point out to Mr. Stommel that he’s wrong about deterrence. The research on academic integrity and risk/reward is abundant, clear, and overwhelming. But, again, CalMatters says nothing — it just leaves the claim sitting there, as if it were credible.
If you followed me this far, the end of the piece is what destroys it. Beyond all doubt.
These are the final two paragraphs:
Sean Michael Morris, a frequent co-author of Stommel’s and a longtime educator, also tries to convince faculty members they don’t need Turnitin despite the company’s marketing, which champions its value to academic integrity.
“Ed tech does a good job of convincing you that it is huge, permanent, there, 100% necessary for education,” he said. “That’s its big lie.”
CalMatters does not inform its readers that Morris worked for Course Hero. He was the “VP of Academics” there for three years. Dude literally worked for a cheating company. CalMatters says only that he’s an author and a “longtime educator.”
Morris has, like Stommel, crusaded against every form of assessment security that’s rooted in technology — from exam proctoring to AI detection. I’ve written about Morris several times, such as in Issue 123:
Morris, you may remember, opposed academic integrity tools such as plagiarism detection and test proctoring (see Issue 90) and recently, conveniently, said out loud that he “didn’t like the idea of assigning blame to specific entities as to why or how students cheat” (see Issue 102).
If you’re up for it, jump over to Issue 219 to read about the time Morris appeared on a Course Hero panel with someone who likened remote test proctoring to eugenics, selective breeding, sterilization, genocide, Nazism, white supremacy, sexism, ableism, cis/heteronormativity, and xenophobia.
But CalMatters tells readers none of that. Or even that Morris worked at Course Hero. We only get that he hates Turnitin, which, for CalMatters, was the entire point.
An Introduction, A Start, With Kickstarter
Some readers have kindly asked about the new project I have started. Thank you.
It’s a project for writers and, downstream, for readers.
Rather than using AI detection to find suspected bad behavior, we’re using it to reward writers who do their own work. Those of us who put in the work — who respect and value the craft of human writing — need it.
The company is called Verify My Writing, and we’ve put together a webpage. There is not much on it now: just a short description, a place to sign up for updates, and our social media links.
More importantly, we’ve also launched a campaign on Kickstarter to help us build and launch the service. We’re trying to raise $15,000, in smaller contributions of $50, $100, and so on.
If, like me, you recognize the value and importance of human effort and creativity in human writing, I’d be honored if you’d help us by making a pledge.
What we’re doing is about writing in a general sense — not specifically about academic integrity, or even academic assignments. It’s unlikely that a student at any level would put in the time and investment to scan and verify their work. In fact, I may encourage them not to do that.
Instead, this is for writers of long-form journalism, books, screenplays, and academic research. We need a market signal for our editors, publishers, agents, and the general public.
Readers will benefit too, I believe, from a level of trust that what they’re reading is from a real human. In media, research, and everywhere, I think that’s important. Even vital.
I hope you think so too.
And thank you.