“Grammarly helps me detect plagiarism percentage before submitting my work”
Plus, TechCrunch is late. And wrong. Again. Also, a note on Google Docs.
Issue 284
Subscribe below to join 3,799 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 18 amazing people who are chipping in a few bucks via Patreon. Or joining the 36 outstanding citizens who are now paid subscribers. Thank you!
A Survey on Generative AI Attitudes and Uses
Researchers at the University of Liverpool recently published a paper on a student survey and focus groups examining how students view generative AI tools such as ChatGPT and Grammarly, and how they may be using them.
The authors, all from the University of Liverpool, are:
Heather Johnston, Rebecca F. Wells, Elizabeth M. Shanks, Timothy Boey, and Bryony N. Parsons
I won’t spend too much time on the paper because I don’t think it’s very helpful in understanding or addressing academic integrity. It does, however, get one thing right: it correctly lumps Grammarly in with ChatGPT, QuillBot, and other AI-powered writing and rewriting tools. It’s also clear from the survey that people still view Grammarly and ChatGPT differently, even though they now do much the same thing. Most students think using Grammarly is fine, while they are considerably less comfortable with students using ChatGPT. That’s going to take some time to change, as more people learn what Grammarly actually does now.
Anyway, the survey does find that:
over half [of the students] had used or considered using these for academic purposes.
But that finding is not really helpful because the research team defined using these tools for “academic purposes” with these examples:
checking grammar and formatting references
So, for integrity purposes, not really enlightening. The paper also finds that:
Most students (48.5%) stated that they were unsupportive and thought it would be unfair for students to use these technologies and that those who do are effectively copying the work of others without giving credit
But again, that does not feel like news. Most people disapprove of cheating.
What I really want to share, though, is this quote from a student in one of the focus groups, regarding Grammarly:
“Grammarly helps me detect plagiarism percentage before submitting my work”
Let’s take two seconds here and think about why a student would find this helpful. Under what circumstances would someone be interested in knowing their plagiarism score? And why would having that information before you submit academic work be beneficial?
It’s a genuine mystery.
That Grammarly provides this service means it’s not just a simple grammar checker anymore.
TechCrunch: OpenAI Is Selling Apps to Bypass AI Detection
TechCrunch, which has made misreporting on academic integrity into an art form (see Issue 191), has a piece out this week on the OpenAI GPT Store, a marketplace in which independent developers can sell software tools that use or connect to OpenAI’s products such as ChatGPT.
As we wrote back in January in Issue 268, the GPT Store is littered with products that will erase ChatGPT’s fingerprints and make automated text undetectable.
Months later, TechCrunch is on the case, writing:
OpenAI’s terms explicitly prohibit developers from building GPTs that promote academic dishonesty. Yet the GPT Store is filled with GPTs suggesting they can bypass AI content detectors, including detectors sold to educators through plagiarism scanning platforms.
One GPT claims to be a “sophisticated” rephrasing tool “undetectable” by popular AI content detectors like Originality.ai and Copyleaks. Another, Humanizer Pro — ranked No. 2 in the Writing category on the GPT Store — says that it “humanizes” content to bypass AI detectors, maintaining a text’s “meaning and quality” while delivering a “100% human” score.
TechCrunch seems genuinely unaware that Copyleaks powers some of the systems that alter writing to bypass AI detection (see Issue 208).
Further, unable to reconcile the existence of companies building and selling AI detection bypass tools with its own reporting that AI detection does not work, TechCrunch writes:
Now, we’ve written before about how AI content detectors are largely bunk. Beyond our own tests, a number of academic studies demonstrate that they’re neither accurate nor reliable. However, it remains the case that OpenAI is allowing tools on the GPT Store that promote academically dishonest behavior — even if the behavior doesn’t have the intended outcome.
This is just wrong, of course.
The reality is much simpler: people do not build and sell products for no good reason. Good AI detection systems work. The illicit tools that promise to bypass those systems work too, which is why they exist.
It seems that TechCrunch and others in the “don’t use AI detectors” camp want to have it both ways. They insist that AI detectors are inaccurate and unreliable, and they also insist that AI detectors are useless because they can be easily bypassed. But if the detectors did not work in the first place, no one would need tools to defeat them. Both claims cannot be true, which is the awkward box TechCrunch has wandered into. So sure are they that AI detection does not work that they have to stretch to condemn what is so obviously wrong.
Bad as their reporting is, it nonetheless contains news. TechCrunch got a response from OpenAI about the bypass services it hosts. The OpenAI spokesperson said:
“GPTs that are for academic dishonesty, including cheating, are against our policy. This would include GPTs that are stated to be for circumventing academic integrity tools like plagiarism detectors. We see some GPTs that are for ‘humanizing’ text. We’re still learning from the real world use of these GPTs, but we understand there are many reasons why users might prefer to have AI-generated content that doesn’t ‘sound’ like AI.”
Classic two-step. Cheating is bad. But, you know, it may be OK because some people “might prefer” to hide the fact that they’re using AI. Way to take a stand.
But it’s typical. Here’s a repeat of what I wrote in Issue 268, linked above:
As I have written and maintained for some time now, OpenAI/ChatGPT is no friend of academic integrity — they are designed to plagiarize and deceive. The company has falsely and actively dismissed, discredited, and avoided any efforts to detect AI-generated text (see Issue 241). It has promised watermarking but done nothing to produce it. It has actively partnered with cheating companies (see Issue 203).
We will all wait to see whether OpenAI does anything to remove these products from its store.
Drafts Matter. Again.
Dr. Danya Ford, who appears to be a professor at Grayson College (TX), wrote a brief article in Faculty Focus on a solution she’s been trying for her written assessments — Google Docs.
The ability to see revisions and previous versions of documents is, Ford wrote, very helpful in determining authenticity and integrity.
The professor mishandles some of the basic information about integrity and AI detection, which is, unfortunately, all too common. Nonetheless, she is entirely right that being able to observe or uncover the process of writing can be not only informative but definitive in integrity cases. Many new and promising integrity solutions have moved into this space for that reason.
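For what it’s worth, the version history Ford relies on can also be pulled programmatically. Below is a minimal sketch, assuming Google Drive API v3 via the google-api-python-client library; the file ID and credentials are placeholders, and this is an illustration of the general capability, not anything Ford describes:

```python
# Minimal sketch: list a Google Doc's revision history via the Drive API v3.
# Assumes google-api-python-client is installed and `creds` holds valid OAuth
# credentials with read access to the file. DOC_FILE_ID is a placeholder.
from googleapiclient.discovery import build

DOC_FILE_ID = "your-document-id"  # hypothetical file ID

def list_revisions(creds, file_id):
    """Return (timestamp, editor) pairs for every saved revision of a file."""
    drive = build("drive", "v3", credentials=creds)
    revisions, page_token = [], None
    while True:
        resp = drive.revisions().list(
            fileId=file_id,
            fields="nextPageToken,revisions(id,modifiedTime,lastModifyingUser/displayName)",
            pageToken=page_token,
        ).execute()
        revisions.extend(resp.get("revisions", []))
        page_token = resp.get("nextPageToken")
        if not page_token:
            break
    return [
        (r["modifiedTime"],
         r.get("lastModifyingUser", {}).get("displayName", "unknown"))
        for r in revisions
    ]

# A document drafted over days shows a long timeline of saves; one pasted in
# near-final form may show only one or two, which is the signal Ford is after.
```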