The Cheat Sheet is a free, short, regular e-mail newsletter with the most relevant news, academic research and material on academic integrity and cheating. To get it, all you have to do is subscribe.
To see past newsletter updates visit: The Cheat Sheet
The Cheat Sheet is published by Derek Newton, who has written on cheating and academic integrity in numerous publications including The Atlantic, The Washington Post and Forbes. He served as VP at a public policy think tank and wrote speeches for a Florida Governor.
I expect you've seen Arslan Akram's (Superior University, Pakistan) research, published last fall in Advances in Machine Learning & Artificial Intelligence.
Akram tested a handful of AI detection tools on thousands of samples of human-generated and AI-generated content. Several tools reported remarkably high precision (this is the most important thing, imo) in detecting AI-generated content. Originality's F1 score was 97. Ninety-seven!
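For readers who don't live in confusion matrices: precision is the share of texts a detector flags as AI that really were AI-written (the number that matters most if you worry about false accusations), while F1 folds precision and recall into a single score. Here's a minimal sketch of how those numbers relate; the counts are made up for illustration and are not from Akram's paper.

def precision(tp: int, fp: int) -> float:
    """Share of flagged texts that really were AI-written."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Share of AI-written texts the detector actually caught."""
    return tp / (tp + fn)

def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall -- the single score studies like this report."""
    return 2 * p * r / (p + r)

# Hypothetical detector run on a mixed set of samples (counts invented for the sketch):
tp, fp, fn = 485, 15, 20
p, r = precision(tp, fp), recall(tp, fn)
print(f"precision={p:.2f}  recall={r:.2f}  F1={f1(p, r):.2f}")

Run it and you get a precision of 0.97 with an F1 just under that, which is roughly the neighborhood the study is describing when it reports a 97.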
That said, it's not a perfect study. The AI-generated content was produced by "a wide range of AI models," which included GPT-4 alongside more primitive LLMs. Akram didn't test human-modified AI-generated content, which I expect is a common practice. The study also left out two of the more popular detection tools: Turnitin and Copyleaks.
https://www.researchgate.net/profile/Arslan-Akram-10/publication/375225294_An_Empirical_Study_of_AI-Generated_Text_Detection_Tools/links/6543ab89bb8314438dbf0279/An-Empirical-Study-of-AI-Generated-Text-Detection-Tools.pdf
To be fair, ChatGPT didn't create any of those tools; users did. There are also a billion "girlfriend" GPTs, and it didn't create them either. OpenAI has rules against "Engaging in or promoting academic dishonesty" and "fostering romantic companionship," but it's not clear the company is enforcing them.
https://openai.com/policies/usage-policies