Turnitin Turns on AI Text Technology
Plus, ASU/GSV loves Chegg. Plus, some incredible denial on the newest EdSurge pod.
Issue 200
To join the 3,163 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 13 amazing people who are chipping in a few bucks a month via Patreon. Or joining the seven outstanding citizens who are now paid subscribers! I don’t know how that works, to be honest. But I thank you!
Turnitin’s AI Checker Goes Live
By the time you read this, Turnitin - probably the world’s largest academic integrity company - will be officially in the AI-detection game.
The company announced it will activate its new AI feature inside its existing systems, immediately reaching a staggering 62 million students and more than 2 million instructors. Wow.
Turnitin, as you likely know, is not the only game in town on AI detection, but it is the only one with anywhere near this kind of reach.
Most importantly, having AI alert systems in the hands of that many people in that many classrooms is a big step in stopping the inappropriate use of AI writing systems. That’s because the best solution to academic cheating is preventing it. Updating the cliché - an ounce of deterrence is worth a pound of detection. With AI-detection systems in so many schools, the risk of being caught now goes from zero to not zero. As we know from research, not checking for cheating is seen as inviting it. With detection in place, students will think twice - or at least they should.
Side note: Predictably, Inside Higher Ed botched its coverage of the new AI-detection feature from Turnitin.
First, they refer to it being in “preview.” Nope. It’s on - for 62 million students.
IHE then offers space for people to criticize the speed with which Turnitin released the feature, saying it was too fast - ignoring that Turnitin said this product has been in development for more than two years and that the company was the very last one to release its AI-detection product. Other companies have had detection products available and in use in schools literally for months, but Turnitin was too fast. Sure.
But the big foul here is that when there are two things on the table - cheating and efforts to stop cheating - Inside Higher Ed reliably finds problems with the latter. IHE says the new anti-cheating product has “academics worried.” Seriously, three-quarters of their coverage is about what’s wrong with this effort to stop cheating.
Here, I remind everyone that IHE has advertised and promoted cheating providers and repeatedly downplayed or openly dismissed cheating (see Issue 78 or Issue 49 or Issue 102 or Issue 195). It’s a pattern.
ASU/GSV and C-h-e-g-g
If you’re not familiar with ASU/GSV, I’ve described it as the smoke-filled room of education technology and venture capital. It’s where the deal makers and deal seekers go to find one another. The ASU is Arizona State University. The GSV is an investment firm.
GSV has notable investments in several cheating companies including Photomath, CourseHero, Learneo and Quillbot, plus several other companies that look and feel too much like they’re also in the cheating business.
Then this week, ASU/GSV announced their featured entertainment/speakers for a night of the conference. Spotted by a friend of “The Cheat Sheet,” the anchor speaker for ASU/GSV on Monday night is Dan Rosensweig, the CEO and President of Chegg.
Having the CEO of a company that’s facing numerous investor lawsuits (see Issue 167) and that has lost a jaw-dropping 85% of its peak stock value is an interesting choice for a keynote speaker - especially to a room full of founders and investors.
Speaking of things disappearing, it’s fitting that Chegg is sharing the stage that night with a magician. No, really.
Also on the bill is Darryl “DMC” McDaniels, of Run-DMC fame. McDaniels spoke at a CourseHero event a few years ago (see Issue 35).
I have never fully understood why Arizona State would want its brand reputation tangled up with the likes of Chegg and CourseHero. At least GSV is consistent, I guess.
Staggering Hubris, Yale and a Question - from the EdSurge Podcast
EdSurge released a new podcast episode recently with the title:
Inside the Quest to Detect (and Tame) ChatGPT
It’s a good and necessary topic and one we’ll be discussing for a long, long time to come.
The new offering features interviews with Sal Khan, the founder of Khan Academy, which, whatever. And Eric Wang, the VP of AI at Turnitin, which is timely and well chosen. Also interviewed was Edward Tian, the creator of the AI detector GPTZero, which makes me want to clear off a nice space on my desk and bang my head there repeatedly.
But it’s the interview with the other guest - “Alfred Guy, director of undergraduate writing at Yale University” - that’s simply stunning. It’s worth the entire episode by itself. The Wang part is highly recommended. But the parts with Guy - which begin at about 19:20 - are can’t-miss.
He begins:
I think this might come out a little arrogant but I feel unlikely to use a ChatGPT detector because what I understand, they’re able to do is to measure the statistical likelihood that a large language model has produced the text and in a sense to recognize what the output is, as something a machine would write. My belief is, if I can’t tell that the output is not good enough, then I oughtn’t to be teaching students to do this kind of work. So it will be my dissatisfaction with the output that will drive my feedback to the student.
He’s right. That came out a little arrogant. Maybe more than a little. I can’t comment on whether he should or should not be teaching.
But we’re not done. He goes on:
You know, like all universities, Yale pays some attention to the threat of plagiarism. We want students to do their own work. We have some fairness concerns.
Some attention.
And:
Our experience has been, and we have some data to back this up, students generally do their own work here.
Sure. I’ve got data too.
The student paper at Yale surveyed students in 2021 (see Issue 104) and found that:
28.57 percent of respondents reported committing academic dishonesty during their time at Yale.
Nearly 30% admitted to academic dishonesty. More than half of Yale students admitted to cheating in science classes. A Yale student told the paper that cheating was “very ethical” because the classes were hard.
The same story reported that in 2019, Yale had 30 cases of academic misconduct. Not 30 percent, thirty. Total. It also quotes the Chair of the English Department at Yale:
I’m glad to say I haven’t encountered academic dishonesty among my students in English
Want more? Sure you do.
In April 2022, I covered this news from Yale in Issue 106:
this week the student newspaper carried news about 81 students in a single Introduction to Biological Anthropology class who were referred to the school’s academic integrity committee for misconduct. There were 136 students in the class. That’s 60%.
Yale gave an open-book, open-note, unproctored exam and 60% of the class was caught cheating. At the time, I wrote:
The other bullet point from this Yale debacle is that, if you look for cheating, you’re likely to find it. The only reason educators can honestly say that cheating doesn’t exist is because they don’t want to find it.
Which brings us back to Mr. Guy, who does not want to use an AI-detector and says that “generally” students do their own work at Yale. Sure. Denial, as they say, is not just a river in Egypt.
Guy goes on:
I feel like that when ChatGPT can do work that’s so good that I would need a machine to help me find it, I would need to have my teaching better and not my detection better because threats and punishment have not so far been a very important part of the incentive for students doing their own work.
Here, I can say with absolute certainty that this Guy has no idea what he’s talking about. None. There’s a metaphoric mountain of academic evidence and research disproving this. I understand why people don’t want to believe it, but that does not make it go away.
Moreover, I’m not sure what the link is between AI getting good enough to fool this very confident teacher - which it more than likely already is - and teaching better. If students are using AI to submit work and you’re not catching it, that is not a failure of teaching. It’s a failure to even try to catch it. You could; you just don’t want to.
And finally, my question - if you refuse to use the detection tools that are available, how will you know if the AI is fooling you?
Anyway, this episode is good and also incredible.