348: ASU/GSV 2025
Plus, two things I heard at the ICAI annual conference. Plus, Pangram hosts a webinar. Plus, class notes.
Issue 348
Subscribe below to join 4,478 (+53) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, though patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year.
ASU/GSV and Normalizing Cheating
There are a few reasons I am down on ASU/GSV — the annual American education technology and investment conclave.
Most of them are minor. One is inexcusable — the organization’s insistence on normalizing and whitewashing cheating, while also investing in companies that profit from rampant, obvious academic misconduct.
On this, ASU/GSV is consistent.
A Recent History
In 2023, the conference featured Dan Rosensweig, then the CEO of Chegg, as a keynote speaker on its main stage (see Issue 200). In 2022, the organization’s newsletter circulated interviews with Chegg’s corporate leaders, essentially bragging about how much money they were making (see Issue 163). The group also gave its stage over to Chegg to announce Chegg’s partnership with OpenAI and ChatGPT (see Issue 203).
But the real sin is that GSV literally invests in cheating companies. You can see Issue 80 on that. Or just look at GSV’s investment portfolio. All the big cheating names are in there — Course Hero, QuillBot, Photomath.
If you’re new to the academic integrity universe, Course Hero makes Chegg look like Snow White. In a speech I gave once, I said that Chegg was Purdue Pharma and that, by comparison, Course Hero (now called Learneo) was a Mexican drug cartel. Both profit by selling illicit and harmful products, but one does so without even pretending.
In case you’re new or have forgotten, Course Hero was designated an academic fraud provider by Cisco (see Issue 42). The company was sued by a university for copyright violations, an accusation so obvious it needs no explanation (see Issue 252). Course Hero helped me — yes, me — explicitly cheat on my midterm (see Issue 97). When a professor found his students using Course Hero to cheat on their exams and asked the company to help identify them, Course Hero told him to go pound sand and get a court order (see Issue 102).
In 2023, Course Hero hosted a panel (see Issue 219) that was so offensive that, at the time, I wrote:
Well, Course Hero has done it - something so shameless and inexcusable that all I can do is share it and say, without reservation, that it should disqualify the company from future civil discourse on the topics of education or integrity.
Seriously, if you have a few minutes, go read Issue 219. Appalling is not even the right word.
Apparently, ASU/GSV never got the memo. Less than a year later, at their 2024 event, they put the co-founder of Course Hero on stage to discuss “the AI revolution.” He was joined by someone from Google and someone from Guild, another company backed by GSV.
This Year
I share all that because this year’s ASU/GSV Summit is coming up in two weeks, and they still have not gotten the memo.
Among this year’s sponsors are notable cheating providers such as Chegg, OpenAI, and Grammarly — innocuously listed alongside groups such as the Chan Zuckerberg Initiative and Western Governors University, which ought to guard their brands more judiciously.
Predictably, ASU/GSV has a panel featuring someone from Chegg. Ironically, it’s on “Building a Mission-Driven Brand.”
Personally, I cannot wait to hear brand advice from Chegg, which has lost 99.4% of its value in three years and was sued — more than once — for defrauding investors. The subhead of the panel is, “But what does it actually take to build a mission driven company and keep it on course through the inevitable storms?” I am not sure. But whatever it is, I am sure that Chegg does not know. Their stock is trading at seventy-five cents.
I’d also be delighted to hear what Chegg thinks its mission is.
But I digress.
ASU/GSV has a panel featuring both Grammarly and Learneo (Course Hero/QuillBot). With Accenture — a great example of exceptionally bad background work.
There’s also a full panel outright hawking Grammarly’s products. The title is:
Gain Insight into Student Writing in the AI Era with Grammarly Authorship
The very first line of the description is:
As AI-generated content becomes more prevalent, institutions must shift from detection-based approaches …
Grammarly says AI is inevitable. Turn off your detection systems.
I flat guarantee that someone on that stage is going to lie and say AI detection does not work, or that it’s risky to ruin students’ careers — or some other such nonsense. All you need to know is that AI detection is bad for their business. That is why they’re spending time and money to tell you that schools “must shift” from it.
Then there’s another panel with Grammarly. This one encourages us to focus “Beyond the Diploma” and it:
will uncover the key workplace trend educators need to be aware of to help future graduates navigate communication overload and harness AI responsibly
There’s a third panel with Grammarly. And a fourth — this one with someone from Western Governors University. Yikes. I just don’t know what to think when schools don’t do their own homework.
OpenAI is included in four panels. One, with the new Secretary of Education, the former wrestling entertainment executive, Linda McMahon. Over the past few years, nothing has been more destructive to the delivery and assessment of education than OpenAI/ChatGPT. Putting the company on the main stage, with the Secretary of Education, is a choice.
It’s a choice that says that ASU/GSV is about earnings, not education. And that’s fine.
At the International Center for Academic Integrity conference a few weeks ago, a few people asked me about ASU/GSV. I told them what I thought, which included that I think ASU/GSV serves an important role in the education business ecosystem. I just wish they cared about the services their companies sell, or the customers those companies claim to serve. There is zero chance they don’t know what Chegg does, or what Grammarly is doing, or what ChatGPT has done to teachers — and worse, to students. There’s zero evidence they will do anything about it, even though they could.
I do not mean to cast ASU/GSV as an Evil Empire. They invest in integrity solutions too, such as Turnitin. Feel free to be confused; it is confusing. It means GSV is invested in the mouse and the cat, which feels like a pretty clear conflict. But at least it shows they play both sides, I guess.
Also this year, ASU/GSV has invited academic integrity leader Dr. Tricia Bertram-Gallant to come to the show and sign her new book, as well as be on a panel or two. Although I would not be surprised if the folks at GSV were confused by the title of her book, “The Opposite of Cheating.” Confused or not, it’s a good sign that she’s there. And it’s important to note.
Personally, were I allowed in the building at ASU/GSV, I would not miss this panel for anything.
And, So What?
After all this epic rant, I’m disappointed in GSV because they could make choices that help students learn and prosper. But they don’t. They continue to support, and profit from, companies that actively undermine those goals.
GSV could divest from cheating companies. Or, as part owners, pressure them to stop selling cheating services. Or at least cooperate with educators who have been saying for years that these companies are pernicious and antithetical to learning. ASU/GSV could, at an absolute minimum, stop putting cheating companies on their stage, which helps launder their brands to people who don’t care enough to know more.
GSV has a real opportunity to be a leader, and model integrity and mission. So far, they have decided not to.
Two Troubling Anecdotes from the ICAI Conference
As many of you know, I was honored to speak at the International Center for Academic Integrity conference a week or so ago. It was great, and I recommend it strongly. The 2026 iteration is March 5-8, in Denver, CO.
I did not make it to as many separate sessions as I wanted to, but I bounced in and out of a few. In those sessions, I heard a few things I want to share.
I am not naming the people or the schools involved because I am not sure that those speaking knew they would be on the record, even though I am also not quoting them directly. Additionally, the people — and by proxy, the schools — who attend the ICAI are not the ones I worry about. It’s those who do not attend that give me pause. At least, I tell myself, the people at ICAI are aware that integrity takes work and are engaged in understanding and meeting the challenges.
Still, these two anecdotes and observations have left me breathless since I heard them. If these things are happening at schools that are trying to address credential theft, I am terrified about what is going on elsewhere.
Enough preamble.
An academic integrity staff member told a group of attendees that at their large, heavily online school, when Turnitin activated their AI detection technology, reporting caseloads spiked. That’s expected and understandable. And it shows, in part, that a great deal of inauthentic work was being done and turned in for grades before this school had the ability to see it. Again, no surprise.
The speaker did not say how much their caseload increased. But it was easy to infer that the jump was in the multiples.
The real issue, according to this telling, was that before the faculty had AI detection, the school’s responsibility case rate was above 90%. Which is to say that in more than nine in ten cases referred for suspected integrity violations, the student was found responsible for the violation, either judged so, or by admission.
And that is great, honestly. Research shows that certainty of consequence is a significant factor when considering misconduct (see Issue 305). In other words, having more than 90% of referrals result in an affirmative finding probably deters a significant amount of attempted misconduct. Of course, if the consequences are inconsequential, that’s different. But a high responsible rate is objectively a good thing.
More importantly, when Turnitin’s AI system went live, in addition to a spike in cases, the school’s responsible rate fell into the 30-40% range, the speaker said.
That too is predictable. It was a new tool — teachers did not know, were not necessarily told, how to use it. I’m sure that many instructors simply saw a high AI probability score and opened an integrity case based on that alone. And as this school was likely correct to conclude, an AI probability score may not be enough — by itself — to sustain an accusation of cheating. So, the school was kicking a ton of cases. Totally fair.
Here, I’d argue that having a student referred for using AI to fake their academic abilities and accomplishments — even if the case is dismissed — probably also deters future misconduct. For most people, being caught, being flagged at all, being scrutinized, being on someone’s radar, is enough to make that person think twice the next time. Not for everyone, but for some. Further, simply having it known that work is being checked for AI text is a real deterrent.
But, but. Seeing the reporting caseload increase significantly and their responsibility rate fall, the school decided to turn off the AI detection — to simply unplug it.
That, to me, is illogical to the very leading edge of insanity.
For one, if the school’s responsibility rate was, let’s say 35%, we can assume that at least some of the students in that number were detected and found responsible because of AI detection. Whatever portion that represented, the school has decided to let that go. Just let it happen — consequence free. Worse, as word gets around that no one is checking for AI anymore, as it absolutely has, the misuse rate is guaranteed to increase. The school seems to think that more AI cheating is worth having fewer cases and a higher responsible rate.
Also, for the record, a responsible rate of 35% does not mean the other 65% were not cheating.
But mostly, a spike in cases and a drop in convictions, to use an easier though not entirely accurate term, is a failure of training — of both faculty and the academic integrity office. Disconnecting the detection device solves neither issue.
Let me give an analogy, if I may.
I often equate AI detection technology to airport metal detection. Both scan a large quantity of items, flagging things that may be banned. When an alert occurs, a human does a close inspection.
In this analogy, the school said that its metal detector was alerting on a high number of bags, and while the staff were going through those bags, they were unable to substantiate serious threats. Or at least not enough to take serious action. Faced with this, the answer was not to train the staff to use the tool better, but to turn off the metal detector. I mean, we can’t spend all that time inspecting bags and finding silly hair dryers if we don’t scan the bags — am I right?
High five.
Moreover, even if the school never initiates a single case related to AI detection, wouldn’t you at least want to know what’s going on? I mean, would you not want some visibility, even blurry visibility, into student behavior? Would you not want to know if AI detection rates were double in engineering classes compared to economics classes? Would you be interested to figure out why?
Nope. Guess not. Can’t solve a problem if you won’t see a problem.
A staff member at a different school told folks at ICAI that their school had decided to disable their lockdown browser and proctoring service, Honorlock, for “low-stakes assessments.”
The speaker did not say so, but the implication was that most people assume that low-stakes assessments are low security risks, and that most cheating happens on high-stakes assessments, due to pressure and stress.
As I’ve written before, there’s very little evidence to support this idea, despite how many people accept it. In fact, I believe the opposite is true, that students are more likely to cheat on low-stakes assessments for the very reason that they are of little value (see Issue 274, Issue SE1, or Issue 287). If it’s not important to the teacher, it’s not important to the student. If you want to tell students to cheat, making assessments low-stakes and turning off the security is a great strategy.
But turning off Honorlock on low-stakes assessments is not what got me. What got me was that this speaker said, quite matter-of-factly, that the reason that the school turned off the cameras and security alarms was that they did not want to pay for it.
I believe it.
Every school’s “commitment to integrity” is not measured in policy; it’s measured in pennies and paychecks. Show me what you invest in, and I will show you what you care about.
This also fuels a cynical feeling I’ve had for some time now, but never really shared — at least not in public.
I suspect that many of the schools that have made a public show of turning off their AI detection technology by claiming it to be inaccurate, really just want to save the money. The public misconceptions about accuracy are an excuse, a justification that allows free virtue signaling. By turning off their security cameras, a school can be standing up for “fairness,” instead of having to accept that they would prefer not to spend money to protect the validity of their grades and degrees.
Honestly, turning off AI detection — or other security technology — over alleged inaccuracy never made any sense. Since I suspected that, in some cases, this was really about money, I wanted to be wrong. But hearing an academic integrity officer say that their school turned off integrity solutions to save money — well, here we are.
Adding quickly, I cannot verify this. I did not ask. And the school would never admit it, even were it true. It’s possible that the reasoning here — saving money — was just this speaker’s unfounded assumption, like mine. It’s also possible someone told them precisely that saving money was the reason. I cannot know.
But, as mentioned, I do believe it.
And in both cases, we have to do better.
Pangram Webinar on AI Detection, April 2
Buzz-worthy AI detection company Pangram is hosting a webinar on “The State of AI Detection.”
Details below:
Want to know the current state of AI detection?
Join us on Wednesday, April 2, 2025 from 12pm-1pm ET for a conversation with Bradley Emi, CTO of Pangram Labs, who will present a practical guide to working with AI detection tools. We will talk about the academic research on AI detection, teach you some of the linguistic traits of AI-generated writing so you can detect some of the key signs on your own, and finally go over ways to use AI detectors fairly and ethically in practice, alongside other methods of maintaining a standard of academic integrity in your classroom.
Class Notes
There was no Issue of The Cheat Sheet on Tuesday. Sorry about that. I just could not get to it. Not entirely to justify or make excuses — and because a few people asked me recently — academic integrity is not my profession. It’s a passion. And I do my best.
I’ve also decided, with persistent encouragement from a few of you, to start The Cheat Sheet Podcast, reviewing recent topics and interviewing some folks. I don’t have any idea what the cadence will be. Or even how to make that happen, from a production standpoint. I’m sharing because perhaps my fans will be excited, and I hope both of you are. But also because I’d appreciate advice or support. Did I mention I have no idea how to do this?
Have an event or doing something in academic integrity that you think people should know about? Send it to me; I will share it. It’s free.