Survey: By 3:1 Margin, College Leaders Say Cheating Has Gone Up
Plus, Anthropic blows my mind. Plus, class notes and a clarification.
Issue 338
Report: 95% of College Leaders Say AI Will Increase Academic Integrity Concerns
The American Association of Colleges and Universities (AAC&U) and Elon University issued a new report, a survey of “college and university leaders.” It focuses on (what else?) AI use.
The good news is that it asks about academic integrity. The bad news, well, the results are bad.
Some key findings, copied from the survey summary:
• School leaders say “student use of AI tools is nearly ubiquitous.”
• 59% of surveyed leaders say cheating has increased on their campus since GenAI tools have become widely available.
• 54% do not think their faculty are effective in recognizing GenAI-created content.
A bit further into the report, we get more detail:
• Cheating increase: 59% of these leaders report that cheating has increased on their campuses since GenAI tools have become widely available; 21% say it has increased a lot.
• Detection of GenAI content isn’t great: More than half of these leaders do not think their faculty are effective in recognizing GenAI-created content. Some 13% believe their faculty are “not at all effective” in spotting this kind of content, and 41% think their faculty are “not very effective.”
To repeat, because I think too many people are not getting the message — nearly six in ten college leaders say cheating has gone up since AI became a thing, and more than one in five say cheating has increased “a lot” after the release of generative AI.
And, no, faculty are not good at recognizing material that is generated by AI, at least not on their own.
As is often the case, when you look at the actual numbers in a survey, they are worse than the summary conveys. For example, the 59% who said cheating had increased is not 59% against 41% who said it had not. On that question, 22% said they “don’t know,” which is its own problem. The full picture is that 59% say there is more cheating, while 19% say it is “unchanged.” That’s 59 to 19, for the record.
On integrity, the survey also found that, when leaders were asked about anticipated negative outcomes of generative AI, the most common response was “Increase concerns about academic integrity.” The split was 95% to 4%; only 1% of respondents said integrity concerns would increase “not much.” Further, 56% of respondents said they were concerned “a lot.”
The fire alarm is buzzing, people. It has been for quite some time.
If you open the report, please also give a look at slide 16 on page 19. It presents four hypothetical student uses of AI and asks whether such a use would be considered “legitimate” or “this is using Gen AI tools to cheat.” It’s very, very interesting.
For those who don’t pull up the slide, here it is, shared with permission:
[Slide 16: four hypothetical student uses of AI, each rated “legitimate” or “this is using Gen AI tools to cheat”]
I’m not going to go deep here, except to say that the survey population is college presidents, deans, and other leaders, not teachers. I suspect these questions would draw different responses from teachers. I also suspect that, if that is true, it’s a problem.
Still, it is incomprehensible that just 24% of school leaders think asking AI what to write for an assignment, that is, asking AI for a “detailed outline” of a response, is cheating. That 51% say getting a writing outline from AI and using it in your academic work is “legitimate” is likewise incomprehensible. I cannot comprehend it.
If that is true, I regret to say that we are lost.
Maybe nothing I say, nothing anyone says, matters. But I will nonetheless place in the record that the AAC&U also appears manifestly out to lunch on the topic of integrity and cheating. In the foreword to the report, from the association’s president, we get:
The fact that 95% of the leaders surveyed are concerned about the impact of Generative AI on academic integrity, 92% worry about undermining deep learning, and 80% fear the exacerbation of existing inequities due to the digital divide points to the need for both democratizing opportunity by closing the skills gap and for building AI competencies.
What?
School leaders are nearly unanimous that AI is a cheating concern, and the AAC&U says the solution is “democratizing opportunity by closing the skills gap” and “building AI competencies.” That’s indescribably obtuse.
Not for nothing, that paragraph also says 92% of school leaders worry about AI undermining deep learning. But the AAC&U’s answer is to “close the skills gap” and “build competencies.” They cannot be serious.
In case anyone ever wonders how we got here, that is part of it. For the record.
Anthropic Blows My Mind
Anthropic is an AI company backed by Google. Its signature product is Claude.
Recently, a friend sent me a job posting at Anthropic for a Communications Lead, Trust and Safety, Policy Communications. They thought I’d be a good fit, given where I am in the AI/integrity universe. And I would be.
In reviewing the application process, I was blown away by this:
While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.
They then ask applicants to write why they want to work there — in 200 to 400 words. Without AI.
An AI company — a big AI company — wants to understand you and evaluate your “non-AI-assisted communication skills.”
First, that’s amazing. Good for them, one million percent. Understanding actual people is essential in every engagement we have. And communication skills, actual communication skills, are really important. It’s a delight to see an AI company be clear about this.
It also raises two points I feel compelled to share.
One, as I have said before, if you’re seeking a job and your primary skill is asking AI what to write, companies can probably do that already; they don’t need you. An AI company, it seems, especially does not need you. They probably already know how to ask AI what to say.
Two, I’d like everyone in higher education who has said something like, “AI is here and people will have to use AI in their jobs and so we have to teach students to use AI so they can get those jobs,” to sit down. A student may — or may not — be asked to use AI in their job. But AI is not necessarily going to help them get one. If they cannot write 200 words in their own voice without AI, good luck. It may not be universal, but in at least this case, the failure to force students to write on their own is dooming their chances of getting a job.
Let me say again — this is an AI company that expects you to be able to communicate, on your own, like a person. So, I beg — teachers and other education leaders, please take note.
Still, blown away.
Class Notes and a Clarification
There won’t be an Issue of The Cheat Sheet this coming Thursday or the following Tuesday, as I’ll be out of town. We will pick back up on Thursday, February 6.
The reporter who wrote the Minnesota Public Radio piece we featured in Issue 336 wrote in to clarify a point about the student in the story, the one who said he was expelled for AI use even though he did not do it. In our review, we said the student “was accused of using AI in academic work” three times before the final accusation and dismissal. The original reporting says there were “three other instances of accusations raised by others against him.” The clarification is that not all three of those accusations were related to using AI; the reporter says just one of the three was AI-related. I have no information on the other two. I apologize for the lack of clarity.