There's Something in the Water at University of Texas, Austin
Plus, update on Grammarly and North Georgia. Plus, accounting firms ban use of AI in job applications.
Issue 282
Subscribe below to join 3,788 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 17 amazing people who are chipping in a few bucks via Patreon. Or joining the 35 outstanding citizens who are now paid subscribers. Thank you!
Something in the Water in Austin
In Issue 180, I shared that the University of Texas at Austin had not responded to repeated queries from me about the school reportedly shutting off its AI detection system.
Coincidentally, after I published that update — or lack of an update — the school responded. That reply was, in part:
Yes, the University’s Academic Affairs and Academic Technology teams advised faculty not to use Turnitin as it was not sufficiently accurate to warrant its use and turned off the AI detection tool in the platform. The University continues to monitor the development of these tools and have offered several resources and recommendations to faculty around generative AI
I’m going to avoid debating the accuracy of AI systems here, though I think the evidence of the accuracy of the good systems is compelling (see Issue 250 for just one example).
Instead, I will try to point out the unbelievable irony of an institution supposedly dedicated to research and inquiry deciding it does not want information — would rather literally turn it off than consider it. And mention briefly how banning detection systems shows shocking distrust of educators by implying that faculty cannot help but blindly follow an AI similarity report.
But mostly, I want to linger a bit on what appears to be going on at UT Austin. Because, whatever it is, it’s weird. And not in the good way.
For example, UT Austin was one of the only schools in the entire world to report that incidents of cheating went down during the pandemic (see Issue 50), while other schools were reporting increases of 2x, 3x, even 7x.
As everyone likes to point out, cases of reported cheating are a poor proxy for actual cheating. If cases go up, it does not necessarily mean that cheating went up. The increase could be driven by better detection and/or more consistent reporting. At the same time, knowing what we know about cheating trends, cases going down almost certainly does not mean cheating went down. A decrease in cases, even more so than any reported increase, is very likely correlated to a change in detection and reporting — especially during the pandemic’s shift to online learning and assessment.
Still, UT Austin said their academic misconduct case numbers went down during the pandemic.
At the time, the school said that the number of reported incidents went down because they’d gotten tough on cheating before the pandemic. In other words, they were already good enough to deal with a cheating surge, so it’s not strange that their report numbers did not spike like everyone else’s did. As NPR reported at the time:
And at least one school, the University of Texas at Austin, found that reports of academic misconduct cases actually declined during the pandemic. Katie McGee, the executive director for student conduct and academic integrity there, explains that before the pandemic, UT-Austin had toughened its ability, through software, to detect cheating.
Software to detect cheating — you don’t say.
The problem is, that’s probably just untrue.
That same year, in 2021, UT Austin released a report that criticized its own cheating-detection software. In fact, the report from the school’s “Academic Integrity Committee about Online Testing and Assessment” found that their assessment proctoring software had resulted in just 27 reported cases of cheating. Twenty-seven. From a published enrollment of more than 50,000. And only 13 of those 27 cases were upheld, according to the report. So, the committee recommended — wait for it — not using the software. Faculty were advised to “find alternatives” to remote exam proctoring during the pandemic.
From that UT Austin committee report:
To be clear, the small number of students whose cases of academic integrity were upheld does not mean that there were not more widespread instances of inappropriate behavior on quizzes and tests in the online environment last year—only that the methods we currently use to proctor online quizzes and exams are not sufficient to detect academic integrity violations.
Yup - it’s not that people were not cheating at UT Austin — it’s that the school’s detection systems were “not sufficient” to catch it. And the school’s proposed solution was not to get better at detecting cheating, it was to stop trying. One, that makes perfect sense. Two, see a pattern?
My point, in case I’m being too subtle, is that UT Austin seems to be using a Texas version of Don’t Ask, Don’t Tell in which the school just does not want to know about cheating. They’re in active avoidance of information, preferring to affirmatively steer faculty away from detecting it.
And, mostly just because it still irks me, I’m compelled to throw in here that it was a professor at UT Austin who, in 2022, wrote an entire piece in Inside Higher Ed with the title, “AI-Generated Essays Are Nothing to Worry About” (see Issue 159).
Yes, it’s unfair to judge the mindset of an entire institution on one paper from one professor. I concede that. But sometimes, just once in a while, the apple does land squarely under the tree.
All in all, given the evidence we saw in the school’s numbers during the pandemic, given its decision to avoid proctoring its online assessments, given its decision to turn off AI detection — it’s clear to me that UT Austin simply does not care about protecting its academic integrity. Or, honestly, care to even know how big a problem it has.
UT Austin’s ostrich approach to its own integrity is ironic to the point of near humor when you remember that the school was one of the plaintiffs asking the Texas Supreme Court for the power to revoke degrees when cheating is discovered (see Issue 156) — a power they ultimately won. In that legal case, the state, on behalf of UT Austin, argued that a degree from the school would be deeply compromised if people understood that it could be cheated:
Ultimately, the universities argued that disregarding this authority would weaken their “reputations and the value of degrees conferred upon their students.”
You mean that letting people who cheated have degrees from UT Austin would damage the school’s reputation? Fascinating.
In thinking this through, I first thought — since UT can now revoke a degree for cheating, they’re going to be pretty busy. But then I thought — no, no they won’t.
More on the Grammarly, North Georgia, TikTok Student
Remember the student at the University of North Georgia who was accused of misconduct and reportedly suspended for, she says, using Grammarly on her assignment? For a refresher, see Issue 263.
As the story continues to circulate in the media, Grammarly has apparently turned up the rhetoric to deflect attention from the fact that it is a generative AI company whose tools can be — probably should be — considered cheating. Instead, the company is now publicly saying that the case in Georgia was a failure of the AI detection software developed by Turnitin.
Maybe that was inevitable. I mean, if they want students to keep using their software, what else could they really say?
This clip from a TV station in Atlanta says:
Now, we're hearing from leaders at Grammarly who say it was a misunderstanding due to faulty AI detection software.
Grammarly outright blames the other guys:
"She used Grammarly as if she had a tutor helping her look at spelling, and grammar, and syntax, and overall clarity of her writing," said Jenny Maxwell, the head of Grammarly for Education.
Maxwell said while the company does have generative AI capabilities, which can create new content on its own, that's not the tool Stevens used. She said the faulty flagging was likely due to issues with AI detection systems.
"Plenty of studies have shown tools like this flag text as AI-generated even when no AI —Grammarly's or otherwise! — was used," a spokesperson from Grammarly added.
And:
"AI detection as a whole, as a software entity, is faulty," Maxwell said.
Pretty convenient, if you ask me.
The TV story also reports:
Now, Grammarly has partnered with [the student] to create a series of educational videos on Grammarly-usage in schools.
Very convenient.
“Big Four” UK Accounting Firms Ban Use of AI in Job Assessments
Yahoo! Finance has the story that the “big four” accounting firms in the UK have reportedly banned applicants from using AI in their job applications. The opening paragraph is:
Job seekers have been banned from using ChatGPT and other AI tools to write their applications amid fears it will help them cheat the system.
A little more detail:
The crackdown on AI applications has been driven by fears the tools can be used to unfairly help people improve their odds.
Job hunters applying to KPMG and Deloitte must now confirm they have finished online tests without external tools such as AI.
PwC said it is reviewing applications to check for activity, which “undermines the integrity” of its recruitment operation and will take action against rule breakers.
A PwC spokesman said: “While AI, including GenAI, can be useful in research, we tell candidates they should not use these tools during any assessment.”
BDO, the UK’s fifth largest accountancy firm, said graduates and school leavers were “strictly prohibited” from using AI-driven platforms such as ChatGPT under recently updated application guidelines.
The mid-tier accounting firm, which hired nearly 600 trainees last year, has installed plagiarism checkers that review phrases and paragraphs and cross checks them against typical AI responses.
BDO’s AI detection tools also flag duplicated answers across several candidates and check exceptional test scores against the time it takes for applicants to complete online exams.
That’s the first instance I’ve seen of an employer using an AI checker in the application process. But I am sure it will not be the last. And it makes university decisions to abandon AI detection seem wildly out of touch.
Also, these policies may come with a side of irony since many of these same big accounting firms have been sanctioned and fined for allowing their employees to cheat on certification exams (see Issue 176). Though maybe it’s not irony at all. Maybe it’s reactive prevention: these firms are now investing in keeping shortcutters off their payrolls in the first place.
Class Note:
I’m pleased to share that I will be speaking at the Oklahoma Association of Testing Professionals (OATP) conference in September.