A 1000% Increase in Serious Integrity Violations in Australia, Integrity Departments Overwhelmed
Yes, one thousand per cent. Plus, ChatGPT is "a machine for producing crap." Plus, Wells Fargo fires work cheaters.
Issue 302
Subscribe below to join 3,873 other smart people who get “The Cheat Sheet.” New issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 18 amazing people who are chipping in a few bucks via Patreon. Or joining the 39 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
Australia: Surge in Academic Cheating Overwhelms School Integrity Departments
This story is from Australia, but it fits just about everywhere.
From the coverage in the Sydney Morning Herald (SMH):
New figures reveal Sydney University recorded a 1000 per cent increase in serious academic cheating referred to the registrar between 2021 and 2023, with extra resources needed to get through a backlog of cases.
Also from the story:
Thousands of students have been accused of cheating and paying others to do their work as the record number of misconduct cases forces universities to beef up their investigation departments.
And:
Australia’s higher education watchdog, Tertiary Education Quality and Standards Agency (TEQSA), has warned that cheating companies run by criminal syndicates are becoming more aggressive in their pursuit of students and are even making threats against investigators.
Here is where I mention that most of the rest of the world does not have a higher education watchdog — or anyone — paying attention to cheating. In the United States, for example, most of the time, we just pretend cheating is not happening.
Back to Australia: the significant jump in the caseload is very likely linked to better detection methods and an increased willingness to address cheating. Those factors, in turn, were likely spurred by the chronic and massive cheating during the remote learning necessitated by the global pandemic.
At the same time, a dramatic increase in student cheating cannot be ruled out. The increase could well indicate both.
Either way, the numbers are bracing.
Consider this graphic, from the SMH:
It shows that in 2021, coming out of the pandemic, the university referred just 92 misconduct cases. Two years later, 1,038 cases were referred. This tells me the school is getting better and more active at protecting the integrity of its degrees. But it also tells me how badly the school was doing just two years before.
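For the record, the arithmetic behind the headline number holds up. Using the 92 and 1,038 figures from the SMH graphic:

$$\frac{1{,}038 - 92}{92} \times 100\% \approx 1{,}028\%$$

Call it 1000 per cent, give or take.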
It’s also worth noting that generative AI — ChatGPT — came along late in 2022. Though I am sure that’s a coincidence. Cough.
And the numbers above are “serious” misconduct, though I am not sure what that means.
As schools have gotten better at catching AI use, some persistent cheaters are turning to traditional essay mills and ghostwriters when their AI cheating does not sneak by.
SMH quotes Guy Curtis, an actual academic integrity expert in Australia:
“There are forums on contract cheating writers and I’ve seen them discuss that students have been caught for using AI and come back to have real people write things for them.”
Yup.
SMH also reports the cheating spikes are not limited to Sydney University:
At the University of Wollongong, substantiated allegations of academic misconduct rose almost 50 per cent in 2023 compared to 2022.
The university attributed much of the increase to a spike in misconduct on online exams and of the 526 matters, 406 resulted in a “low-level” outcome, and 120 in a “medium-level” outcome.
Substantiated allegations are up 50%. And “misconduct on online exams.” Imagine that.
Digging a bit deeper into the story, we also get this, from Sydney University:
It said it increased resources to manage the record number of academic integrity breaches flowing through from 6608 cases in 2022 and 5076 new alleged breaches in 2023.
The university recorded 940 contract cheating cases in 2023, up from 444 in 2022.
That’s nearly 12,000 cases in two years. Good gooey grapefruit!
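To show my work on that figure, using the two case counts quoted above:

$$6{,}608 + 5{,}076 = 11{,}684$$

Just shy of 12,000 in two calendar years.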
And it spells out the problem that all education institutions face — the complete inability to quickly, accurately, and fairly adjudicate so many cases of misconduct. Schools were simply not built to be massive hearing and judgement engines. It’s neither their mission nor their core competency.
Unfortunately, when schools see the costly and complex crush that can come with even a modest investment in quality assurance, they go the easy, lazy, permissive route of relying on warnings and honor codes, which don’t work.
Dealing with cheating, and accepting the challenges an integrity approach requires, is brave. Good for the University of Sydney and the University of Wollongong for putting their efforts in public view.
It makes me all the more skeptical when we get this from other schools, via the SMH:
University of [New South Wales], which did not provide 2023 data or answer specific questions, said its efforts were focused on educating students on the appropriate use of generative AI, the risks of engaging with contract cheating providers and penalties where academic misconduct is detected.
Teachers: ChatGPT is “A Machine for Producing Crap”
Times Higher Education (THE), which once again shows its leadership position on academic integrity, has a story on — what else? — cheating and ChatGPT.
THE starts:
The increased prevalence of students using ChatGPT to write essays should prompt a rethink about whether current policies encouraging “ethical” use of artificial intelligence are working, scholars have argued.
With marking season in full flow, lecturers have taken to social media in large numbers to complain about AI-generated content found in submitted work.
I wish, wish, wish that we could rethink “ethical use” policies of generative AI in teaching and learning. But, despite the evidence, I am sure we cannot. Too many people, too much money, have determined we must embrace it.
That’s an editorial aside.
The THE story recounts students submitting papers in which they neglected to remove the AI prompts from their introductions. It also offers:
“I had no idea how many would resort to it,” admitted one UK law professor.
Des Fitzgerald, professor of medical humanities and social sciences at University College Cork, told Times Higher Education that student use of AI had “gone totally mainstream” this year.
And:
Steve Fuller, professor of sociology at the University of Warwick, agreed that AI use had “become more noticeable” this year despite his students signing contracts saying they would not use it to write essays.
Making students promise to not cheat is pointless.
Also from the piece:
Having to mark such mediocre essays partly generated by AI is, however, a growing complaint among academics. Posting on X, Lancaster University economist Renaud Foucart said marking AI-generated essays “takes much more time to assess [because] I need to concentrate much more to cut through the amount of seemingly logical statements that are actually full of emptiness”.
And finally, in what is sure to be in the running for this year’s Quote of the Year, we get:
“My biggest issue [with AI] is less the moral issue about cheating but more what ChatGPT offers students,” Professor Fitzgerald added. “All it is capable of is [writing] bad essays made up of non-ideas and empty sentences. It’s not a machine for cheating; it’s a machine for producing crap.”
Funny because it’s true.
The problem is, as Professor Fuller points out in the article, crap is good enough for some students, so long as they pass:
“Students routinely commit errors of fact, reasoning and grammar [without ChatGPT], yet if their text touches enough bases with the assignment they’re likely to get somewhere in the low- to mid-60s. ChatGPT does a credible job at simulating such mediocrity, and that’s good enough for many of its student users,” he said.
Exactly. Many students will trade bad, passing grades for not having to do any work or any learning whatsoever. My question is why so many schools allow students to pass while clearly not meeting the minimum requirements. Effort, it seems to me, should be the bare minimum.
But anyway, professors are telling us — once again — that cheating with generative AI tools is a problem. I have no doubt they are right.
Wells Fargo Fires Employees for Faking Work
Spotted and shared by a regular reader and supporter — which is why I saw it — this story isn’t about academic cheating. I’m briefly sharing it anyway because I cannot help but see this story and academic misconduct as related.
The short story is that Wells Fargo fired more than a dozen people for using gadgets that simulate activity while the employees “worked” remotely. The company used work-monitoring software to clock employee activity and engagement when staff were not in the office, and some employees used cheap tools to fake being at their computers doing actual things:
Devices and software to imitate employee activity, sometimes known as “mouse movers” or “mouse jigglers,” took off during the pandemic-spurred work-from-home era, with people swapping tips for using them on social-media sites Reddit and TikTok. Such gadgets are available on Amazon.com for less than $20.
Reasonable people may disagree about the WFH zeitgeist and whether companies should be monitoring employees in these ways. But the thread pulls back to academic cheating because this Wells Fargo episode should make the obvious clear — people are going to cheat; they will try to take advantage and find shortcuts to rewards while bypassing actual work.
There’s probably no way to know, for example, how many of the dozen or so terminated employees cheated in college. But I’d wager it’s more than a healthy cut. And, we may presume, their academic cheating worked — they passed, graduated and got good jobs. And kept right on cheating.
And why not?
It’s a very short sidewalk from schools looking the other way on cheating and letting companies such as Wells Fargo clean up the mess to those companies questioning whether the college-to-workforce pipeline is reliable. If you want restaurants to keep buying your apples, you must remove the rotten ones.
Too harsh? I don’t care.
I’ll also note this line, from coverage of the terminations in Axios:
But the use of surveillance tools poses a risk at a time when it's getting harder and harder to keep workers happy, Axios' Javier E. David notes.
To me, that’s through the looking glass. Businesses, it seems, want to keep good workers happy, meaning the ones actually working. One way to do that is to not let colleagues collect paychecks for faking their work. Rewarding work, in other words, means rewarding actual work.
Whatever.
Good for Wells Fargo though. Saying cheating won’t be tolerated and rewarded is a simple good. It’s a shame it’s so rare.
Wells Fargo is upset about cheating by its employees? Surely this is irony. Cheating customers has been the Wells Fargo way of doing business, costing it billions in government-imposed fines. https://www.justice.gov/opa/pr/wells-fargo-agrees-pay-3-billion-resolve-criminal-and-civil-investigations-sales-practices
It had to pay $22 million for whistleblower retaliation. https://www.dol.gov/newsroom/releases/osha/osha20220901?lang=en
There are also stories about wages WF had to pay back to employees over wage theft. https://news.bloomberglaw.com/daily-labor-report/wells-fargo-settles-overtime-class-suit-for-35-million
Thus, management’s claims about being shocked to discover cheating (as Captain Renault was shocked, shocked to find gambling going on at Rick’s club in Casablanca) are a new level of deflection, or maybe finally some compliance. Too soon to tell which.
Let’s also find out about the surveillance Wells Fargo conducted against its employees, which has become common practice among employers. What unachievable metrics did WF set for its workers that perhaps drove some to cheat, like the unachievable sales metrics that pressured bank employees to falsely open customer accounts in the previous big WF scandal?
Call out cheating employees, yes, but let’s put these firings in the context of Wells Fargo’s own record.