The "Rise in Academic Misconduct" in Canada
Plus, a look at online oral exams. Plus, TechCrunch fails spectacularly.
Issue 191
The “Rise in Academic Misconduct” in Canada
The Globe and Mail had a story recently about cheating and academic fraud in Canada. It’s probably behind a paywall. But if you can get it, it’s well worth a read.
It starts with the story of a student at the University of Toronto who, it says, paid $60 for someone to take his first-year accounting exams. The paid “tutor” used the student’s credentials and logged in to an online exam. No one noticed.
It seems that the only reason anyone found out was that the paid fraudster, irked by a complaint about the test score, e-mailed the professor to turn in the student. And, the reporting shows,
When [the professor] confronted the student, he apologized and begged for leniency, blaming the strain of the pandemic.
Five days later, though, he hired someone to write his environmental science exam for $400, only to be caught by a teaching assistant.
And there it is.
To the credit of the University, the article says this particular student was suspended for five years.
Though I also must ask - how good is the University of Toronto's exam security if someone can impersonate a student and the only way the school finds out is when the impersonator confesses? It’s rhetorical.
And I obviously will not resist pointing out that “blaming the strain of the pandemic” was rationalization - an essential element in academic misconduct.
Maybe more significantly, the article broadens to include:
His case is just one of thousands of examples of a large and troubling trend exacerbated by the pandemic. More students appear to be breaking academic integrity rules than in the past, and more are getting caught.
At the University of Alberta, the number of academic discipline cases doubled in 2020-21 compared to the previous academic year, to more than a thousand. The number of cases also more than doubled at the University of Saskatchewan, and nearly doubled at McMaster.
I’m not sure how much of this is original or new reporting, but the trendline is incredibly consistent wherever you look - doubled, more than doubled, nearly doubled.
It continues:
At the University of British Columbia, more than twice as many students appeared before the president’s advisory committee on student discipline than in the year before. And at the University of Toronto and Queen’s University, cases doubled over the two-year period from 2019 to 2021.
More than twice as many, doubled.
And this, I think, is new:
At the University of Toronto, there were 3,092 cases in which sanctions were imposed on students in 2021-2022, a 15-per-cent drop compared to the year before. Still, that’s 95 per cent higher than in 2018-19, the year before COVID-19. And it represents more than 3 per cent of U of T’s student population.
Those, again, are just the cases in which sanctions were imposed and we know that more than 90% of academic integrity breaches go undetected, uncited, unsanctioned. So, consider that 3% accordingly.
The story - and it’s a really good story - also has this important nugget:
The contract cheating industry was estimated by scholars a few years ago to generate more than a billion dollars annually, worldwide. A more recent estimate by a Canadian academic, Sarah Elaine Eaton, puts the figure closer to US$21-billion annually.
Twenty-one billion seems right, especially if you include the likes of Chegg and Course Hero and Quizlet and Bartleby. In fact, including those players, I’d wager that $21 billion may be low.
The piece has a long section on professors reporting misconduct. Even when they find it, most don’t report it. It also touches on so-called diversion programs run by schools to keep academic integrity offenders away from formal hearings and harsh penalties:
One such program, run by the library at UBC Okanagan, had 335 per cent more referrals in 2020-21 than it did in the previous year, according to a report to the university’s senate.
There’s much more. The piece is long and good and, in my view, quite accurate.
A Look at Online Oral Exams
Times Higher Ed, which once again gives more time and space to academic integrity than any other publication, has a recent story about the use of online oral exams to prevent cheating.
The article is by Temesgen Kifle and Anthony Jacobs of the University of Queensland. It starts strongly:
The post-Covid rise of online classes and timed online exams raises the question: do non-invigilated online exams meet academic integrity requirements? The general answer is no because such assessments offer more opportunities for cheating and make it more difficult to identify occurrences of dishonesty.
In my view, that’s true. Online exams that are not proctored are so susceptible to fraud that they cannot meet academic integrity requirements.
The authors also say:
However, online proctoring software can both fail to identify suspicious behaviour (false negative) and flag non-suspicious behaviour (false positive).
That’s not really true, or it’s at least misleading. I do believe proctoring systems miss suspicious behavior that’s likely misconduct; there is research to support that. There’s less evidence that they flag non-suspicious behavior. And even when they do, a flag means nothing more than that further inquiry is in order. It’s also illogical to fault proctoring systems for both missing suspicious behavior and flagging the innocuous.
In any case, the authors go on to relay their experience delivering online oral exams in “a large postgraduate course” in which the online assessments were not invigilated or proctored. The enrollment was 293 students.
Interestingly, the instructors say they told students in advance that oral assessments would be required, specifically to verify the integrity of the online exams. They told students:
a 15- to 20-minute oral assessment will be organised to verify whether the work (project assignment) submitted has been completed by you
Those who failed could be referred for academic integrity inquiries, they told students.
My hunch is this warning alone cut down cheating significantly. We’ve seen similar policies reduce cheating, even when the warnings were bluffs.
Most of the struggle with the oral reviews, the writers said, was logistical. They used Zoom and had tutors administer the oral assessments, which is not great. But trying to do this with nearly 300 students would be a major challenge no matter what.
They report that of the 293 students interviewed, only eight were flagged, which is odd since they also report that:
As English was not most students’ first language, they struggled to answer the oral questions.
In the second semester, the professors decided to limit the oral security checks to those who scored well on the non-secure assessments:
So, instead of interviewing 354 students, we invited only 90 students to attend a 15-minute mandatory oral interview.
That seems smart.
Personally, I think these types of verbal assessments are going to be increasingly vital in verifying mastery or checking the authenticity of student work. Like nearly all forms of assessment, they are not cheat-proof. But they are considerably more difficult to compromise than other forms, if administered well.
The problem is that with class sizes of 354 and 293, that’s just not going to happen.
TechCrunch Fails Spectacularly While Trying to Say AI-detection Fails Spectacularly
A few days ago TechCrunch ran an article with this headline:
Most sites claiming to catch AI-written text fail spectacularly
Even the best of the bunch missed some
But the first problem is that they tested some of the absolute worst solutions out there:
OpenAI’s own classifier, AI Writing Check, GPTZero, Copyleaks, GPT Radar, CatchGPT and Originality.ai.
Junk they found from “a Google search.”
Then, instead of using text generated by ChatGPT - the product that’s in the name of three of the seven sites they tested and is literally made by another - they used a completely different text generator. That’s like testing security cameras to see whether they can spot check forging and then, when they don’t, declaring that they failed.
Incidentally, even before that, they misidentified GPTZero as “ChatZero.” I guess the TechCrunch editors were on vacation that day.
Anyway, even TechCrunch wrote it was:
admittedly not the most thorough approach
I think not.
Nonetheless, they go on to ask:
So why are AI text detectors so unreliable?
They mean that these few particularly bad AI detectors were unreliable in this complete nonsense of a test. That’s fair, I guess.
But based on this article, TechCrunch ought to avoid calling things unreliable.