Two More Flatly Terrible Stories on Academic Misconduct
Plus, writing a 1,500-word essay in one hour with AI, fooling detection systems, and earning a C+. Plus, a sales pitch stumbles on the truth. Plus, a class note.
Issue 192
To join the 2,982 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 11 amazing people who are chipping in a few bucks a month via Patreon. Or joining the two outstanding citizens who are now paid subscribers! I don’t know how that works, to be honest. But I thank you!
Two More Flatly Awful Stories About Cheating That Somehow Made It to Print
Trying not to spend too much time on this: two recent stories on academic misconduct have been written and, indefensibly, actually published.
One is by CalMatters, the other from HigherEd Dive. Both otherwise credible outlets appear to be cutting overhead by skipping fact-checkers and editors.
HigherEd Dive
The HigherEd Dive piece is from Emma Ross, who I assume is a student at Mesa Community College in Arizona. So maybe she gets half a pass. But that should not extend to the publisher, who really should know better.
First, there’s the preposterous headline:
Proctoring companies erode trust between students and faculty with claims of widespread cheating
To begin, I don’t know what more evidence we need that there is widespread cheating - there is widespread cheating. These are not claims. They are data. Arithmetic. And nearly all of it is coming not from proctoring companies but from schools. They are telling us it’s a major problem and what’s clear is that some people simply prefer not to listen.
And I’ve seen this line of lunacy before - the claim that trying to stop cheating erodes trust, not the cheating itself. I think of this every time I see a video camera in a gas station or bulletproof glass in a bank - how dare they not trust me.
Does anyone think for even a single second that banks actually want to invest in silent alarms or dye-packs or security guards or video cameras? Or do we think they feel it necessary because without such measures they’d be robbed all the time? Seeing a video camera in a bank and thinking that “erodes trust” between business and customer is - I don’t know - not based in reality.
I am sure the corner convenience store would like to trust me and take down their cameras and remove their door locks. I completely understand why they don’t do that.
This circus of a piece goes on to say proctoring companies:
have a notorious record of being unreliable, discriminatory, and more recently, privacy-violating.
Really? Name them. Tell me how.
She does not. She simply says it’s so.
She goes on to cite the court case in Ohio in which a federal judge sided with a student in saying that a pre-test room scan, as it was done by his school, was a violation of the Fourth Amendment. She does not say that the outcome was utterly inconsequential (see Issue 188) and not related at all to even a single other school or student. Or even to proctoring.
And she goes further, getting the facts of the case wrong, saying the student:
sued the university for requiring him to submit to an Honorlock room scan
That’s a basic, fact-checkable error. The school inexplicably used Zoom to scan the room, not Honorlock. Had the school used Honorlock, it’s possible the student would have had no case.
Then, clearly not having read the case or any credible summary, our author continues:
There is little consistency in how professors proctor online exams, and students may not expect or be ready for a full room scan ahead of tests, creating the potential that private information might be exposed.
This makes me crazy.
The lack of consistency in pre-test room scans was a major factor in why the student won his case. Had the school simply required room scans for all remote tests, the school probably would have prevailed. This argument, in other words, actually speaks to why schools should require room scans, not why they’re bad.
Moreover, the student in this Ohio case had two hours’ notice before his scan. He could have secured any personal information, as he was advised to do. He chose not to.
Later, our writer repeats that proctoring companies:
continue to push the narrative that cheating is widespread
It is.
Mercifully, though no less inaccurately, our author concludes with these gems:
Many professors and colleges have fallen for their effective sales pitches and use artificial intelligence flags as if they are undeniable evidence of cheating — instead of reviewing the facts and exercising caution before accusing students. What’s more, extensive research has shown time and again these proctoring services disproportionately affect students with disabilities and students of color, reporting them for factors out of their control.
Whether it be flagging excessive movement for students with Tourette syndrome who have motor tics or flagging darker skin complexion for students of color because it’s difficult for the AI to track, proctoring software continues to demonstrate inaccuracies that inflate cases of cheating.
First, no proctoring company anywhere, ever, has made a pitch that AI can provide “undeniable evidence of cheating” so as to replace or stand in for a review and thoughtful determination by a professor. This is pure fantasy. If you know of such a pitch from a proctoring provider, e-mail me. I’ll apologize.
And, no. “Extensive research” has not “shown time and again” that proctoring services report students of color or those with disabilities disproportionately. Fantasy. As I’ve written countless times, any such discriminatory errors happen in the ID-check process, if they happen at all (see Issue 173). And even if they did, a flag is not an accusation of misconduct. This is pretty basic stuff.
There is zero evidence that proctoring companies flag students with Tourette syndrome or dark skin for misconduct more than others. Same offer - if you know of this “extensive evidence”, e-mail me. I’ll apologize.
Though the lack of evidence did not stop HigherEd Dive from publishing the assertion that “extensive research” shows this. Inexcusable. Just no better word for it.
CalMatters
The CalMatters piece is by Itzel Luna, who appears to be on a journalism fellowship there.
Her piece has the headline:
California colleges still use remote proctoring despite court decision
She’s referring, of course, to the Cleveland State case related to a pre-test room scan (see Issue 188).
Here, please repeat after me - this case did not impact a single school or a single student anywhere in the world aside from the student who sued, and it turned on a very specific, very bizarre set of circumstances. That California colleges should somehow stop remote proctoring as a result is - I am not sure - divorced from reality.
The author incorrectly describes the case as:
the first major legal test to the use of remote proctoring services in higher education
False.
The case was not about exam proctoring at all. First, the school used Zoom to scan this student’s room, not a remote proctoring provider. Second, the case was entirely about a pre-test room scan - checking the test environment, before the exam proctoring started, for disallowed aids such as notes, cell phones, or tutors in the room passing him answers. It did not touch at all on “remote proctoring services.”
She continues that the Judge:
agreed in August that the room scanning was unconstitutional
Again, no. The Judge said that the scan, as done by this school for this student in this way, was a violation. Nothing more. “Room scanning” remains just fine so long as it’s not done the way Cleveland State did it - which the school says they don’t even do anymore.
“Yet,” she says:
some California schools are still using e-proctoring software that includes room scans.
Why shouldn’t they? They are both legal and a sound deterrent.
As an aside, I note again, the use of “e-proctoring” is a tell. Only a very specific group of ardent anti-exam-security folks use this term. It’s a kind of linguistic fingerprint. And it’s no surprise to see it here, a mere six paragraphs before the story links to one of these anti-security groups with the text:
Critics have called e-proctoring software innately racist, flawed, and an invasion of student privacy.
Uh-huh. So there it is. Innately racist. Sure. What a joke.
In like paragraph 35, the author corrects and clarifies that the case in Ohio had nothing to do with proctoring, addressing only the pre-test room scan. She says the case:
focused specifically on room scanning and did not address whether e-proctoring as a whole is permissible.
True. But her points were already made.
I’d like to be able to explain how something so inaccurate, misleading and biased makes it to publication but I can’t. I just can’t.
A Transparent Tech Sales Pitch Makes a Strong Point Anyway
UK-based feNews ran an op-ed this week from Chibeza Agley, who it describes as Co-founder and CEO at OBRIZUM, which in turn describes itself as a “digital learning transformation partner.”
To start, like so many education technology CEOs have done for so long, Agley presents things as if he’d invented them:
In this new future of education, students and learners who are able to demonstrate a higher level of understanding can accelerate their speed to competency, perhaps fast-tracking their way through the course and graduate at a much quicker rate than was previously achievable.
He’s describing personalized learning and competency-based assessment, which have been around for decades.
Further he says:
The moment that universities become detached from the labour market, or where good performance in education no longer yields a competitive advantage, the system becomes unviable.
This is, obviously, nonsense - designed to scare school leaders into buying more technology.
But those are my personal annoyances.
His article is nominally about how new technology - AI - and exam cheating are changing education, which they are. To this, he shares this fundamental observation:
This, combined with the rise of ‘click through’ culture within universities, not only undermines the integrity of the entire education system, but also leads to further questions around whether the degrees being completed today are as credible as those that came before.
It’s not clear at all how personalized learning or competency-based education models will solve this. Unregulated and unsecured, competency programs are even more susceptible to cheating temptations.
Still, Agley is right about cheating. It, along with a “click-through” education culture, will destroy the credibility of degrees. Some would say they’re already doing that.
From New Zealand: I Cheated on a University Essay using ChatGPT
Press in New Zealand has the story of someone who, as the headline says, used ChatGPT to write an essay and passed.
The details are not quite as sexy as the headline - the professor was in on the attempted fraud, which invalidates the whole thing. And the fraudster got a C+. Though, for many students lured by the ease of AI text generation, they’d probably take it.
The story says:
I wrote a university essay in an hour and then handed it in to the lecturer who assigned it for marking.
Despite doing absolutely no preparation, and writing on a subject I've never studied, I wasn’t only able to finish the 1500-word essay, I passed with a C+.
Of note, it also says:
Worryingly, TurnItIn (the tool most universities currently use to detect plagiarism) found no concerns, and of two specialised AI-detection tools, one estimated a 99.98% chance the copy was written by a human
So, Turnitin has not made its AI-detection tool available yet. I hear it’s coming. But that Turnitin can’t find AI writing, as of today, should not be “worrying.” They have not yet claimed they can. They’ve said they will - but not yet.
But that another detection tool completely missed it does not surprise me at all, especially since our “writer” seems to have run the original AI writing through a paraphraser, “a free AI writing assistant.” That will flummox most of the detection tools, though not the better ones.
And once again, it seems that ChatGPT is prone to simply making stuff up. Even so, the faker wrote:
But I was a morally bankrupt student with an hour before hand-in time, so I included the example anyway – ChatGPT seemed to know what it was talking about.
But the kicker here is this, from the professor:
She was not surprised I was able to produce the essay I did within an hour, and said if I had been a real first year student, the warning signs would not have been enough to bring me in for an interview, or to accuse me of cheating.
If you needed further evidence that professors need good AI detectors to help them flag suspicious content for follow-ups, there it is.
It’s already possible to pass an assignment with AI, even to fool some of the detection systems and professors. This is, to suggest modestly, a condition in academic assessment that cannot be permitted to persist.
Class Note
If you read “The Cheat Sheet” and you’re an investor in education technologies or companies - either personally or institutionally - please let me know. I’d like to know you. A reply e-mail reaches me.
Thank you, as always, for reading and sharing.