New York Times Seeks Bad Proctoring, Finds Probable Cheating
Plus, NCTA defines "proctoring." Plus, another contract cheating ad.
Issue 122
New York Times Writes on Remote Test Proctoring and Digital Learning
Issue 110 noted that the New York Times had issued an odd call, looking for people who were “mistakenly deemed cheaters” by “monitoring software.” At the time I’d said that the request was so biased, so misinformed that I could not wait to see the article.
But wait I had to. More than a month later, the New York Times published a long piece on remote exam proctoring and digital exams. And though the paper was looking for students who’d been falsely accused by “monitoring software,” it says everything to me that they weren’t able to find anyone - or at least that they didn’t report on a single such instance.
After a month, with the power and reach of the NYT - nothing.
In fact, the single student they did highlight, a high school student taking a dual-enrollment course at a community college, probably cheated. Which means the proctoring provider - Honorlock in this case - probably got it right.
Though the Times doesn’t say that, of course. They obviously wanted to tell a different story.
According to the paper, though, the highlighted student said she was,
wrongfully accused of academic dishonesty by an algorithm.
That’s in the first paragraph.
Later in the story, the Times reports that the student’s test conduct was “flagged” because she:
scrolls through [the first] four questions, appearing to look down after reading each one, once for as long as 10 seconds. She shifts slightly. She does not answer any of the questions during the 50-second clip.
And that:
What the artificial intelligence technology got right is that she looked down. But to do what? She could be staring at the table, a smartphone or notes. The video is ambiguous.
She could have been staring at a smartphone. Or notes.
The Times also reports that, after reviewing the video of the test session, the instructor and the school’s Dean determined she violated test rules and awarded her a “zero” on the test in question.
All that is to say that, in this case, we have a student who repeatedly looked down - once for as long as 10 seconds - over a nearly minute-long clip without answering a single question, behavior that almost certainly would have drawn the attention of a live proctor or teacher had the test been conducted in person. But then, with the benefit of a recording of the incident, two teaching authorities reviewed it and determined the conduct worthy of a sanction.
That’s how the system is supposed to work - suspicious behavior is detected, a human checks it, teachers make the call. That’s how it works during in-person testing, and I’ve never been sure why anyone expects the process to be different when a test is online. The only indication we have that this particular student did not cheat is that she said she didn’t and, frankly, denial is pretty predictable.
Moreover, if this is actually a case of inaccurate determination, it’s the teacher and Dean who made the error, not the software. As reported, the “technology got right” that the student was deviating from the established test rules.
My point is that, after a month of looking for someone to say they were inaccurately identified or unfairly treated by a remote proctoring system, The Times presented a student who was correctly flagged and, after review, probably justifiably disciplined by her teachers. The story does not say if the student pursued an appeal. It does note that she graduated.
And even though the process probably worked correctly in the example The Times chose, the story is nonetheless designed to give a very bad impression of exam proctoring, suggesting that proctoring technology is similar to:
surveillance methods used by law enforcement, employers and domestic abusers
What?
It quotes people who are simply not credible to comment on exam security. And the story gets factual, confirmable things completely wrong. It says, for example, that someone examined:
online proctoring software that medical students at Dartmouth College claimed had wrongly flagged them
The problem is that, as I have pointed out many times, the software in question at Dartmouth’s medical school was not proctoring software. It was not designed or intended to detect exam cheating. Dartmouth does not proctor exams - remote or in person. Period. They cannot, do not, and did not use “proctoring software.”
Even so, nothing from the Dartmouth medical school incident means the students in question did not cheat. They “claimed” to be wrongly flagged. But see Issue 33 - the New York Times itself wrote that,
some students may have cheated
Maybe they did. Dartmouth cannot know because they did not proctor their exams. That’s not evidence of bad proctoring software; it’s evidence of a bad proctoring process.
After all that, the story really is not as bad as it could have been, even with its considerable bias, missing examples and errors. Given that it’s the New York Times, it’s probably worth a read.
National College Testing Association Defines Test and Proctoring Terms
The National College Testing Association (NCTA) issued a series of definitions of terms common in testing, proctoring and academic integrity.
Surprisingly, it seems no one has tried to standardize them before. As such, the NCTA contribution is decidedly helpful.
Most notably, the NCTA says that a proctor is a person and, ergo, that “proctoring” an exam - whether remote or in person - must be done by a person. If the security and observation of an assessment session is not done by a human, in real time, the NCTA says it’s actually “monitoring,” not “proctoring.”
It’s not clear whether the NCTA distinguishes between “monitoring” in which a human proctor reviews a recording of the exam session after the fact and “monitoring” in which no human is involved in the exam process whatsoever - a distinction that turns not on human versus machine but on when the human engages.
Even so, having a standard set of definitions to use and discuss is a big, necessary advance.
Another Ad for Essay Mills
It would probably be possible, in every Issue, to share an example or two of ads for essay mills disguised as news articles. They are incredibly common.
I add them here from time to time partly because they can be amusing, but mostly as a reminder that the companies and organizations that sell cheating products and services are active every day - marketing, selling and engaging students. They are investing real dollars and real hours in finding customers and changing the thinking around misconduct.
This example, from an outlet called “OneSongMagazine,” is a good one. Its title is incredibly direct:
How to Buy an Online Essay
Like most such ads, it plays on student stress:
Purchasing an essay online is a good way to relieve your stress levels
And:
A quality writing service will offer a money-back guarantee, so you can rest easy that you’ll get what you paid for. You won’t have to deal with stress and cramming for exams or a high-paying job. Additionally, you can spend time socializing with friends and having a good time. While buying an essay online may seem like a petty choice, it can actually save you time in the long run.
Rest easy. Go have a good time. Save time. Don’t deal with stress.
These ads, the confusing panels and events with educators, the sponsorships of athletes and sports teams - they all share one goal: to convince students that cheating is easy, normal and smart.