Interview with an Essay Mill Founder
Plus, a too LONG piece on that research from the Netherlands that’s making the rounds.
Issue 153
Magazine Interviews Essay Mill Founder, Writer
Input Magazine, which shares a nameplate with some big publication brands, recently published an interview with the founder of essay mill Killer Papers. It’s pretty good.
Among the newsworthy parts is that “Kevin” - fake name - says his company takes in seven figures a year and that he started writing essays for money in high school. From the article:
By August 2017, he was charging $10 per page, and he’d earned enough to quit his full-time job, which he hated. “That first year, I made $50,000 from the business,” he says.
And:
He now has around 60 writers who produce between 200 and a thousand papers a month, with prices ranging from $17.50 to $32 per page.
The other part - not newsworthy, just annoying - is that Killer Papers claims it’s not an essay mill:
“Killer Papers is a tutoring service,” reads a disclaimer on the site. “KillerPapers.org custom projects are not intended to be forwarded as finalized work for academic credit as they are only strictly meant to be used for research and study purposes. Killer Papers does not endorse or condone any type of plagiarism.”
Uh huh.
The interview goes on to include comments from a student who says he:
paid more than $1,000 for around 10 papers. “I know I spent a pretty penny, and that’s probably because I’m fortunate enough to have family support me on the financial side,” he says.
That’s one of the things that upsets me most about cheating profiteers and industrial cheating providers - the inequity they foster. Rich kids get breaks, while the students who actually do the work get punished.
The Input piece also interviews a writer at Killer Papers who quit her teaching job to write essays full time. The piece reports:
When she’s asked how she feels about the morality of what she does, Courtney demurs. “We’re informing clients before they sign up that we’re not encouraging them to plagiarize anything,” she says. “And we’re not encouraging them to submit any work.” Kevin, Killer Papers’ founder, makes similar points. “They’re assigned limited usage rights,” he says. “Basically, it’s for inspiration or study purposes. You’re not allowed to reuse it or anything like that.”
Founder Kevin says he hopes to move Killer Papers into “bona-fide tutoring services” - as the magazine describes it.
One company - small or mid-sized by essay mill standards - making at least a million dollars a year, writing hundreds of papers a month.
That Dutch Study on Fooling Proctorio
There’s been some buzz in the past week or two about a year-old Dutch study in which six students were able to cheat, undetected by proctoring provider Proctorio.
Someone tweeted about it, others shared and commented, and now it’s filtering through some dubious news media, such as Vice.
The paper is by Laura Bergmans, Nacir Bouali, Marloes Luttikhuis and Arend Rensink of the University of Twente.
Like most studies it’s misunderstood and, like all studies, it has limitations - some of them serious. But the biggest issue is that it’s being discussed and shared out of its larger context. It’s being presented as a study showing that proctoring does not catch cheating. That’s incorrect.
Join me.
Limitations
First the study’s limitations, which the authors credibly acknowledge:
The group of “cheating” students was six. The entire study, including the control and comparison groups, was 30 students. But the headline that Proctorio missed cheating is based on six.
The students were not actually cheating. They were taking an exam that did not count, so there was no penalty for being caught. That means the “cheating” students were not acting like students who were actually cheating.
As the students were not graded for their effort, there was less at stake for them than in a real test. This might affect their stress level, especially for the group of cheaters, being lower than at an actual test and hence making it harder to detect cheats.
The experiment was conducted using just one company - Proctorio - and just one type of remote proctoring system, an automated AI-only system that flags conduct which may indicate misconduct.
we have conducted our experiment using a single tool, Proctorio, and nevertheless have used the results to draw conclusions about the general principle of online proctoring.
The authors say they believe, “that this is justified because Proctorio is representative of the cutting edge in tooling of this kind.”
The students self-selected into the experiment - they volunteered to try to beat proctoring - which introduces potential motivation bias.
This could lead to participants that have a strong opinion about the proctoring, with increasing motivation to successfully cheat.
These were not regular students. They were computer science majors who were encouraged by the authors to “focus on technical cheat methods.”
Our experimental student group consisted of only Computer Science students. These are certain to be more technically proficient than the average student, hence this might have implications for the external validity.
The students were allowed to discuss their “technical cheat methods” with other students before the experiment. That could happen ahead of actual cheating. But given the illicit nature of cheating, it’s probably unlikely that students would compare notes on how best to cheat before a real exam.
In a real situation, it might be less likely that potential cheaters seek each other out
First, major credit to the researchers for raising these issues themselves.
Even so, these are not insignificant limitations. What we have is a study in which the active group anchoring the finding - not catching cheating - is just six students. These students were atypical: they were encouraged to use “technical” methods, allowed to collaborate in advance, and not behaving as an actual test-taker would. Moreover, the test was run on a single system, which the authors concede they:
nevertheless have used the results to draw conclusions about the general principle of online proctoring.
I’m no research expert but that feels like a good number of limitations. Maybe too many to draw conclusions such as proctoring not catching cheating.
Unreported
In addition to missing many of the study’s limitations, many of its contrary findings have simply been overlooked. Of those, this one seems big:
About 75% of students state that Proctorio is a suitable option for remote assessment.
Twenty percent disagreed.
And this feels big too:
students are (in a clear majority) of the opinion that the use of Proctorio will prevent cheating.
Citing existing research on the effectiveness of remote proctoring - much of which we’ve reviewed in The Cheat Sheet - the authors say:
it confirms that the use of online proctoring has a preventive effect, as was also suggested in our own student survey
Even after trying to cheat proctoring, a majority of students found it “suitable” and agreed that it “will prevent cheating.” That seems important, but has been largely absent from the conversation.
I also did not see it reported anywhere that, in this experiment, Proctorio produced zero false flags. Of the 24 students who did not cheat, Proctorio flagged none - not even the ones who were instructed to “act nervous” but not actually cheat. In fact, it was a human reviewer, reviewing recordings of the sessions, who flagged a nervous non-cheater as cheating.
There were really two tests here - one to see if Proctorio would flag cheating-like behavior and one to see if it would incorrectly flag nervous non-cheating students. That seems like a study designed to produce one bad outcome or another. But even so, the Proctorio system clearly passed one of those tests. No one mentioned it.
Granted, according to the researchers, Proctorio did not flag anyone at all. Looking at their data, by the way, I am not sure that’s true. Still, given the countless stories we’ve seen about automated proctoring systems falsely accusing students, I think it’s worth noting that there were no false accusations here - not even when students were essentially told to try to generate them.
I guess my point is that this same study, limitations and all, could easily have headlined: more research supports that proctoring prevents cheating. Or: in a test, automated proctoring did not incorrectly flag even “nervous” students.
I’m being hyperbolic. But you get my point.
Missing Context
This is probably the biggest reason this study needs a deep breath - which means I probably should not have put it so far down. In any case, there’s some key context missing here, information even the researchers may not have known. Though I am betting they did.
Proctoring providers have been aware for some time that there’s a very specific thing you can do on your computer, a thing that has been nearly impossible for proctoring technology to detect. A year ago, it was impossible. It’s still a major challenge, but newer fixes have dented its success rate.
For obvious reasons, the proctoring companies and academic integrity folks don’t like to talk about it.
But since it’s less of a sure-fire cheat now, and because it’s in this research paper anyway, the cheat is virtual machines. Segregating your computer into two virtual computers - so the proctoring software runs in one and cannot even see the other - had been nearly impossible for remote proctoring companies to catch. Two years ago it was kryptonite.
It’s not easy to do, mind you. I certainly have no idea how to do it. But computer science majors do. And given a challenge to defeat proctoring software and a chance to discuss and plan their tactics in advance, venture a guess as to what many of our six “cheaters” did in this study. Yup, they used kryptonite. They created virtual machines.
From the paper:
several students used virtual machines
Several, of six.
By my count, at least four of the six used either a “virtual box” or “virtual desktop” and then “cheated.” It’s little wonder that Proctorio didn’t catch it. And if you’re honest, you’d have to confess that doing what they did is kind of cheating at cheating.
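For the technically curious, here is a minimal, illustrative Python sketch of how this cat-and-mouse works - my own example, emphatically not Proctorio’s actual detection code, which is not public. It checks two generic virtual machine fingerprints: the CPU “hypervisor” flag that Linux guests expose, and the MAC address prefixes registered to virtualization vendors.

```python
# Illustrative sketch only - not any vendor's actual detection code.
# These are the kinds of cheap fingerprints a detection tool can check
# when guessing whether it is running inside a virtual machine.
import platform
import uuid

# MAC address prefixes (OUIs) registered to common hypervisor vendors.
# Short list for illustration; real tools maintain far longer ones.
VM_MAC_PREFIXES = {
    "080027",                      # VirtualBox
    "000569", "000C29", "005056",  # VMware
    "00155D",                      # Hyper-V
}

def looks_like_vm() -> bool:
    """Return True if simple heuristics suggest a hypervisor is present."""
    # Heuristic 1: Linux guests expose a "hypervisor" CPU flag.
    if platform.system() == "Linux":
        try:
            with open("/proc/cpuinfo") as f:
                if "hypervisor" in f.read():
                    return True
        except OSError:
            pass

    # Heuristic 2: the machine's MAC prefix belongs to a VM vendor.
    mac = f"{uuid.getnode():012X}"
    return mac[:6] in VM_MAC_PREFIXES

if __name__ == "__main__":
    print("VM suspected" if looks_like_vm() else "no obvious VM markers")
```

Run inside a stock VirtualBox guest, this should print “VM suspected”; on most physical machines it won’t. Note how shallow these checks are - and how easily a determined user can scrub those markers - which is a big part of why virtual machines stayed kryptonite for as long as they did.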
Here again is one of the stated limitations from the authors:
Our experimental student group consisted of only Computer Science students. These are certain to be more technically proficient than the average student, hence this might have implications for the external validity.
Exactly. I’m not sure how many typical students would know to do that, let alone how to do it. Or how many would have the confidence to try it with real consequences on the line. My gut says not too many.
Taken together, it shows that a small group of tech-savvy students who volunteered to try to bypass proctoring software - and were encouraged to do so with technology - did. And they did it with a nearly foolproof hack known to technology types, in a setting where there was no penalty for failure.
Honestly, I am not sure what that proves.
Not All Cheating, Not At All
The thing is, the researchers kind of know that part. They write about proctoring that:
Granted that such “casual cheats” are prevented, what remains are the “technical cheats” such as the ones employed by our participants
This concedes that what was tested here was not general cheating, or as they say “casual cheats,” but a specific kind of cheating. Noting that their technology cheaters “mostly selected somewhat similar approaches” - creating virtual machines - the authors continue:
There are clearly some forms of cheating which would be so easy to detect using online proctoring — like sitting next to each other and openly collaborating — that they are automatically prevented, and in fact were not even tried out by our group of cheaters.
Alright, I have to scream. But I will have to wait because they also write:
There might be other cheat methods that were not tried out, to which our observations are therefore not directly applicable. We did ask the participants to focus on technical cheat methods because from a prior, more superficial check it had already become apparent that more traditional methods, such as the use of cheat sheets, are hard to detect with proctoring software.
First, I am not sure that makes sense. Due to a “more superficial check,” they decided that “more traditional methods” of cheating were not caught. So, when doing an actual study to see if proctoring works, they decided not to try those? What? That’s suspicious. Especially when they say they also did not try things that would be “so easy to detect.”
Ruling out some types of cheating because they’re so easy to find, and ruling out others because a “more superficial check” told you they were too hard to find - that’s an odd place to land. And it means, quite clearly, that this study did not leave students to their own cheating ideas, as normal students would be. This study actively, by design, did not test the way many - even most - students try to cheat.
Now, one student did use their phone to take photos during the exam and texted them to a friend. That’s pretty traditional. And Proctorio did not catch it. That’s bad. Then again, neither did the human reviewer.
Still, this study actively bypassed “more traditional methods” of cheating to turn six motivated computer science students loose with a directive to “focus on technical cheat methods.”
When you really look for it, this is what the paper actually found, in the authors’ own words:
Proctorio cannot reliably (or in some cases not at all) detect technical cheats that Bachelor Computer Science students can come up with
That’s it. I would have added the word “six” between “that” and “Bachelor.” But even without that, the above is the entire finding.
And yet this study is being cited to support the claim that all of “proctoring” somehow does not catch “cheating” - full stop.
I’ll scream now.
Almost Done
Two final, faster points, then I’ll shut up.
As mentioned, the study authors repeatedly concede that remote proctoring has a preventive effect on cheating - that it stops cheating. They say:
Any such effect will have to stem from the perception of students that the chance of getting caught is nevertheless non-zero.
That’s pretty clear - proctoring works because it presents a risk of detection, if nothing else. And since everyone agrees that preventing cheating is better than catching it, I’m not sure what’s left to discuss.
Point two is that it’s really, really bad to use Proctorio’s system as a blunt proxy for proctoring generally. The system tested here, nearly two years ago, used AI alone to flag deviations and is widely regarded as the baseline model of remote proctoring. Some proctoring companies have abandoned AI-only flagging entirely. Even two years ago there were better systems, not to mention that what Proctorio does now is better than what it was running then - these systems have evolved and improved.
That more than a year ago, six specific students foiled a system that no one uses anymore is - I am not sure what.
I will leave you with this, from the paper:
This is not the core topic of this paper, and merits a much longer discussion, but let us at least suggest one alternative that may be worth considering: live online proctoring, with a human invigilator watching over a limited group of students, and no recording.
They’re wrong about the recording part. That’s essential, for everyone’s protection.
But it seems that the authors cannot possibly believe that “proctoring” is ineffective if they actually suggest live proctoring. It’s not proctoring that they find at fault - it’s the kind of proctoring, when subjected to their highly unusual, very atypical test.
My point of all points is that this study in no way says that “remote proctoring” does not “catch cheating.” It simply does not say that. And I wish people would actually read it.
Class Notes
Give me a Break - The Cheat Sheet will be on vacation for two weeks. The next Issue will be next Tuesday. After that, the next Issue will be Thursday, October 6. Probably.
Policy Project - I’m trying to collect, review, and analyze academic integrity policies. The goal is to share some innovative approaches, best practices, the clearest language, and up-to-date procedures - with a focus on remote testing and proctoring, and on plagiarism.
So, if you know of an academic integrity policy or practice guide that you think is good or innovative or outstanding in some way and would share it, please do. A reply e-mail reaches me. Thank you.