(388): Submission: On Proctoring
Summer submissions are open. Allie Long of Caveon shares one.
Issue 388
Subscribe below to join 4,789 (+23) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, though patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
As you likely know, the primary writer and publisher of The Cheat Sheet is on summer break until August 12 or 14. Until then, readers and others are invited to submit articles, commentary, press information, product updates — anything related to academic integrity. Contributions can be submitted to Derek (at) NovemberGroup (dot) net
Below is an unedited submission from Allie Long, Brand + Creative Manager at Caveon, who wrote:
“I’ve been in the testing space since 2016, working in corporate training and certification with GE before moving over to the vendor side to work in Quality Assurance in the testing SaaS space. I was trained as a journalist at the Walter Cronkite School of Journalism and Mass Communication at Arizona State, with a master’s in mass communication from the same institution.”
Hi there!
I’m Allie Long (long time listener, first time caller, as it were). I’m a big fan of The Cheat Sheet. I’ve been working in the testing space since 2016, and I’ve spent the last seven years with the world’s leading company dedicated entirely to exam security, Caveon.
For those among your readers who’ve never heard of us, our work is to empower testing programs to deliver secure, valid, and fair exams. In essence, we help programs build security in from development through delivery, doing things like data forensics, secure exam design, web patrol, test administration, incident management, and more.
One of the greatest impediments to Caveon’s mission, however, is a global over-reliance on proctoring. This is why I’m writing to you and your readers.
Traditional proctoring is expensive, invasive, unpleasant, and ineffective.
To complete a remotely proctored, high-stakes exam, virtually all test takers are forced under the watchful gaze of a stranger in their private space. To avoid being unfairly flagged for cheating by AI bots, some test takers try not to move their eyes the wrong way or (God forbid) do something neurospicy like stim with their hands or talk to themselves. Others fear a family member might barge into the testing space, resulting in a terminated test session. Technical glitches routinely interrupt sessions, compounding the anxiety.
On the other side of the screen? Testing providers pay through the nose to staff humans at a 1:1 live proctoring ratio. The kicker is that those proctors, through no fault of their own, are not even remotely equipped (or able) to do the job we’re asking of them: protect the very integrity of our tests and the validity of the scores those tests yield.
If you find yourself reluctant to accept or acknowledge the real downsides of proctoring, I encourage you to read this ridiculously in-depth article on the topic by our founder and CEO, Dr. David Foster. It’s an informative manifesto of sorts, one that was necessary to explain such a complex philosophical position to an industry built around the pillar that is proctoring.
TL;DR? Here’s the gist: our current proctoring model has us stuck between over-surveillance and under-protection.
As an industry, we’re in a bit of a scary space. We are painfully aware of all the risks AI poses to the integrity of what we do, yet unable to visualize a better path forward.
So the question becomes: How do we protect exam integrity and test score validity without sacrificing fairness, dignity, privacy, or test-taker trust?
On August 20, Caveon is hosting a live, virtual launch to unveil our answer to the world. We call it Observer.
Our strategy was to resolve this two-pronged problem:
How do we remove the clunky, inaccessible barriers to success that are making testing harder for average test takers (most of which are introduced during the proctoring process)?
How do we do this while also ensuring that we are still protecting valuable test content and maintaining test score validity of some of society’s most important exams?
Because we’re a gaggle of nerds and overachievers, and because our executive leadership has been pondering the deepest questions in test security throughout their decades-long careers, our team had a bonus problem to address: AI is here to stay, so how do we swim with the current rather than against it and steer the AI ship in a more positive direction? Put more simply: how can we counter-apply AI so that it becomes a force for fairness, integrity, and trust in our industry?
Enter Caveon Observer. This system harnesses machine learning and AI technologies to invert the proctoring paradigm altogether.
Hear me out.
The current proctoring model assumes all test sessions are equally likely to involve misconduct. That is why, for testing programs, every problem-free test session costs the same as one where cheating occurs, despite the fact that most sessions are fair and harmless.
Instead of applying the same security rigor to every single test session, what if we could focus our human attention only on test sessions that are actually anomalous (read: where people are actually trying to cheat or steal)?
Early versions of AI flagging in proctoring have been… less than optimal. False positives are common, causing test session interruptions and unnecessary human intervention. Proctors and AI flagging systems alike remain demonstrably unable to detect many forms of cheating and theft.
The patented Observer system inverts the traditional paradigm using AI-powered risk assessment. It is the first and only system that combines real-time data forensics of session data with test design analysis, in addition to reading the usual proctoring data streams. Observer’s unique focus on test metrics (not test takers) means it can more accurately detect and flag problematic test sessions. Human monitors are then assigned only when data indicates they are actually needed.
This alone is a big deal. We purpose-built a better detection tool using AI. Huzzah!
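To make the triage idea concrete, here is a minimal, hypothetical Python sketch of risk-based session review. Every name, feature, weight, and threshold below is invented for illustration; Caveon’s actual models and signals are not public, so this shows only the general shape of scoring sessions and escalating just the anomalous ones to a human.

```python
from dataclasses import dataclass

@dataclass
class Session:
    session_id: str
    response_time_zscore: float   # how unusual the pacing is vs. the norm
    answer_similarity: float      # overlap with other sessions' answer patterns
    item_exposure_anomaly: float  # signs of pre-knowledge of leaked items

def risk_score(s: Session) -> float:
    # Illustrative only: a real system would use trained models,
    # not a hand-weighted sum of three made-up features.
    return (0.4 * abs(s.response_time_zscore)
            + 0.4 * s.answer_similarity
            + 0.2 * s.item_exposure_anomaly)

def sessions_needing_review(sessions, threshold=0.7):
    # Human monitors are assigned only to sessions above the risk threshold;
    # unremarkable sessions never trigger a live proctor at all.
    return [s.session_id for s in sessions if risk_score(s) >= threshold]
```

The point of the design is the inversion: instead of one proctor per session by default, human attention becomes the exception that the data has to justify.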
But remember the whole extra-credit bonus problem we were tasked to solve? This is where randomly parallel testing comes into play. We envisioned a way to build an un-steal-able exam using AI in the development and delivery processes as well.
Renowned psychometrician Frederic Lord posited, in the year of our Lord NINETEEN-FIFTY-FIVE, that randomly parallel forms drawn from an extremely large item pool (read: randomly parallel tests) would be both a secure and psychometrically sound way to deliver exam content.
*cue beeping monitors*
Folks, we can build them. We have the technology.
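The randomly-parallel idea can be sketched in a few lines of Python. This is a toy illustration of Lord’s concept, not anyone’s production system: the pool, blueprint, and item format are all hypothetical.

```python
import random

def build_parallel_form(item_pool, blueprint, rng=None):
    """Assemble one randomly parallel form from a large item pool.

    Each test taker receives a different random sample of items per
    content area, so no two forms are alike, yet every form matches
    the same content blueprint.
    """
    rng = rng or random.Random()
    form = []
    for content_area, n_items in blueprint.items():
        candidates = [i for i in item_pool if i["area"] == content_area]
        form.extend(rng.sample(candidates, n_items))
    return form

# Toy pool: with thousands of items per area, the number of distinct
# forms is astronomically large, so stealing any one form is worth little.
pool = ([{"id": f"algebra-{k}", "area": "algebra"} for k in range(500)]
        + [{"id": f"geometry-{k}", "area": "geometry"} for k in range(500)])
blueprint = {"algebra": 20, "geometry": 10}
form = build_parallel_form(pool, blueprint, rng=random.Random(42))
```

The security property falls out of combinatorics: even this small toy pool yields on the order of 10^50 possible forms, so memorizing or leaking one form gives the next test taker essentially nothing.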
Using secure test design principles made possible by AI, paired with Observer as a monitoring system (we’re calling this power couple “Observer PLUS”), testing programs can now deliver a smooth testing experience to test takers and dramatically improve the security of their exams from within, all while drastically reducing waste and overall program costs.
We envision a future where tests protect themselves and test takers simply need to show up and try their best. Advances in our technology keep bringing us closer and closer to that future, if we can only keep visualizing it, keep embodying it.