(421) Cheating on the SAT
Plus, another misleading article on AI and AI detection. Plus, ICAI in Denver this March.
Subscribe below to join 5,061 (-1) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annually), and institutional or corporate subscriptions are a suggested $250 a year.
NYT: SAT Exam Questions Compromised
The New York Times has coverage outlining that questions from the now-digital SAT appear to have been compromised, showing up in databases designed to help students “practice” the test. Or just cheat.
The headline is more than a little misleading:
Students Are Finding New Ways to Cheat on the SAT
That’s not supported by the story. It is likely true that students are cheating on the SAT with compromised questions and technology hacks. Still, the story shows only that questions from the test repository were likely available on the open market, and that it is possible to cheat on the new digital version of the test. That’s it.
If you understand cheating or exam security, neither of these should surprise you.
Once a question is given on a test, it’s burned. In less controlled settings, professors have regularly reported their exam questions appearing on Chegg and Course Hero within the hour. Consider, from the article:
In some cases, metadata showed that questions were posted online almost instantaneously after the test
Providers of higher stakes assessments such as the SAT have known for a long time that their major concern is theft of exam materials — this very problem. Given what it costs to develop even one question, the issue is a serious one.
The second issue — the possibility to hack the digital SAT exam during the test — should make clear, yet again, that it is not possible to fully secure a digital assessment. Maybe it’s not possible to fully secure any assessment, but whatever the best possible threshold is for an in-person exam, digital methods fall short of it.
From the coverage:
Three years ago, after nearly a century of testing on paper, the College Board rolled out a new digital SAT.
If you’re not up on the story, it’s important here to distinguish between the exam being digital, meaning taken on a computer, which it is, and remote, which it is not. Takers of the SAT still have to visit a test center in person. The test, however, is now taken on personal laptops — which is part of the problem.
Continuing:
Students who had long relied on No. 2 pencils to take the exam would instead use their laptops. One advantage, the College Board said, was a reduced chance of cheating, in part because delivering the test online meant the questions would vary for each student.
Now, however, worries are growing that the College Board’s security isn’t fail safe. Fueling the concerns are what appear to be copies of recently administered digital SAT questions that have been posted on the internet — on social media sites as well as websites primarily housed in China.
More:
The College Board said in written statements that SAT cheating is rare, affecting only a fraction of 1 percent of its test scores, and noted that overall test scores have remained steady after the transition to digital tests. “However, some students will always be tempted to cheat on high-stakes assessment, and bad actors are persistent. We stay hypervigilant,” it wrote.
I believe they do.
No exam is 100% resistant to cheating. Still, as mentioned, the likelihood of misconduct went up when the SAT went digital, not down. They had to have known that, despite their statements to the contrary.
Moving along:
In addition to having human proctors present during testing, the SAT’s Bluebook platform requires that other apps on a student’s computer be turned off.
But the organization acknowledged that it was aware of “screenshots that purport to have been taken while testing is in progress” as well as “hardware and software-based efforts to evade our security system.”
Not new. We went through all this with remote test proctoring like five years ago. Motivated people can bypass your digital security.
The NYT also provides some context:
Allegations of SAT cheating follow breaches in digital LSAT and GRE tests used by dozens of prestigious law and graduate programs to screen applicants. In those cases, the tests were taken remotely, not in test centers like the SAT.
Several graduate business schools, including those at the Ohio State University and the University of Minnesota, rescinded admissions offers in 2024 following allegations of GRE cheating by students in Ghana and Nigeria who had taken the tests at home, according to the online publication Poets & Quants, which writes about business schools.
We covered the LSAT thing in Issue 391.
But hold the phone — the GRE allowed students to take it at home? I hope I spelled “flabbergasted” correctly. That’s just — I don’t know. In all seriousness, why bother giving any test if you’re going to allow people to take it at home?
There’s also:
Cornell Law School has notified students that it is investigating a testing breach during a contracts exam in December. The school learned of the problem because a paid test taker — intending to advertise his services — posted screen shots of the exam online shortly after gaining remote access to a student’s laptop.
A spokeswoman for Cornell confirmed that its law school had investigated an “isolated incident of alleged testing misconduct,” adding that the school would not comment on disciplinary cases.
I’m not sure how they know it’s isolated. Feels more like self-soothing wishful thinking.
A bit more:
Rachel Schoenig, a former head of security for the ACT, the second-largest undergraduate admissions test, said the web is filled with ads from companies selling hardware and software, purportedly to help students cheat.
“Some are scams,” she wrote in an email. But, she added, “The reality is that no system is foolproof.”
As mentioned, true. And:
The fact that students are permitted to use their own laptops is one weak link that several experts flagged.
There is evidence that digital test security problems are spreading beyond standardized admissions tests, with companies operating online that offer to take college exams for students, for a fee.
Wait — companies offering to take college exams for a fee? You mean like what I wrote about in 2015?
Anyway, the digital SAT is probably compromised. There’s that.
Another Article on “Ethical AI” and AI Detection
Terri Smith has written an article published by the OLC — The Online Learning Consortium. OLC says Smith, “has more than twenty years of experience in the education sector. Currently, she is a technology faculty member at a college preparatory school.”
The article from Smith aims to tell educators, yet again, that they cannot trust, and therefore should be reluctant to use, “the AI.” Smith is misinformed and, as a result, misinforms.
The piece is not all bad. I like, for example, this:
Unauthorized AI use raises significant ethical concerns, particularly when it substitutes for independent work.
And this:
Unauthorized AI use … when defined as prohibited, is deceitful and thus unethical. After all, one does not accidentally open an AI chat and inadvertently gain a precisely written answer to an inquiry, and then paste the copied answer into a homework assignment.
Right on both counts.
But most of the rest of the article repeats incorrect or outdated assumptions and shows a lack of logic. Smith says, for example:
On occasion, some AI detectors have misidentified student work as AI-created, thus making accountability for plagiarism or cheating problematic.
That is correct — some have. Not all.
But rather than examining, or even considering, how to avoid those inaccurate systems, Smith concludes that because some are bad, none can be trusted. That’s not logical, especially when the solution is so obvious: find and use the good ones. It’s as if the piece contends we should stop farming because there are some bad apples.
In another example, Smith writes:
we must ensure that accusations impacting students’ academic records are accurate. But, how can we be sure if the AI isn’t?
We must ensure that accusations are accurate, as much as we can. But accusations themselves don’t mean anything. Findings, adjudications, and consequences need to rest on accurate information, lest they be unjust.
Accusations are, for the most part, investigations: openings of inquiry prompted by suspicion or curious data. These accusations, or inquiries, can be stressful for all involved and place a significant burden on school, educator, and student alike. So, they need to be based on more than whimsy. Still, on their own, accusations mean nothing. Most are dismissed or lead to as little as a warning. I’m just not sure, in other words, that accusations impact students’ records, as Smith says. I’ll be stronger: I think that’s simply wrong.
Smith is also wrong to use “the AI.” There is no such thing. You cannot — should not — say “some” are bad, then lump them together as “the.” It’s an error made many times, such as:
Accusations of unauthorized AI use are problematic due to the unreliable outcome of the detectors
“The detectors.” And:
… the AI may have constructed a false-positive.
“The AI.” And:
until AI detectors become more reliable
“AI detectors.”
You get it.
There are several other issues. Here’s another:
AI Detectors are the AI tools that analyze the probability of whether or not text is AI-generated from ideas, concepts, conventions, and/or writing structures that appear to have AI patterns. Detectors are not reliable (yet) because the training for the AI models is still developing.
Someone could correct me if I am wrong here — several actual AI experts subscribe to The Cheat Sheet — but I do not believe any AI detection system examines “ideas, concepts and conventions.” I do not believe that either the AI that created the text, or the AI that finds it, understands the ideas and concepts at issue. If you know otherwise, please let me know.
I do know that Smith is wrong when she says “Detectors are not reliable (yet).” Good for her for using “yet,” though I note again the misleading lumping of all “detectors” into one class.
But in this case, “yet” happened three years ago. There was never any evidence that all AI detection systems did not work. But all of the most recent research shows that AI detection by good, credible systems is both accurate and reliable.
I’ll end with this, from Smith:
Be careful! If a student completely denies AI use, then the evidence to counter the denial may be absent.
It’s a shocking thing to believe that, if a student denies using AI, there may be no evidence. I just don’t know what to say. It’s a premise I struggle to process. In what context does a denial invalidate evidence? That’s downright bizarre.
Since it’s obvious that readers cannot trust the writers of this material to be accurate, or even logical, I really wish that editors at some of these publications would be vigilant. There’s just no excuse for things such as this to find their way into publication and therefore into our conversations. But I wish for too much, obviously.
Registration Open for ICAI Conference, March, in Denver
A reminder, registration is open for the annual conference of ICAI - the International Center for Academic Integrity.
This year’s festivities will be in early March, in Denver.
If you or your campus may be eligible for one of the ICAI’s Integrity Awards, nominations are due by February 9.

