McGraw Hill, Please Call Your Office
Plus, research finds test scores are higher on unsupervised, online exams - though that’s not what the headlines say. Plus, a student in Nigeria is beaten for not cheating.
Issue 229
McGraw Hill, Please Call Your Office
If you’re not familiar with McGraw Hill, it’s one of the big three publishers of academic content such as textbooks. They also create corresponding tests and test banks used by countless teachers and professors.
As spotted by a regular reader and academic integrity sentry, McGraw Hill has a problem.
But before we get to it, I’m not picking on McGraw Hill. I have no doubt that every test bank is compromised - every single one. Thirty seconds after any question is used in any exam, especially one online, it’s gone. Chegg has it. Course Hero has it. Anyone with any programming chops is selling it. Not maybe. Not sometimes. Every time.
That’s the core of the Chegg/Pearson legal challenge - whether someone can legally sell the answers to another person’s questions (see Issue 55).
Back to McGraw Hill.
A reader found and sent me this YouTube video ad for a product called SmarterBook. From the look of things, it’s no sophisticated outfit. In the demo, the person has some issues logging into their own system. But after the demonstrator activates the browser extension and agrees to a free trial (the screen says it’s $1 a week after that), the screen reads:
Complete payment and return to McGraw-Hill
After which, the demo walks through several visible McGraw Hill test questions and simply gives the answers, on-screen, in real time. The student types nothing. The question pops up; the answer does too.
The video text kindly says:
SmarterBook works on top of McGraw-Hill
We made sure to make the extension undetectable
I have to admit, that is pretty thoughtful.
They go on:
So there’s no need to worry about getting caught
Enjoy our extension without any stress
Those guys - they shouldn’t have.
After running through another handful of questions and showing how they give you the answers, maybe 12 in all, the video ends.
Yes, it’s exactly that easy to hack test bank questions - McGraw Hill’s or anyone’s.
For a buck a week, it seems you can get perfect scores on any McGraw Hill quiz or test you want. That’s a bargain if you ask me. But it’s indicative of how industrialized academic fraud has become. Right there, on YouTube.
More seriously, it’s not just a symptom. This tool makes McGraw Hill assessments functionally worthless. Which, in turn, renders the grades, courses, and degrees based on them worthless too.
Again, this is not just McGraw Hill’s problem. The entire system is blind and lethargic. As an example, buried at about the :40 mark in the video is a glimpse of the SmarterBook interface that tells users:
Press Command + Shift + Y to Save Quizzes on Canvas to Quizlet
So, with one three-key shortcut, McGraw Hill test bank questions are being exported from Canvas to cheating company Quizlet. Maybe Canvas needs to call their office too. Or someone needs to call YouTube. I mean, if someone were stealing my IP and selling it on top of my platform for a dollar a week, I’d call someone. If I were using McGraw Hill, I’d call everyone.
But that’s me. The video has been on YouTube for four months.
Research: Scores on Online, Unsupervised Exams Are Higher Than on Proctored Assessments
While that is what this new research found, it was not their headline.
And, while I am perpetually behind on covering research related to academic integrity, the misleading headline in this new work - and the corresponding grossly misleading early coverage of it - caused me to jump it to the head of the queue.
The research is by Jason C. K. Chan and Dahwi Ahn of Iowa State University, and I have to say at the outset that I found their writing objective. It’s not obvious that their dispositions ran one way or another here, which is great.
What’s not so great is the headline that the Iowa State News Service put on their new research:
Researchers find little evidence of cheating with online, unsupervised exams
Unsurprisingly, this is not at all what the researchers found. More on point, it was not even what they were examining. But the headline, as mentioned, did get my attention. And were it true, it would be big news since it would contradict a growing pile of evidence. But, again, it’s not true. So, here we are.
The actual title of the research is simply different:
Unproctored online exams provide meaningful assessment of student learning
Very different.
To summarize, the research took advantage of the 2020 shift in learning modality from in-person to online and looked at whether the mode of assessment delivery changed the rank order of test scores on a per-student basis. In other words, the research wanted to know if a “C” student suddenly became an “A” student when assessments moved online. Or if students who showed top-level work suddenly slid to the middle of the pack when assessed online and unsupervised.
They write:
We believe that the critical question is whether online and in-person exams provide a similar evaluation of learning—that is, would students who perform well on in-person exams also perform well on online exams?
Generally, the work found that the mode of delivery did not alter the relative standing of students within a class. Generally.
The core finding is that scores earned on in-person assessments correlated with those earned on later, online varieties. A-students tended to continue doing well regardless, for example. And the finding was correlation - not lack of difference. No difference is probably too high a standard. But simple correlation is a pretty loose one.
Nonetheless, this is interesting and does support what the researchers wrote - that even online and unsupervised tests and assessments can have value, that they can be meaningful. Given that the alternative is that they are not meaningful, I’m not sure that’s big news.
More central to what is sure to be a misused and misquoted finding, the study did not look for cheating in online test deliveries. Cheating was simply not the question the study sought to answer.
In fact, given the finding of correlation between the two modes of assessment, the research team even considered that online cheating was so common that it provided no real value, writing:
An additional possibility to consider is that practically everyone cheats during online exams.
Though they discount this somewhat by saying that if everyone benefited x% by cheating, those who did less well initially would still show up more favorably because they had more room to improve. I guess. Though it’s not a stretch to me that, if everyone cheats, those who study and cheat will do best. Those who just cheat or just study, moderately well. And the students who can’t be bothered to study or cheat probably won’t do well no matter what. Said another way, you’re probably a better cheater if you know the material in the first place.
To underscore, their finding is about modality similarity, not cheating. They write:
we showed that online exams produced scores that highly resembled those from in-person exams at an individual level despite the online exams being unproctored
Take note of the “at an individual level.” As well as the “highly resembled.” But mostly the “individual level” bit. This was by design:
we argue that it is more informative to examine the similarity in how students perform on online and in-person exams at an individual level (i.e., correlation) rather than at a group level (e.g., comparing means)
I’d argue that comparing overall scores across the different modes is more insightful, but that’s fair. I’d also argue that if cheating bumps a C+ student to a B- and slides another student in the other direction, that may still show up in the data as correlated scores, but it’s wrong.
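To make that distinction concrete, here is a minimal sketch in Python with invented scores (not the study’s data): two sets of exam scores can be perfectly correlated - same rank order, same top students - while one sits a full five points higher.

# Illustrative only - invented scores, not data from the study.
from statistics import correlation, mean  # requires Python 3.10+

in_person = [62, 71, 78, 84, 91]        # hypothetical in-person exam scores
online = [s + 5 for s in in_person]     # same students, each up 5 points online

print(mean(online) - mean(in_person))   # 5.0 -> the means clearly differ
print(correlation(in_person, online))   # 1.0 -> yet the correlation is perfect

Which is to say, a high individual-level correlation can coexist with across-the-board score inflation.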
The bigger thing is that the research team did, in fact, calculate the mean grades across both in-person and online, unsupervised assessments. They just didn’t include them in their published report. Instead, in a completely separate file labeled “Supporting Information” (available for download here), they do share the mean scores for both types of assessment. And - surprise, surprise - students got better grades on online, unsupervised assessments.
I know, shocker.
They write:
our data showed that students scored modestly higher on online exams than on in-person exams
And:
In raw percentage terms, students scored about 5% higher on the online exams than on the in-person exams.
Five points - more than enough to make a borderline “C” into a respectable “B.”
They also write:
it is possible that the score inflation observed here was an underestimate
The authors also note that multiple-choice tests showed more score improvement than open-ended responses, though both improved. Further, they also compared first-half scores with second-half scores in the same courses when the entire semester was in-person. Sure enough, no jump in scores between first-half tests and second-half ones.
By the way, this confirms nearly all of the recent and existing literature on this topic - that when tests go online, scores go up. You may guess why.
Anyway, the central takeaway of this research seems to be that it really doesn’t matter if everyone is cheating on online tests because the good students will still show up. That may be true. But if the point of academic assessment is to actually assess learning, that’s not good enough. If you’re just measuring who cheats best, I think you’ve lost the narrative.
There is also some serious squishiness with their definitions as they break assessments into only two groups: in-person and online/unsupervised. It’s possible that no student in the study groups took an online exam that was proctored, but that’s unlikely. The authors do refer to online proctoring. It’s just not clear in what column these results were tabulated or if they were at all.
What is clear is that in no way whatsoever did this study find “little evidence of cheating online.” At best, this study found that whatever cheating was happening online, assessments given online still had academic value because the cheating did not significantly affect the rank-order of students within a given class.
I’d argue - did already argue - that cheating having any impact at all on grades and/or rank order is too much. I’d also argue that my headline is the correct choice.
Nigerian Student Beaten for Not Sharing Test Answers
News from Nigeria is that a student was beaten, and taken to a hospital, apparently for not sharing his exam answers with classmates.
The story has video. I did not watch it.
The report says:
In a video trending online, the student was bleeding from his head after being hit with bottles outside the exam hall. A witness claimed that the student had covered his answer booklet when his colleagues were asking him to open it, and that he was beaten up for it.
He added that the student was facing the consequences of failing to do the “reasonable thing” expected of him.
A Thank You
After putting out a call recently for volunteer proofreaders and copy editors, I want to thank the three people who raised their hands and have been improving the quality of your reading by fixing the marginal quality of my writing:
Stacey Nash, at Central New Mexico Community College
Sarah Butler, at University of Missouri-St. Louis
Holly Banes, at Arizona Western College
Thank you. Really, I am honored. I appreciate them and their work.