(351) "Actively Unethical" - Most UK Universities Use Unsupervised, Online Exams
Plus, a "writing scandal" in Australia that may not be. Plus, an integrity job at VCU.
Issue 351
Subscribe below to join 4,546 (+20) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, though patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year.
Study: Most UK Universities are Still Using Online, Unsupervised Exams
Times Higher Ed (THE) has news coverage of a new research report showing that most higher education institutions in the UK continue to use online, unsecured, unsupervised assessments, despite their obvious and widespread corruption.
Yes, it was published on April first. No, this is not a joke. Although it’s so astounding, so implausible, that you may assume it is.
The research underlying the news coverage is here, from Phil M. Newton (no relation) and Michael J. Draper. Both are from Swansea University, in Wales.
The News Coverage
The headline of the article is awful:
Most universities still use online exams despite cheating fears
I do not understand why this is difficult. The issue is not fears. The issue is cheating. Students are cheating, far more so in online assessments — even more so when those assessments are unsupervised. We know this (see Issue 243 for an example). Using “fears” minimizes it. We don’t need that word; delete it and you have the story. Adding it was a choice, and a sloppy, dismissive one. THE should do better.
In any case, the meat of the coverage is that the research:
found that more than three-quarters of UK universities are still using online remote examinations almost four years after the last lockdown measures were lifted in July 2021.
Of the 119 universities who responded to their Freedom of Information requests, 93 (78 per cent) said they still used online exams.
Nothing about this surprises me. However, I’d flag the presumed connection between “online remote examinations” and Covid-era education. Maybe the UK was different, but many schools were running online classes and programs — with full remote assessments — before the pandemic. The practice accelerated and became nearly universal during pandemic lockdowns and related policies. But one thing did not cause the other. So, expecting the demise of one to end the other is off.
But again, whatever. Seventy-eight percent is a big number.
Here is the part of the news article you should read twice:
[74% of the schools that answered the question] said they did not use any proctoring service while 23 (26 per cent) only used supervision or monitoring for some but not all of their examinations.
Only nine institutions (10 per cent) said they used a remote invigilation system for all of their online exams while 61 institutions (68 per cent) said they proctored none of their online summative assessments, with an average of 246 examinations per university going unsupervised.
I honestly don’t know what to say.
I guess, to the 68% of schools that reported not proctoring any of their online assessments — quit. Seriously, you are wasting everyone’s time and money. Your students, faculty, alumni, those who may need to hire qualified, competent employees — wasting time. And money. And by students, I do not mean just those taking classes online. If you’re going to a school that has unsupervised, unsecured online programs, your degree is compromised.
Continuing:
That “widespread” lack of invigilation should raise concerns about the “validity of [such] examinations as an assessment format and the quality assurance of degrees which include these assessments”, argues the paper.
Ah — yeah.
More:
[The researcher] Newton told Times Higher Education that many university policies “put their students in a no-win situation by using policies that tell students ‘you must not cheat’ but then not enforcing the policy”.
An unenforced policy is worse than no policy at all.
Also, from that other Newton:
Noting that “cheating is widespread in this type of assessment”, Newton added that “students are forced to choose – do they cheat, or risk getting lower marks than peers who did cheat, with consequences for employability”.
“This situation is created by universities but is lose-lose for students. It feels to me like actively unethical behaviour by universities and examiners,” he added.
Actively unethical.
Well said.
There’s also this:
the study also suggests its results could understate the lack of invigilation
I have no doubt.
The Paper
Moving on to the paper itself, linked above, kudos to Newton and Draper — not just for their work, but for their humor. The pair use the term “summative online unsupervised remote” examinations. Yes, SOUR.
Clever. I’d go further and describe these exams as “utterly useless at conveying any meaning whatsoever.” Though that acronym is probably a little trickier.
The paper starts by stating that:
The use of summative online unsupervised remote (SOUR) examinations during the COVID lockdown was associated with increased cheating
Thank you. It was. Although, again, I do not think this relates to the pandemic as much as it relates to the mode of assessment itself. Either way, the link between unsupervised online assessment and cheating is ironclad at this point. It’s important to keep saying it.
We should all keep saying this as well, from the paper:
Cheating undermines the validity of assessment; if a student has achieved a grade via cheating, then the assessment was not a valid measure of their learning, and so any degree awarded on the basis of these invalid assessments is itself compromised
Cheating compromises the credential. For cheaters and non-cheaters alike. For online and in-class students alike.
Among well-known cheating methods in remote exams, the paper calls out Chegg. Again. By name:
students were posting their online exam question on sites like Chegg, in order to receive rapid answers from third parties, with Chegg acting as an intermediary for this transaction
Just saying.
Also, though not entirely related to the paper’s subject, I love this sentence in it:
it seems unlikely that an assessment will be a meaningful certification of learning if a student uses GenAI to generate it and then simply acknowledges that fact
(imagine thumbs-up emoji here)
I understand that legitimate research efforts are obligated to cite mitigating or even contrary views. Nonetheless, I think the paper damages its own credibility when it includes things such as:
There is some, albeit limited evidence that online invigilation reduces cheating
All the evidence supports this (see Issue 169 for an example). There is no credible evidence that I have seen that invigilation or proctoring does not reduce cheating. There may not be a ton of work on this point exactly, but it is, as far as I know, uncontested.
The paper continues that proctoring online exams is:
associated with a poor student experience with concerns about privacy, efficacy, cost and more
Well, sure. Having people watch you is always less pleasant than having privacy. And it is, by definition, less private. That’s what limits the cheating. But more and more, I think the real issue with proctoring online assessments is cost. Schools simply do not want to pay for the services. Given the choice between paying for security cameras and leaving the door open, schools leave the door open. Schools can secure online exams; they choose not to.
What was the phrase? Actively unethical, I believe.
Then there’s also this, with citations removed for ease of reading:
Even then the effectiveness of GenAI detection tools is modest and easily reduced, plus the lack of any way of independently verifying the output creates a risk of falsely accusing students of misconduct.
This is mostly untrue. Or at least inaccurate.
To repeat, I understand it may be nodding to a counter view in the name of balance or whatever, but that does not make it worthy of repeating. Ninety-nine percent is not “modest” accuracy. And it’s not that complicated to independently verify whether text is created by AI or not — again, schools just don’t want to. They’d prefer not to bother with it. They absolutely prefer not to pay for tools that make them have to bother with it.
There is also this, citation removed:
Many UK university students are already regularly using GenAI tools but it is unclear whether they are using it to cheat.
I literally spit out my coffee. Come on. What are we doing here?
Sure, it’s possible that ChatGPT’s number one user demographic — millions of students — are all using generative AI tools ethically and in furtherance of learning. Let’s entertain that notion. Sure.
Do we not think we can concede, as the basis of a healthy and productive conversation, that at least some students are using generative AI to cheat? Don’t we absolutely know this already? Nothing is unclear about whether students are using AI to cheat.
The paper says that:
now students have access to a tool (GenAI) that can help them achieve a very high grade without doing any original work
And that this:
would seem to negate the basic validity of SOUR examinations as assessments and so undermine the credibility of any degree awarded where these examinations were used to certify learning.
But we’re not entirely clear whether students are using generative AI to cheat or not. I confess, I have no idea what we’re talking about here — why we are even bothering to type and share words about this.
All of that above, from the paper, is prelude. So, maybe I’m overreacting. The findings are the thing.
And here is the thing — question two, and its related responses:
Question 2. In 23/24, did the university use any remote invigilating or lockdown browser systems for any of these summative online remote examinations, e.g. ProctorU, ExamSoft, Respondus etc?
Of the 93 [that] used online remote examinations, two refused to answer all further questions based on cost or commercial sensitivity while another responded that the data for this question were not available. For the remaining 90 universities, 67 (74.4%) did not use any proctoring while the remaining 23 (25.6%) used some form of proctoring or lockdown browser for some (though not necessarily all) examinations.
So, nearly three-quarters of the UK universities that use online remote exams say they are not securing them. Like, not even a lockdown browser. And the remaining 26% secure at least some, but not necessarily all, of theirs.
Seriously, I quit.
This is such gross and obvious negligence that my time is better spent yelling at traffic.
When asked what portion of their online exams were secured:
nine (10.1%) reported that they proctored all their online remote summative examinations while 61 (68.5%) proctored none of them
From the paper’s conclusion:
SOUR examinations show little validity as a form of assessment due to the ease with which students can cheat
You think?
I’m sharing this as well:
An additional challenge associated with the use of these assessments is the troubling ethical position which some students are placed in where often the policy for the security and integrity of SOUR examinations required that students should work under examination conditions, but there appeared to be no system for ensuring or detecting whether this has been done. E.g. where students are told not to communicate with others or to use unauthorised sources or tools. It was difficult for the authors here to identify a precedent in modern life for such an approach: where someone is put in a position of following rules which are not enforced, with no meaningful way of detecting whether those rules have been violated, where to violate those rules would be very easy and result in a significant gain for the individual
The authors also write that telling students not to use tools such as AI, while neither checking nor enforcing such rules, may:
indicate to students that the institution does not take academic integrity seriously, which can diminish their respect for the academic regulations and the institution itself, particularly where it is obvious that other students can easily break these unenforced rules and so obtain a higher mark for themselves.
True. I’m exhausted, because I am convinced that these schools, these school administrators, know this. They do not care — because doing the right thing is hard and costs money.
More:
we are in a position where there is widespread use of an assessment format which appears to lack basic validity, and for which even peripheral benefits are questionable.
Whatever. Right now, today, millions of students are getting their degrees in formats which appear to lack basic validity because of cheating. No one cares.
Sorry, I know some people do care. It just does not seem as though the right people care — those who could fix it, in some cases by literally flipping a switch. But, no. Here we are. Here we have been. Here we are likely to remain.
Finally, the paper says:
It seems reasonable to propose that the validity concerns of SOUR examinations should lead to a regulatory requirement for them to be abandoned.
It should. It won’t.
More Are Implicated in “Writing Scandal” in Australia, But …
Front page coverage (subscription required) in the Sydney Morning Herald this week had this headline:
Number of schools caught by NAPLAN writing scandal grows
I love the big coverage that academic misconduct and test security get in Australia. And I wish other places were one tenth as good about making academic and credential integrity a public concern.
Still, I think this one is a bit overblown. Maybe a bit more than a bit.
For one, the Google machine tells me that the NAPLAN is the “annual assessment for students in Years 3, 5, 7 and 9.” In other words, we’re talking about third graders. Up to grade nine, fair. But still, relatively young scholars — which is to say that they’re probably not firing up Chegg and QuillBot with deceptive intent. Probably.
Moreover, the “writing scandal” involves, according to the paper:
that some students had completed exams with predictive text and spellcheck enabled.
Oh. That’s not great. But, again, it’s not like the students were asking ChatGPT for answers and washing them through a so-called humanizer to cover their tracks. It’s autocomplete and a spellchecker. I don’t love that. But still, pretty low-grade stuff, in my view.
Also, according to the coverage, this scandal involves six schools where these students will have to take this exam again. Six. Once again, the Google box tells me that, “Almost 1.3 million students across more than 9,400 Australian schools and campuses took the [NAPLAN].” Six, of 9,400. That is less than a tenth of one percent.
I’m just not sure the word “scandal” fits. Especially since, at the core of this, test authorities appear to have sent incorrect instructions to test administrators about how to disable writing tools on some computers. An error, yes. Fraud? In fifth grade? Maybe not so much.
Anyway, the paper also reports:
In recent years, users in online hacker forums have claimed they were able to bypass the restrictions of a lockdown browser.
Cybersecurity expert Troy Hunt said it was difficult to create a truly secure lockdown browser with software alone.
I think this is an important caution — that there is no perfect way to deliver a big test like this.
The paper also quotes Professor Cath Ellis of Western Sydney University, an academic integrity expert — which is great. I will never understand why American media outlets don’t bother to talk to someone who actually knows things about these things. Anyway, Ellis says:
“We have to ask the question: can we trust the judgment of these tests now we know the vulnerabilities of how they work?” she said.
“Where this predictive text problem starts to emerge is if the words are not the best ones the students have come up with but rather are what a large-scale language model suggests – then what are we really testing?”
I agree.
Using predictive text or spell-correction tools verifies only that the tools can pick the right word and spell correctly, polishing over some pretty clear dents in student learning or ability. That’s the deep-rooted problem with AI — it makes it impossible to measure what a student actually knows, what they can actually do. I know Grammarly can make a noun plural; that does not tell me anything. And I do wish education leaders took a harder line on such things.
Still, a scandal in Australia I am not sure this makes. I am sure they will clean it up.
VCU Seeks Associate Dean for Academic Integrity
Sent in by one of our readers — one of our amazing proofreaders, actually — is news that Virginia Commonwealth University is seeking an Associate Dean and Director of Student Conduct and Academic Integrity.
According to the posting, the position is:
a senior leadership role in the Dean of Student Advocacy Office, responsible for leading and overseeing all functions and strategic planning for the university’s Student Conduct and Honor Systems.
Important as well is that:
Integral to this role is the regular reporting and communication of monthly data points directly to the Associate Vice President and Dean of Student Advocacy. Furthermore, when necessary, these data points will be shared with key campus stakeholders on a monthly basis, ensuring transparency, informed decision-making, and fostering a collaborative environment among university leadership and community members.
Cheers to VCU.
Nothing expresses an actual commitment to academic integrity as directly as having someone in charge of it. There is zero excuse for any school not to have one — zero. As has been said a billion times, your budget is your policy.
Also, I love that this position will be responsible for collecting and sharing data. I hope some of it makes its way to the public. Also said a billion times — you cannot solve a problem about which you do not know. Or something like that.
Anyway, cool job.