Cheating on the Rise in UK, Still
Plus, cheating at Washington & Lee Law School. Plus, cheating with AI, and using AI to spot it, in K-12 coverage from Washington State.
Issue 297
Subscribe below to join 3,859 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 18 amazing people who are chipping in a few bucks via Patreon. Or joining the 37 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month. Thank you!
Cheating on U.K. National Exams Continues to Grow, Up 18% Over Last Year, Up 54% Over 2019
More reporting is out this week on the ongoing increase in cheating on some of the U.K.’s standard school competency exams — the GCSE and A-Level exams.
According to the coverage:
The number of students cheating in GCSE and A-Level exams has risen by nearly a fifth in the last year as education experts fear pupils are using ChatGPT.
In 2023, the recorded cases of wrongdoing in exams rose by a staggering 18 per cent and it was 54 per cent higher than the number caught in 2019 - the last pre-Covid exams.
The reporting does say that the most common form of attempted cheating is using old-fashioned cheat sheets, though:
the latest report into exam cheating shows students are turning to AI in a bid to cut corners and get better marks for their coursework.
Honestly, we may have reported this before, in Issue 275. I am not sure if these reports of increased cheating cover the same data or not. Though our coverage of cheating rates on these tests goes back to Issue 174, in 2022, where the reported increase in misconduct was up 43%. Either way, it’s clear there is a problem.
For details, the paper said:
There were 450 cases where students smuggled study notes into the exam room and 2,180 incidents relating to candidates caught with banned mobile phones or other electronic devices.
The reporting also cites cases of plagiarism, test impersonation, and question theft.
Further:
Christopher McGovern, Chairman of the Campaign for Real Education, said: 'AI is undermining the integrity of assessment exams.
'The best solution is to ditch coursework altogether. All that pupils need for most exams is a pen, paper and a question paper. It is the fool-proof solution.'
Impracticality aside, that’s not fool-proof. But it does strictly limit some forms of misconduct.
The takeaway is that — yet again — we see test misconduct on the increase, driven largely by new technologies and easy access to shortcuts. And as elsewhere, these challenges can be solved. The prerequisite question is how much anyone wants to.
Cheating at Washington & Lee Law School
A website called Above the Law has a smart and otherwise unreported piece about cheating among first-year law students at Washington & Lee University Law School (Virginia).
The short version is that two professors used the same exam for different sections but one of the professors told their students a few of the questions in advance. That’s unfair, sure. But the cheating happened when some of the students with advance questions — surprising absolutely no one — decided to share them. The article says some students “suspect deliberate question sharing.”
Nothing about this is remarkable except that, Above the Law says, the school has taken an odd stance on the matter. In response to suspected answer sharing and resulting disadvantages, students suggested the impacted tests be moved to pass/fail format.
But that’s not what’s happening at the law school. Tipsters shared multiple emails from administrators (you can read them below), where the legitimate concerns about the test are being brushed off as “rumors” and “innuendo” or “chatter.” Instead of a solution for students, the law school is leaning hard on the school’s honor code and just assuming everything is kosher unless someone comes forward with first-hand knowledge of an honor code violation. (How you’d have this knowledge without telling on yourself is anyone’s guess.)
The parenthetical about the e-mails is in the story. I’m not sharing them here, but if you wish to read them, jump on over with the link up top.
Anyway, yeah. The only thing less surprising than students sharing test questions is that a school is refusing to even acknowledge it.
As I say all the time, you can’t do anything about cheating if you don’t know about cheating. Or, as may be the case here, you can’t do anything about cheating unless someone steps forward to admit they cheated.
Typical.
Teachers in Washington State on Cheating with AI and Spotting It
The Everett Herald, in Washington State, had a piece out several weeks ago on K-12 teachers and students using AI. It’s solid and well worth a review.
Here is the start of it:
EVERETT — Several times in the past year, Snohomish High School teacher Kathy Purviance-Snow has suspected students of using artificial intelligence to do their homework.
Her district uses Turnitin, a software that detects what percent of an assignment has been plagiarized or made with generative artificial intelligence. Purviance-Snow, a civics and business teacher, gets suspicious at anything above 25%. When the score is above 50%, she strongly suspects AI misuse.
Or as older generations might call it: cheating.
According to the Everett Herald, I am in the older generation.
What I like here is that the AI detector is informing an analysis and decision, not making it — a distinction so many steadfastly refuse to acknowledge. This teacher set her own thresholds for suspicion. Good for her.
I like this teacher. She continues:
Purviance-Snow notes the rise in cheating was accompanied by the sudden rise of remote learning during the pandemic.
“Kids have lost a little bit of that grit, a little bit of that desire to actually learn, they just want to get done,” she said. “They just want to turn something in.”
And:
Purviance-Snow trusts Turnitin as a tool to analyze her students’ work. Examples of questions she asks when grading include, “Are they using words outside of their lexicon?” or “Does the text’s voice match previous assignments from the student?”
When she calls out a student, she gives them the chance to own up to their decision and redo the assignment.
“I try to err on the side of grace because kids make mistakes, people make mistakes,” she said.
Will you look at that? A teacher using different sources of information, including AI detection, to make an informed decision. I know it will shock people that teachers can make decisions with their own brains, without just doing what an AI detector tells them. And erring on the side of grace. Love it.
But the article also quotes Chris Bailey, a director of technology at a local school district, who says:
detection softwares like Turnitin are not reliable.
“One of the challenges with those technologies is that they’re leveraging artificial intelligence. It’s sort of AI competing with AI,” he said. “We’re on the lookout for a tool that we feel is safe and appropriate to use.”
By now, you have to know the research on this. The top two or three AI detection systems have repeatedly shown accuracy and reliability rates of 95% or better.
The Herald also reports:
Everett Community College doesn’t use any detection software.
When I read stuff like this, it makes me crazy. Actively, consciously not even checking for misconduct is one level of absurd. Telling everyone you’re not doing it is just — something else entirely.
Imagine, for context, a bank telling a newspaper “we don’t use security cameras.” Or, “we don’t have guards in our banks.” Come on. If you’re going to go without even basic security and deterrence, don’t advertise it.
Regarding Everett Community College, the story continues:
At Everett Community College, some faculty members have asked the college to buy detection software.
Hannah Lovett, the school’s director of educational technology, said the college fears false positives and creating a climate of constant suspicion.
History professor Derek Nelson is also concerned about undue suspicion toward underserved students.
I bite my tongue.
Scanning a paper for AI does not convey suspicion any more than watching students take an in-person, paper-and-pencil test. And no one ever seems to consider the message sent by not checking — that honest work is not valued or protected, that there is no point in actually doing the work. As a friend of mine used to say, we respect what we inspect. And if you’re not even looking, the message is clear.
I guess I did not bite my tongue.
Well, I did a little.
The story is good — go check it out.
But before I leave it, I will share two more bits.
Purviance-Snow has also heard of a local student whose work was flagged for AI use. The student acknowledged using Grammarly, a software that corrects punctuation, grammar and improves readability. She thinks that should be allowed.
I don’t have a dog in the fight as to whether it should be allowed. I yield to teacher discretion. But it probably should, we may all agree, be cited. More importantly, while it’s second-hand, it’s another example of Grammarly setting off AI detectors (see Issue 263).
And finally:
In November, Avery Colinas, a student journalist at Cavelero Mid High School in Lake Stevens wrote an article about AI’s effect on education.
In an interview, the ninth grader said artificial intelligence makes some assignments feel like busy work.
“Why are we learning to do it if we can get it done in seconds?” she asked.
She added: “I also feel like it’s making humans almost, like, slowly becoming less necessary.”
In essay-based classes, plagiarism detectors are an effective deterrent, she said.
Her English teacher asks students to put away their phones and verifies document change history for copy-pastes. A software called Google Classroom can also detect ChatGPT.
Plenty to unpack there. Plenty.
I feel like we are so much closer to the dawn of these issues than we are to their resolutions.
Anyway, good reporting from the Everett Herald.