5,000 Students Accused of Plagiarism At University of South Africa
Plus, Dr. Bertram Gallant writes on integrity in an AI era. And cheating at Queen's University in Canada.
Issue 283
Subscribe below to join 3,789 other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 17 amazing people who are chipping in a few bucks via Patreon. Or joining the 36 outstanding citizens who are now paid subscribers. Thank you!
Thousands Accused of Cheating in Online Courses and Exams at University of South Africa
While there is not a ton of detail about what is going on, several news outlets have reported that thousands of students have been cited for potential academic misconduct at the University of South Africa (UNISA).
By thousands, the press report says 5,000. And that the students were flagged for plagiarizing in the school’s online programs, of course. Students blamed the detection system, of course.
This is from the coverage:
UNISA SRC President Nkosinathi Mabilane said: "We experienced a high number of these flags from the examination period of October–November 2023. The students who have been flagged, about 5,000 of them, but even in this 5,000, the investigating management then came to the concession that, look, first-time offenders will be given a re-opportunity to rewrite. So, there are students that were pardoned who were first-time offenders. And there's about 2,000 or less students that were repeat offenders, and the university had to take them through the disciplinary processes of the university."
If I am reading that right, about 3,000 students will be “pardoned” and 2,000 or so students had been flagged for some kind of misconduct before. Two thousand.
Unfortunately, I don’t have more information.
And though 2,000 and 5,000 are shocking numbers, good for UNISA for discussing this in public and for addressing the problem, even if two-thirds are essentially excused.
The latest set of incidents at UNISA is noteworthy as well, coming in the wake of this article from January in which students complained that school policies and practices around academic integrity sanctions were "unfair." Specifically:
The leadership of Unisa’s Johannesburg SRC branch says suspensions for cheating and dishonesty are unfair to students and the university management should review its policies.
In these cases, students blamed delays in their cases, which is fair. But they also took issue with “harsh suspensions” for misconduct instead of “constructive approaches that foster growth over strict punitive measures.” In other words, they thought the penalties for cheating were too heavy.
The paper from January also cites a student who says:
there were about 4,000 students whose disciplinary hearings were pending.
That was, I assume, before this recent round of 5,000 accusations. My good friends at Google say the enrollment at UNISA is 32,000.
TBG: How We Maintain Academic Integrity in the ChatGPT Era
Noted academic integrity leader Dr. Tricia Bertram Gallant penned an article recently in Liberal Education, the publication of the Association of American Colleges and Universities.
Like most things she writes, it’s worth a review.
There are a few bits I especially like, such as where she presents a typical post-AI education argument:
I’ve seen this misunderstanding play out on social media. One college faculty member asks how other instructors are limiting cheating opportunities, and someone responds, “You just need to accept that GenAI is here to stay, and you need to change the way you teach.” This is unhelpful advice.
True.
Teachers can always examine and improve how they teach. They can also, in nearly every case, do more to secure the accuracy and fairness of their assessments. As the article lays out, the answer is not either/or, it’s all of the above.
Bertram Gallant also gives practical advice for educators, including:
Reduce cheating temptations and opportunities. For assignments and unproctored tests, make it harder for students to find GenAI tools useful. Ask students to create, analyze, or evaluate, rather than simply remember or understand (see Bloom’s Taxonomy), and require students to orally explain their knowledge. Proctor at least one assessment so you can be more assured that the student has the knowledge being evaluated.
Requiring oral explanations and proctoring at least some assessments is good advice — in my view.
Queen's University Newspaper Talks With Students Who Used AI to Cheat
The student paper at Queen's University in Canada interviewed four students who it says used generative AI in violation of the school's policies or their instructors' directives. Or, as the paper put it:
each of whom used ChatGPT in ways that departed from academic integrity.
That’s well parsed.
One student explained she uses ChatGPT to do most of the writing on her assignments, though she comes up with the ideas. She also notices that some of her assignments ask her to do what the AI does, writing summary memos for example. The story related to this student continues:
Among her peers in Commerce, Amy has noticed that while not everyone is using ChatGPT, its usage has ticked upward recently. She often hears comments like “just ChatGPT it” from her peers.
I buy that.
But this, from “Amy” really spun my head:
She believes ChatGPT should play a role in the future of academics because she thinks the current honour system in many of her courses penalizes students who follow the rules.
At the same time, Amy acknowledged normalized use of ChatGPT in academics could have an unintended effect: unreasonably raising the standards for writing assignments.
If I am reading that right, this student believes cheating is common enough and unpunished enough via the “honour system” that students who do not cheat are penalized. I believe everything about that. It’s a sentiment you hear quite often. And more evidence that Honor Codes don’t work.
As much as I accept that, the next bit is stunning. Our student thinks that if too many people use generative AI it will raise the standards of writing? Raise? Really? What kind of writing is the standard now, I have to ask. I’m not trying to be a jerk. I am genuinely worried. If ChatGPT is writing better than your college graduates, I have so many more questions.
Another student:
has deferred to ChatGPT for written assignments. When it comes to getting essays for his philosophy courses submitted on time, Timothy*, ArtSci ’24, has found ChatGPT to be an essential resource.
Another said:
he must often complete weekly reports that provide updates on a semester-long project.
“It’s a lot easier to ask tools like ChatGPT to spew out some text and you might tailor it a bit [for these weekly reports],” he said.
So much for the idea that having students do a semester-long project would reduce misconduct.
The same student also shared that:
“In the fall term, in our data structures course, there was a cheating scandal involved with ChatGPT. Fortunately, I wasn’t involved, but it involved a lot of people taking code from each other but also using ChatGPT to generate code. ChatGPT outputs really similar code for most people.”
You don’t say.
The fourth student said that:
when it comes to his academics, James told The Journal he has seldom used it in an ethical manner.
When he was working on an assignment about a specific drug, James used ChatGPT to find research articles related to that drug. He proceeded to use ChatGPT to also write the actual assignment.
Oh.
That same student, James, said that:
he believes his writing skills have stagnated and even worsened due to his dependency on ChatGPT.
“On a personal level, I think [dependency on ChatGPT] can lead to regression in the sort of skills you’re meant to be developing during an undergraduate degree,” he said in an interview with The Journal.
James continued:
He pointed out how some of his professors have claimed they’re using AI detection software.
“None of the AI detection softwares are reliable and there are issues with using a third-party software that’s not approved by the University and exposing students’ copyrighted assignments to that third-party software without their consent.”
Sorry, James. No. The guy who has GPT write his assignments is worried about AI detection — on copyright grounds. Sure.
Then, for no particular reason, the paper itself adds:
Research has shown that AI detection softwares are often inconsistent and produce many false positives, meaning that human-written text is frequently flagged as AI-generated by these softwares.
Again, no. Research has not shown that.
But anyway, good article. Yes, students are using AI to cheat. It’s not really clear how much anyone cares.
Class Notes
It’s possible, as you may know from the link up top, to become a paid subscriber to The Cheat Sheet. I believe it’s $8 a month, which was the default. There’s no difference between the paid subscription and the free one; I think of paid subscriptions as generous votes of support rather than payment for an upgraded experience.
When someone joins as a paid subscriber, they can leave a note, which is a nice feature. The notes have been flattering, and I am thankful for them. Some time ago, I resolved to start sharing them because I think they’re awesome and they make me feel great.
Last week, we picked up a new paid subscriber, JS, who left this note:
"Bold work on what has surprisingly become a controversial topic in education!"
Thank you, JS. I’m honored to have your support and encouragement.