University Changes Policy After Catching Students Cheating with ChatGPT
Plus, Today Show covers GPT and cheating. Plus, Doctor GPT will see you now. Plus, Quick Bites.
Issue 179
If you enjoy “The Cheat Sheet,” please consider joining the 11 amazing people who are chipping in a few bucks a month via Patreon. And to those who are, thank you!
Yeshiva University Catches Students Using ChatGPT on Final, Updates Integrity Policy
According to its student newspaper, Yeshiva University, a Modern Orthodox Jewish university in New York City, caught students - plural - using ChatGPT on a final exam last month and, in response, has updated its academic integrity policy.
First, the article itself is great. Hard to believe it was written by a student. To which I’ll add: thank goodness for student newspapers, which continue to be our primary source of information about academic integrity.
As reported, a faculty member at Yeshiva used AI-detection software and caught students using ChatGPT on a take-home final exam that counted for 30% of the course grade. The school would not say exactly how many students were caught.
As an aside - a take-home test is never a good idea, cheating-wise.
There are a few significant things to share.
One and two are the obvious ones: students are using ChatGPT to cheat. Like, already. And teachers and schools are already using software to spot it. We knew about the first part (see Issue 176), but the second part is new. I asked the student journalist if they could tell me which software the school used. I’ll let you know if I find out.
After catching the students, the school changed its academic integrity policy. As reported:
In response to the cheating, Yeshiva University updated its undergraduate academic integrity policy to state that intentional misrepresentation is characterized by using “someone/something else’s language,” updating it from “someone else’s language.” The updated policy also added the word “generator” to its list of examples of intentional misrepresentation.
I left in the link, in case you want to review it.
Every school should do this immediately. Don’t wait to catch it. If a school is going to consider AI-generated content a violation of academic standards, codify it now. Why leave the door open to “it’s not really banned, so…”?
To that end, school officials said:
“The new language makes clear that the use of AI platforms, without acknowledging the source of this content, is a violation of our Academic Integrity Standards”
Good. Line drawn. Clearly communicated.
It’s also noteworthy that Yeshiva seems to actually mean what it says when it comes to integrity. From the reporting, quoting a school dean, Noam Wasserman:
“Bad decisions in recent semesters,” Wasserman wrote, “have led the YU-wide Academic Integrity Committee to expel [emphasis Wasserman’s] multiple students from the university, in addition to students whose violations resulted in suspensions, course failings, and other repercussions that can remain for a lifetime.”
And - almost finally - I simply love this quote from Dean Wasserman:
if you get a job based on fraudulent grades, every dollar you make is geneiva [theft]. Think about that: Every paycheck is filled with aveiros [sins].
If you get a job with fraudulent grades, every dollar you make is theft. Damn, Dean.
And finally, the article ends by noting that the professor who caught the cheating on the take-home exam:
told students in her class that she no longer plans to administer take-home exams.
Today Show Segment on ChatGPT, Cheating
In 2021, The Today Show was one of many national news outlets to cover the explosion in cheating that accompanied online learning (see Issue 15). Now, Today is out with a new segment on - what else? - ChatGPT and cheating.
The video segment is good, but separate from the written article.
The written article has the familiar set-up:
Some educators worry that students will use ChatGPT to get away with cheating more easily — especially when it comes to the five-paragraph essays assigned in middle and high school and the formulaic papers assigned in college courses. Compared with traditional cheating in which information is plagiarized by being copied directly or pasted together from other work, ChatGPT pulls content from all corners of the internet to form brand new answers that aren't derived from one specific source, or even cited.
True. Though, as we’ve seen, it’s not “will use,” it’s are using. But still, the most newsworthy part of the story is this quote, from OpenAI, the maker of ChatGPT:
We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system
It’s pretty clear that by “developing mitigations” they mean watermarking - putting cues or tells in ChatGPT responses that other systems, perhaps even their own, could easily spot. There has been a rumor since ChatGPT arrived that, to prevent fraud and misuse, OpenAI would put watermarks in the answers. It’s not clear they will, though it is clear that, as of now, they have not.
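To make the idea concrete, here is a minimal sketch of how one watermarking scheme could work - a “green list” approach along the lines academic researchers have proposed, not anything OpenAI has confirmed using. Everything here (the tiny vocabulary, the green fraction, the scoring) is a toy assumption for illustration:

```python
# Toy sketch of statistical text watermarking - a hypothetical "green list"
# scheme, NOT OpenAI's actual (unannounced) method. The idea: a generator
# seeds a PRNG with the previous token to pick a "green" subset of the
# vocabulary and quietly favors those tokens. A detector replays the seeding
# and checks whether green tokens appear more often than chance allows.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]
GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked green each step


def green_list(prev_token):
    """Deterministically derive the green tokens allowed after prev_token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))


def green_token_zscore(tokens):
    """z-score of the green-token count vs. the unwatermarked expectation."""
    n = len(tokens) - 1  # number of (previous token, token) pairs scored
    if n < 1:
        return 0.0
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / variance ** 0.5


# A large positive z-score (say, above 4) suggests watermarked output;
# ordinary human text should hover near zero.
print(f"z = {green_token_zscore('the cat sat on the mat'.split()):.2f}")
```

The appeal of a scheme like this is that detection needs only the text and the seeding rule, not access to the model itself - though, in practice, paraphrasing the output can wash the signal out.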
Dr. GPT Will See You Now
A research paper out this week has found that the AI text generator ChatGPT can answer questions from the United States Medical Licensing Examination (USMLE) with accuracy “comfortably within the passing range.”
In other words, ChatGPT can pass medical exams.
For context, the paper says the USMLE is a series of three tests taken during and after medical school, and that it:
is a high-stakes, comprehensive three-step standardized testing program covering all topics in physicians’ fund of knowledge, spanning basic science, clinical reasoning, medical management, and bioethics.
The findings are that:
ChatGPT performed at >50% accuracy across all examinations, exceeding 60% in most analyses. The USMLE pass threshold, while varying by year, is approximately 60%. Therefore, ChatGPT is now comfortably within the passing range
So that happened.
With AI being able to pass medical exams - and with this being The Cheat Sheet and all - the mind races to exam security and whether using ChatGPT to get medical licenses is a thing we need to worry about now. And it turns out that, based on what’s published about these exams, security around them is pretty tight.
For example, tests are given only at proctored test centers where test-takers provide ID, get scanned or wanded by metal detectors and have their fingerprints scanned. Eyeglasses and hair accessories are subject to scrutiny and test-takers are given laminated papers and markers for notes and calculations. Those are returned and erased. And more. Pretty tight.
That may be why the research paper linked above doesn’t mention ChatGPT as a cheating threat, but instead considers it a probable teaching support.
Meanwhile, as I’ve mentioned before, some higher education institutions seem eager to move away from even the most basic exam security provisions, leaving their students fantastically unprepared for the career exams and real security provisions that await them after graduation.
Quick Bites
This TV news segment from Jamaica covers an alleged cheating incident and investigation at one of the country’s academies. The reporting says that, after the school rose in test score rankings, news started to circulate that a teacher had helped students cheat.
News from Zimbabwe that a student initially sentenced to “seven months in prison” for obtaining a test early has instead been sentenced to 105 hours of community service. It was a first offense.
A student at Princeton says he’s developed code that can detect ChatGPT. He says he did it over the weekend. I don’t know, maybe he did. If so, he’s not the first (see Issue 175) and won’t be the last - AI writing detection tools are coming or already here.
In Kuwait, the education ministry has barred 834 12th-grade students from their exams for cheating. That’s 834 in two days. Headphones with two-way communication, the government said, were the main method of attempted cheating.