Summer Survey: 90% of Students Say Their Peers Cheat on Exams
Plus, company launches AI text detection tool to stop "Aigiarism." Plus, maker of GRE, TOEFL to invest "tens of millions" in exam security. Plus, a rant on a recent NYT column, and unicorns.
Issue 175
If you enjoy “The Cheat Sheet,” please consider joining the nine amazing people who are chipping in a few bucks a month via Patreon. And thank you to those who are.
Summer Turnitin Survey - 90% of Students Report Exam Cheating
I admit, I missed this when it came out over the summer, but around June or July, Turnitin released the results of a survey of 850 students and 200 faculty on academic integrity.
It says that:
90% of students report that their peers cheat on exams at least some of the time
Yup. On exams.
Students rated group projects as the least effective form of assessment. And on multiple-choice exams:
60% of students say their peers cheat extremely frequently or somewhat frequently on multiple choice exams
59% of faculty say students cheat extremely frequently or somewhat frequently on multiple choice exams
Further, I particularly liked this nugget from the survey - that more than two-thirds (67%) of students agreed or strongly agreed that:
The penalties for academic dishonesty at my institution impact the choices I make
If you assume that some students simply will not cheat - students in the survey estimated this non-cheating cohort at 10% - and that therefore the penalties are irrelevant to them, a result of two-thirds saying that penalties impact their choices seems worth underlining. It also reflects considerable research regarding misconduct decision making.
Of course, for penalties to have impact, they have to be enforced. Simply having them is pointless. But that’s a different point.
There are many data points to unpack in the Turnitin survey and I won’t get to them all. But here are two seemingly related, confusing, and perhaps contradictory points.
One, when asked, “Do you agree or disagree with this statement: ‘Students at my school follow the academic integrity policy’?”
67% of students agreed
81% of teachers agreed
The 67% number seems at odds with the initial finding that 90% of students said their peers cheated. Unless, of course, they don’t view the cheating as related to or in violation of the integrity policy. But the 14-point gap between what teachers and students think - 81% say integrity policies are followed versus 67% - feels noteworthy.
It’s all pretty confusing when the survey also found that, by a slim margin, it’s teachers who believe students cheat more often. When asked, “How frequently do you believe any of your peers cheat on exams?”
26% of students said “often” or “very often”
29% of instructors said “often” or “very often”
That’s odd to me.
But if nearly 30% of teachers believe cheating is going on often or very often, I have no idea why this isn’t a crisis requiring immediate action. I’m more inclined to believe the preceding finding - that 81% of faculty think students are following the school’s academic integrity policies. That would explain the lack of urgency, at least to me.
Anyway, give the survey a look. It’s always good to have more data on what’s going on.
Company Launches AI-generated Text Detection
Seeing the growing number of companies claiming they can detect text written by AI such as ChatGPT (see Issue 174), a company called Crossplag wrote in saying they can as well.
Their press release says:
The rise of AI-generated content poses a real threat to academic integrity, an area where Crossplag has passionately worked for more than six years. Besides offering translation plagiarism checking, Crossplag felt the need for a tool that allows educators to know whether the content is genuine or whether AI produced it.
Aigiarism – a word derived from a combination of “AI” and “Plagiarism” – is starting to become a standard to which most people are not paying attention, so a tool that offers a counter-measure for it becomes a necessity – hence, the AI Content Detector.
I love the word “Aigiarism.”
There’s a link to their tool in the release, if you’re interested. The company has also written a blog post and made a video demo.
By my count, that’s at least four companies who now say they can detect ChatGPT text or will be able to soon. That’s good.
Maker of GRE, TOEFL to Invest “Tens of Millions” in Exam Security
This one may have flown under your radar but the Associate Vice-President, Global Higher Education at ETS - the maker of the GRE and TOEFL and other exams - gave an interview in which he said the company was making major investments in exam security.
It’s likely no coincidence that the interview was given in India, which has a pervasive, organized crime issue with academic fraud (see Issue 137 and many others). Citing the “promotion of cheating” in places in India, the ETS spokesperson said:
So what ETS did was, it invested in people, because we wanted more experts who can securely administer the tests. And then, we hired a lot of security professionals. We also have been investing more heavily in technology controls to mitigate the risk of cheating. We brought in Artificial Intelligence, environment scans, face detection, ID verifications, biometrics, and so on. We have invested tens of millions of dollars in security, not just in technology but also in people.
AI, environment scans, face detection, ID checks, biometrics and trained human proctors, all to the tune of “tens of millions of dollars,” to combat fraud.
About which I say again - test security is the growing global reality. Schools or faculty or others who flatly oppose test security measures such as these are living in Fantasyland.
On Plato, ChatGPT, the NYT and Pretty Unicorns
Speaking of Fantasyland, an essay landed in the New York Times last week on education, ChatGPT and, more or less, cheating. It’s by opinion columnist Zeynep Tufekci who, in addition to writing for the NYT, must live on a unicorn ranch and spend her off hours with leprechauns.
Don’t get me wrong, it’s an interesting read. Tufekci begins with Plato and his resistance to learning tools such as the alphabet. And she correctly pinpoints that ChatGPT just makes stuff up - “a high-quality intellectual snow job,” she calls it.
She then shares the virtues of the “flipped classroom” in which assigned, graded work is done in class and the background material - hearing lectures, the reading, the practice - is done outside it. And how ChatGPT could be part of that, mistakes and all.
That’s fine. And she continues about the necessity of refining arguments and for receiving quality teacher feedback. Good. Sure. Plato would, I have no doubt, agree.
But then, and for no discernible reason, she goes all unicorn about the realities of academic assessments:
Some school officials may treat this as a problem of merely plagiarism detection and expand the use of draconian surveillance systems. During the pandemic, many students were forced to take tests or write essays under the gaze of an automated eye-tracking system or on a locked-down computer to prevent cheating.
In a fruitless arms race against conversational A.I., automated plagiarism software may become supercharged, making school more punitive for monitored students. Worse, such systems will inevitably produce some false accusations, which damage trust and may even stymie the prospects of promising students.
Educational approaches that treat students like enemies may teach students to hate or subvert the controls. That’s not a recipe for human betterment.
What? Where to begin?
The idea that we can or should toss out remote security provisions and do all graded assessment in a classroom under the watchful eye of teachers is pure buffoonery. Sounds nice. Not possible. Go ahead, try that in an intro to chemistry class with 700 students, I dare you. Or try it in the heavily online schools that dominate today’s learning landscape, where there literally is no classroom to flip. What then?
Yes, some students had to take exams with remote proctors and lock-down browsers because - and hear me out on this - cheating was rampant. It still is. But her argument is, I guess, that it’s better to close our eyes and have the cheating. Cool.
And false accusations? I say again - what? She clearly has no idea how these tools work, especially the plagiarism diagnostic ones. And, to point out the absurd and obvious, if the rules and tools of integrity enforcement just “teach students to hate or subvert the controls,” we have a really serious problem. If the motivation to cheat is so great that efforts to stop cheating only motivate subversion - oh my goodness.
I am left to assume that her “recipe for human betterment” is to turn off the security - because that always works. Turn off the convenience store security cameras, take the bag scanners out of airports. Let’s take down the bulletproof shields in banks and remove the Plexiglass in taxicabs. Abolish the SEC. We would not want to treat people like the enemy when all we’re really teaching people is to hate or subvert the controls.
Seriously, I am sure her unicorns are lovely.
Normally, I would not care so much that a New York Times columnist lives in utopia - using ChatGPT to advocate for Plato-like teaching in a cheating-heavy reality. But it set me off when, just a few paragraphs after blasting test security and fraud detection tools, she wrote:
Already, Stack Overflow, a widely used website where programmers ask one another coding-related questions, banned ChatGPT answers because too many of them were hard-to-spot nonsense.
Really?
Programmers banned ChatGPT you say?
That’s interesting. Teachers - don’t do that. Change how you teach. Programmers - go right ahead and ban that stuff, it’s nonsense.
Sorry for the rant. This one annoyed me.