17% of America's Brightest Science Scholars Admit to Cheating, Though It's Probably Closer to One in Three
Plus, a must-read from a Tulane University doctoral student. And KQED offers helpful reminders about some of the drawbacks of ChatGPT.
Issue 212
To join the 3,297 smart people who subscribe to “The Cheat Sheet,” enter your e-mail address below. New Issues every Tuesday and Thursday. It’s free:
If you enjoy “The Cheat Sheet,” please consider joining the 13 amazing people who are chipping in a few bucks a month via Patreon. Or joining the 13 outstanding citizens who are now paid subscribers. Thank you!
Awarded and Supported Science Grad Students Self-Report High Cheating Rates
The National Science Foundation is a publicly funded federal agency with a mission to further “fundamental research and education” in the non-medical sciences. Among its programs is the Graduate Research Fellowship Program, which funds graduate study and, in addition to the prestige, includes a financial package of more than $100,000 over three years.
An attentive reader spotted new reporting and survey research that should stun you and surprise no one - of the 244 fellowship recipients surveyed, a significant number admit to cheating.
Reported by the American Physical Society, with the research published in the journal Nature, the high-level numbers are:
The self-reported rate of academic cheating was 16.7% and of research misconduct was 3.7%. Thirty-one percent of fellows reported direct knowledge of graduate peers cheating, and 11.9% had knowledge of research misconduct by colleagues. Only 30.7% said they would report suspected misconduct.
Here’s where I remind everyone that self-reported rates of misconduct are known to be undercounts. As a rule of thumb, I double them - and doubling the 16.7% self-reported cheating rate gives roughly 33%, pretty close to the 31% who said they had “direct knowledge” of cheating.
This type of “I know people who do it” question has become a more reliable gauge of misconduct than simple self-reporting. As such, the actual rate of cheating among this prestigious group is likely around 30%, or one in three.
But even if it’s only 17%, this is a problem. These are presumably the best and brightest young science researchers in the country, publicly funded to advance science and engineering. The mission of the NSF Graduate Research Fellowship Program, according to its website, is:
to ensure the quality, vitality, and diversity of the scientific and engineering workforce of the United States.
Add to that what’s already widely known: cheating in engineering and physics courses is prolific across all academic levels. So, yeah - the quality of America’s scientific and engineering workforce.
And what will be done about it? The same thing that’s always done about cheating - absolutely nothing.
Must-Read: ChatGPT is a Plagiarism Machine
This is a must-read opinion piece in The Chronicle of Higher Ed from Joseph M. Keegin, a doctoral student at Tulane University.
It’s in The Chronicle, which continues to do great and important work in opening its pages to the existential dangers of academic misconduct. It remains a stark and beneficial contrast to other publications that lay claim to higher education coverage.
And the piece is fantastic.
Let’s start with the headlines:
ChatGPT is a Plagiarism Machine.
So, Why do Administrators Have Their Heads in the Sand?
It is. And I have no idea. But they do.
I wish I could share all of it here, it’s that good. Instead, I’ll do my best to cherry-pick a few of the best parts and pass those along. Seriously though, go read it.
Among many fine points, and after recounting the clear and present danger that AI-created text presents to integrity and to teaching, Keegin says:
Faced with these challenges and frustrations, however, college administrations have largely remained silent on the issue, leaving teaching staff to fend for themselves.
“The silence is howling,” he continues, adding:
silence yet reigns on a concrete issue of great importance to the day-to-day functioning of educational institutions and the work of students and faculty alike.
Then it gets really good, as Keegin writes:
On many campuses, high-course-load contingent faculty and graduate students bear much of the responsibility for the kinds of large-enrollment, introductory-level, general-education courses where cheating is rampant. According to the International Center for Academic Integrity, more than 60 percent of college students admit to participating in some kind of cheating. Add to this already-dismal situation the most easily accessible and lowest-cost cheating technology ever devised and watch the entire system of college education strain at its rivets. How can large or even mid-sized colleges withstand the flood of nonsense quasi-plagiarism when academic-integrity first responders are so overburdened and undercompensated?
Preach, Mr. Keegin, preach.
And he does:
A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?
I disagree with not a single syllable.
Keegin offers suggestions for administrators, which are pretty solid and include:
Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on.
He adds:
meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.
Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows.
I just don’t see any reasonable or meaningful way to have even a dialogue about the new realities of AI text creation without this. It’s not sufficient, but it is required.
Feeling as though I have already cribbed too much, I’ll note only that the final stanza of Keegin’s piece is - much like the passages above - worthy of special attention.
My Two Points
Not wanting to detract from the piece, I’m adding two related points that I think need airing.
As Keegin points out, AI-text tools can be teaching tools. No one is arguing otherwise. They are also the largest cheating tools ever created. Total bans of ChatGPT and its cousins are foolish. Simply not addressing their capacity for, and already wide use in, cheating is more foolish still. Schools are going to have to equip and educate their communities about appropriate uses, detection capabilities and reporting requirements, as well as draw and enforce hard lines on misuse.
Which brings us to Keegin’s primary point: so far, in quintessential American academic fashion, schools have done zilch to even acknowledge, let alone address, the issue.
By and large, administrative, oversight, and regulatory authorities have practiced willful ignorance of American academic misconduct for years. The high rates of misconduct in online programs have been skipped over. The multi-fold escalation in cheating during the pandemic, ignored. Chegg and Course Hero, nothing. Now, with the specter of the world’s largest free, universally available cheating tool already causing us to “watch the entire system of college education strain at its rivets,” our education leaders are nothing if not consistent.
KQED on - What Else? - Teaching with ChatGPT
Okay, so this article from KQED is from February. I said I was behind.
I’m sharing it in large part because it’s from KQED, which had the single worst - and blatantly wrong - coverage of academic integrity I’ve seen in a long, long time (see Issue 138). It was so bad, I named it the worst piece of academic integrity reporting in 2022 (see Issue NY 22/23).
In contrast, this piece from February is better. Though it’s pretty hard to fathom anything being worse.
Either way, it correctly starts with some basic and accurate premises:
In his university teaching days, Mark Schneider watched as his students’ research sources moved from the library to Wikipedia to Google. With greater access to online information, cheating and plagiarism became easier. So Schneider, who taught at State University of New York, Stony Brook for 30 years, crafted essay prompts in ways that he hoped would deter copy-paste responses. Even then, he once received a student essay with a bill from a paper-writing company stapled to the back.
Teachers probably spend more time than they’d like trying to thwart students who are able to cheat in creative ways. And many educators are alarmed that ChatGPT, a new and widely available artificial intelligence (AI) model developed by OpenAI, offers yet another way for students to sidestep assignments.
Yup - students cheat. Only the tactics have evolved. And ChatGPT is a new and powerful cheating tactic. Educators are right to be “alarmed.”
Later, Schneider makes the flat and inane comparison of ChatGPT to the calculator, but we can overlook that. Schneider and KQED do correctly remind everyone that:
ChatGPT produces essays that are grammatically correct and free of spelling errors in a matter of seconds; however, its information isn’t always factual. ChatGPT provides answers that draw from webpages that may be biased, outdated or incorrect. Schneider described ChatGPT’s output as “semi reliable.”
They also sound a helpful note when they remind readers that:
when students use ChatGPT they may be putting their data at risk.
According to Open AI’s privacy policy, inputs – including ones with personal information, such as names, addresses, phone numbers or other sensitive content – may be reviewed and shared with third parties.
Further:
Schneider acknowledged that if ChatGPT will be used to support teaching and learning, privacy is a major concern
The piece is largely benign and marginally helpful in providing context for if, when, and how generative AI text bots can be used in teaching and learning.