343: Two Pieces in Tampa Bay Times on AI and Integrity
Plus, deadline looms for ICAI Conference. Plus, a correction.
Issue 343
Subscribe below to join 4,344 (+7) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
If you enjoy “The Cheat Sheet,” please consider joining the 16 amazing people who are chipping in a few bucks via Patreon. Or joining the 47 outstanding citizens who are now paid subscribers. Paid subscriptions start at $8 a month or $80 a year, and corporate or institutional subscriptions are $240 a year. Thank you!
Dueling Opinion Pieces at Tampa Bay Times
A few weeks ago, the Tampa Bay Times (formerly The St. Petersburg Times) ran a pair of essays on AI and academic misconduct. Well, one ran first, and the second was a response to it.
The first opinion piece (subscription required) is from an alternative universe of AI hype and marketing, which requires discounting or dismissing academic integrity. It should, in my view, never have been published. The second aims to correct the record.
An aside, with context
But before I get to them, a moment of personal privilege. I grew up in the Tampa Bay area and came to understand and appreciate the value of our local paper, The St. Petersburg Times. It was regularly among the best papers in the nation. It won something like 14 Pulitzer Prizes. And it hurts to watch good things — the things you looked up to in earlier years — die. Or fade away.
In 2022, the paper was one of several that sold its name and masthead to essay mills, letting them shamelessly advertise their services on the pages of a Pulitzer Prize winning paper (see Issue 144).
At the time, I e-mailed the paper to let them know that no one was guarding the door on their “sponsored posts,” and that they were allowing people to sell illicit, inappropriate, possibly illegal services on their pages. For this article, I checked again and the essay mill ad is still up there. Not only did they not care enough to notice it, they obviously don’t care enough to take it down either.
That’s a choice. Integrity had a price, and the essay mill companies paid it.
The first one
With that backdrop, we got the aforementioned first opinion piece, published in the Tampa Bay Times on February 7. It’s from Bruce Fraser of Indian River State College and Sid Dobrin of the University of Florida. The piece sports the headline:
Is student use of generative AI cheating? The answer is complicated
Well, sure. But also — no. It’s not complicated at all. Those who are enthralled with AI chatbots want it to be complicated, but it’s not.
The article starts with:
The rise of generative AI has sparked widespread ethical concerns among educators, with many fearing a surge in student cheating.
From the jump, we get one of my all-time pet peeves: centering this topic on concerns and fears — not on ethics or integrity. If you can keep the conversation about fears, you can avoid talking about the problem. It’s not that I’m stealing from you; let’s discuss your fears about people stealing from you. It’s a major tell that we’re about to take a journey of very convenient rhetoric.
The authors say that banning AI use in classrooms and using AI detection technology:
is reactionary and overlooks a more pressing question: Should our traditional definition of academic integrity — the idea of fair play in the context of learning — still hold sway in the era of GenAI? Learning to work with the technology, rather than against it, is a more sustainable path forward.
Let me answer whether “the idea of fair play in the context of learning” should “still hold sway in the era of GenAI.”
Yes. It should.
Frankly, I am in stunned amazement that anyone would ask whether “fair play” should “still hold sway.” Are they advocating unfair play? No rules at all? Are they serious?
I cannot believe I have to type this, but fraud is bad. Stealing is bad. Cheating is bad.
The article goes on:
Evidence suggests that plagiarism hasn’t significantly increased since GenAI tools became popular. Research from Stanford University indicates that cheating rates have actually remained stable, with the primary motivators for cheating being familiar issues like poor time management and overwhelming workloads, not access to AI technologies.
I left those links in should anyone want to verify what I’m about to write.
For one, plagiarism and AI use are different, which I assume the authors know. In fact, the point is probably inverted from what the authors intended. If rates of plagiarism are flat, or down, it may mean that more students are using AI to generate text than using old cut-and-paste plagiarism. But that’s really minor.
The major thing — and I am going to have to put this in bold — the first link goes to a press release from Turnitin, which does not even use the word plagiarism one time. It is not about plagiarism at all. And it certainly does not say that plagiarism has not increased.
In fact, the April 2024 press release says that students are using generative AI in their academic work at alarmingly high rates (see Issue 287):
Over six million (approximately three percent of over 200 million) have at least 80 percent AI writing present
How someone gets an opinion piece published in a supposedly credible newspaper that says “Evidence suggests that plagiarism hasn’t significantly increased,” while linking to a source that 100% does not say that, is a mystery. I guess the editors at the Tampa Bay Times had the day off.
I did not check the other links in the story. I did not have the patience to look, or the space to correct.
As for the other link to the “research from Stanford University,” we covered that (see Issue 337 and Issue 261). Let’s just say that I, and others, do not think this study shows what people say it does.
Also from the Fraser/Dobrin piece:
Traditionally, education has been viewed as a unidirectional process, whereby knowledge is accumulated through individual effort. This model, rooted in the era of print, assumes that bypassing specific steps, like facing a blank page or drafting an outline, undermines authentic learning.
I’m not sorry — that’s unhinged.
We’re taking exception to the ‘traditional’ idea that knowledge is accumulated through individual effort? What? How do they propose anyone accumulate knowledge without effort? And we’re actually saying that “bypassing specific steps” may not undermine “authentic learning”? You mean bypassing a specific step such as doing the actual work? Call me silly, but I think that doing the work and individual effort are essential to learning. And I cannot believe this is a serious argument — again.
We should be heartened, I imagine, that the authors concede that generative AI tools have not rendered writing “obsolete.” They write:
Instead, it shifts the emphasis from initial composition to critical analysis, revision, and “prompt engineering” (the ability to effectively instruct AI systems).
For one, I cannot imagine how I am supposed to fix a car if I do not know how it’s built, what parts it has, and what they do. Or how I am supposed to ‘engineer’ a good one if I’ve never taken one apart or put one together.
Fortunately, the authors have suggestions:
Rather than banning GenAI outright and spending our time policing student behavior (a response that was more understandable two years ago), we should explore how the technology can enhance learning and advance human knowledge.
Don’t ban. Don’t police. Explore.
Congratulations — after four years and a mountain of student debt, you have achieved a badge in AI-screw-around-ery. That will come in handy, I am sure.
Just two more, I promise. One:
Transitioning to a new instructional paradigm won’t be easy. It requires rethinking assessment methods, updating academic integrity policies (not to mention the very notion of what constitutes academic integrity in light of these tools), and investing in faculty training. But clinging to outdated notions of learning while technology advances poses a greater risk.
Oh boy — rethinking assessments again. And somehow rethinking the very notion of academic integrity.
To punctuate the absurdity, let me rephrase — this new paradigm requires rethinking the very notion of theft, that the very idea of claiming for yourself what you have not earned is outdated. If someone wrote that, you would not take them seriously. And yet, Fraser and Dobrin did.
Finally:
So, is student use of AI cheating? Indeed, if that use violates an explicit policy prohibiting it. Our point is that, as educators, we must ask ourselves whether these policies actually serve our students, or whether they reflect an increasingly obsolete educational model.
Violations of explicit policy — set by subject experts and professional educators, mind you — may reflect “an increasingly obsolete educational model.” Got it.
I am shocked someone wrote that, at a loss to explain why some people actually believe it, and heartbroken that the Tampa Bay Times printed it.
Let there be hope, Act II
Fortunately, on February 12, the Tampa Bay Times published another piece, a rare direct rebuttal, by Caroline Hovanec of the University of Tampa. Unlike the piece to which it responds, it has sanity.
She writes, for example:
The boosters of generative AI for learning have a familiar rhetorical playbook, and their signature moves are all over [the previous] column. First, we are reassured that generative AI is just the latest in a string of educational technologies, from “the printing press, word processors, hand calculators, (to) the internet” — the implication being that if you are concerned about its impacts, your concerns are just as irrational and reactionary as if you were shaking your fist at a cloud and railing against the typewriter.
At the same time, we are asked to believe that generative AI is so revolutionary that it has rendered “obsolete” the teaching and learning practices refined over generations. Tasks such as “facing a blank page or drafting an outline” — which old-fashioned instructors like me would call “thinking” — are now to be replaced.
Preach.
Of the counter ideology, she continues:
Most importantly, students will now study “prompt engineering,” or writing for a bot, if their teachers can just stop “clinging to outdated notions” like, say, writing to communicate with other human beings.
I know — that is not about academic integrity. But it is about learning, which runs beneath why some of us care about integrity in the first place. And Hovanec’s piece is a PhD-level spanking.
She continues:
Instead, like visitors to the Emerald City, we are asked to don our tinted spectacles, enter a world of “responsible GenAI use,” “enhance(d) learning,” and “thoughtful AI integration across curriculum,” and not ask too many questions about what these phrases really mean or what is behind the curtain.
And the one question you must never, ever ask an AI-in-education booster: When we encourage young people to consult a chatbot for ideas, rather than explore and develop their own minds, when we tell them that editing an AI-generated text is just as good as discovering, through writing their own thoughts, when we imply that their words probably aren’t good enough for a human reader, so instead they can write to a machine that will repackage their words into frictionless, anodyne and utterly lifeless prose, are we cheating them of something precious?
Slow. Clap.
Here, if I squint, I can loop this back to the question the first essay asked but the second did not really answer — whether, or to what degree, using AI in academic settings is cheating.
Hovanec goes further than I do, which is not to say she is off.
I say using AI is cheating when it’s used to skip the work and take credit anyway — work that I believe is necessary for real learning and growth. I stand by my analogy that AI in classrooms is like steroids on an athletic field (see Issue 341). And by the way, I see now that I may have stolen the steroid analogy outright — see Issue 278. Either way, it fits.
Hovanec says that using AI to learn writing is cheating, and not just when it’s used to misrepresent. She says using AI is cheating more broadly — cheating students of the ability to learn writing, and of all the very important things to which that skill connects.
I think we’re both right.
Last Call for ICAI Registration
The International Center for Academic Integrity (ICAI) has issued a “last call” to register for their annual conference — next week, March 6-8 in Chicago.
Department of Corrections Department
In the last Issue, in the subheadline, I wrote about “cheating nuggets from press coverage in Mayland.” Which may have been a typo. It was. It was a typo. I meant Maryland, obviously. Sorry.