Professor: AI has "Made My Workload Skyrocket, and I've Had to Make Drastic Changes."
Plus, a professor in Canada says AI cheating is a betrayal. Plus, Grammarly buys an AI company. Plus, class notes.
Issue 375
Subscribe below to join 4,766 (+14) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
Professor Says AI Has Made Her Workload “Skyrocket”
Business Insider, which ironically made a big splash recently by laying off many of its writers in a pivot to AI, has a story this week (subscription required) about a professor who is struggling to deal with AI in her classes.
Risa Morimoto, who the article says is “a senior lecturer in economics at SOAS University of London,” has been teaching for 18 years. It’s an “as told to” story, which means the narrative is more or less in Morimoto’s voice.
Students always cheat
That’s how the article starts. But, Morimoto says, AI has made it different. She says:
I believe some of my students have been using AI to generate essay content that pulls information from the internet, instead of using material from my classes to complete their assignments.
AI is supposed to help us work efficiently, but my workload has skyrocketed because of it. I have to spend lots of time figuring out whether the work students are handing in was really written by them.
I believe she is right on both counts.
She says when she started teaching, she used exams. But students “were anxious” about exams, so she switched to using more essay writing for assessment. She says:
It worked well — until AI came along.
She continues:
Cheating used to be easier to spot. I'd maybe catch one or two students cheating by copying huge chunks of text from internet sources, leading to a plagiarism case. Even two or three years ago, detecting inappropriate AI use was easier due to signs like robotic writing styles.
Now, with more sophisticated AI technologies, it's harder to detect, and I believe the scale of cheating has increased.
Further:
I'll read 100 essays and some of them will be very similar using identical case examples, that I've never taught.
These examples are typically referenced on the internet, which makes me think the students are using an AI tool that is incorporating them. Some of the essays will cite 20 pieces of literature, but not a single one will be something from the reading list I set.
While students can use examples from internet sources in their work, I'm concerned that some students have just used AI to generate the essay content without reading or engaging with the original source.
Sounds right.
She also says:
AI tools are easy to access for students who feel pressured by the amount of work they have to do.
That’s also true, but it makes me wonder why so many teachers default to the idea that contemporary cheating is driven in any significant way by pressure and lack of time. The truth is that AI tools are just as easy to access for students who don’t want to bother with the work, for whatever rationalization they devise.
But whatever. Our professor says:
During the first lecture of my module, I'll tell students they can use AI to check grammar or summarize the literature to better understand it, but they can't use it to generate responses to their assignments.
Right away, I thought — good luck with that. Tell me where the line is between using AI to “check grammar” and using AI to “polish” writing. Tell me where the line is between using AI to generate a summary and using that summary in your answers. It seems to me to be an unenforceable, and nearly incomprehensible, distinction.
Predictably, she writes:
Over the past year, I've sat on an academic misconduct panel at the university, dealing with students who've been flagged for inappropriate AI use across departments.
I've seen students refer to these guidelines and say that they only used AI to support their learning and not to write their responses.
Of course. If you tell students they can use AI for A, but not B, they will say they only used it for A. And what then? Well, she makes my point here:
It can be hard to make decisions because you can't be 100% sure from reading the essay whether it's AI-generated or not. It's also hard to draw a line between cheating and using AI to support learning.
Hard to draw a line is an understatement. Once the camel’s nose is under the tent, as the saying goes, you’re in trouble. It’s a really difficult spot to be in, I concede.
Saying she’s going to change her assessment methods again, she writes:
I'll ask my students to choose a topic and produce a summary of what they learned in the class about it. Second, they'll create a blog, so they can translate what they've understood of the highly technical terms into a more communicable format.
My aim is to make sure the assignments are directly tied to what we've learned in class and make assessments more personal and creative.
If the goal is to address the misuse of AI in student work, I’m not the least bit sure how this will do that.
At the end of this article, the school gave a comment:
In a statement to BI, a SOAS spokesperson said students are guided to use AI in ways that "uphold academic integrity." They said the university encouraged students to pursue work that is harder for AI to replicate and have "robust mechanisms" in place for investigating AI misuse. "The use of AI is constantly evolving, and we are regularly reviewing and updating our policies to respond to these changes," the spokesperson added.
I’m glad the school has “robust mechanisms” for investigating AI misuse. It would be way worse if they didn’t.
Anyway, I feel for this educator, and the millions of others in similar situations. AI has complicated things profoundly.
Professor in Canada Says of AI Cheating, “You Feel Betrayed”
CTV has a nice story about a professor in Canada, Ed McHugh. Professor McHugh talks about cheating with AI. He calls it a betrayal.
The story starts:
As artificial intelligence rises in popularity, one Halifax professor says he’s noticed an unsettling trend over the last two years – students are using it to cheat on assignments.
“Having one student doing it is serious, but it’s increasing,” said Ed McHugh, a business and marketing professor at Dalhousie, Saint Mary’s and Mount Saint Vincent universities.
It continues:
McHugh recently posted about the issue on social media after he noticed a number of students cheated on an assignment. He could tell they had used AI to do the work for them.
Here’s his post on Facebook:
Welcome to today's world of education. I am currently teaching two post secondary marketing courses. (The institution doesn't matter because it is ubiquitous.) Here was the 'homework' assignment: "Watch this video about a CFO (his name is Adam Smith) and his reaction to Chic-Fil-A. Did he deserve the treatment he deserved?" Obviously the assignment focuses on our world of social media and its ramifications. A significant number of students chose not to watch the 7- 8 minute video and submitted a complete history lesson on Adam Smith who was an 18th century Scottish economist ("the invisible hand"). Obviously, it had nothing to do with the assignment. They cheated using AI. I have now called them on it. It is a bit of a battle currently in our classrooms. So disappointing. I am not against AI assisting all of us, but plagiarism with no credits is a serious matter.
I am the least surprised person in the world.
The Professor told CTV:
“They didn’t bother to watch the video. They just saw the words Adam Smith, did AI on Adam Smith, and gave me back a paragraph about Adam Smith the economist, which had nothing to do with the assignment.”
Shocker.
Continuing:
“I feel disappointed any time a student cheats. As an educator you feel some sense of sadness and anger because you feel betrayed and you feel that they think you mustn’t be that bright to catch some of this stuff,” said McHugh.
I love this because here’s a professor clearly — in my view — putting the betrayal at the feet of the cheating, an outcome of student choices.
I like it because it’s a push against the cataclysmically absurd idea that, in some odd universe, checking student work for AI is the betrayal of student trust. People actually believe that.
Instead, I think this professor is right. If distrust is present in the teacher-student dynamic, it’s because the student shattered it by breaking the rules, by not respecting the teacher, the school, their classmates, or the learning process. Locking your doors may evidence mistrust. But certainly far less than stealing does.
I am off topic. The story continues:
However, McHugh says there are ways for educators to detect the use of this software, and some uses of AI are obvious.
Another thing he’s noticed as a professor is the decline of grammar skills over the last 30 years, which he says can actually be useful when trying to determine if someone used AI. If a student who struggles with grammar hands in a perfectly-worded assignment, McHugh says they likely used AI.
“When you have an email that is not written very well grammar-wise, but the attached assignment is in perfect English,” adds McHugh.
McHugh also says the AI software he’s noticed students using follows a certain format in its answers, so when someone copies the responses from it, it’s noticeable.
More:
But there are ethical ways to use the software in schools. Some students like Juliette Savard use AI to enhance their writing and fact check.
“Enhance their writing.” Let me ask, if AI enhances student writing, what is being graded?
Professor McHugh comments on AI policies:
“So far, all of the policies I have read are not strong. The policies need to be stronger and a lot of them are discouraging the use of other softwares to detect whether AI has been used because those softwares aren’t perfect either,” says McHugh.
He’s right. Most are not strong. And most do discourage the use of AI detection systems, leaving professors without backup or warnings.
McHugh tells his students:
“On your first assignment, if you are caught and it’s proven, you get zero for the assignment,” explains McHugh. “If you’re caught again in the course, you get zero for the course.”
Grammarly Buys AI Company
Grammarly, the bane of teachers and AI detectors everywhere, has reportedly struck a deal to buy an AI e-mail company, Superhuman.
It’s in The Cheat Sheet because Grammarly is not your innocent, friendly little grammar assistant anymore. It’s now a full-on AI workforce company, with what already seems to be a core business proposition to “help” people write — that is to say, do the writing for them. In business, maybe few people care. In education, that’s a very large problem. And honestly, it has been for some time.
I also note that in the coverage linked above, Grammarly is described as:
AI writing assistant Grammarly
And that with this deal:
Grammarly envisions the integration of its AI writing software into Superhuman's AI-powered email efficiency tool
AI writing assistant. AI writing software. That’s what Grammarly is. Maybe now the company will stop pretending it’s an education provider.
Class Notes:
Summer Break
Reminder: I am planning to take some time off from The Cheat Sheet, July 15 to August 15, and am asking for submissions — from virtually anyone, on virtually anything related to integrity — for the Issues in that time.
If you have a product you’d like to share, an article you’ve written, research you’ve published, or just comments to circulate about any topic we try to cover, please send it in.
The Cheat Sheet has always been open to submissions. Now, we’re more or less counting on a few.
To send something or to inquire, just reply to this e-mail.
Verify My Writing
In the last Issue, I shared a few details about my new project — Verify My Writing. But I buried it at the end of a long issue. So, I am sharing it again.
If you care about authentic, human writing as a craft, an exercise in learning, or an essential element of human communication — if you agree that writers should be recognized and rewarded for their effort — you can help us build it by visiting our Kickstarter page and making a pledge of just $50.
Our Kickstarter isn’t about raising money, it’s about showing support for the idea. The number of pledges matters more than the amount we raise. Plus, backers on Kickstarter get a cool digital badge showing that they support actual, human writing.
I deeply value and respect the art of writing. And I appreciate you giving this a look.