(387) Two Submissions: "Agnostic" on Integrity Research and Real Feelings on News Coverage
Issue 378
Subscribe below to join 4,766 (+6) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, although patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
Below is a submitted opinion piece from Joel Heng Hartse, a senior lecturer at Simon Fraser University in British Columbia. It is unedited, and I thank him for sending it in.
Why I take an “agnostic” approach to academic integrity research
I’ve always been a very scrupulous person. I find even white lies in everyday life to be very difficult, and I’ve always thought of myself as academically honest*. I’ve been very surprised, therefore, by my attraction to what I’ve been calling an “agnostic approach” to academic integrity research. It’s not that I think cheating isn’t wrong; it’s that I’ve come to think that starting with a priori categories of cheating vs. not cheating in research (NB, research, not policy or practice) doesn’t help us gain insight into how people understand what it is they’re doing, which is what I find most interesting about any inquiry in social science.
(*I was horrified to only recently realize that my use of Cliffs Notes to “finish” Bleak House my senior year of high school would absolutely be considered academic dishonesty by my own standards now.)
There are two concepts I like to think about to help me make sense of this. The terms are “literacy brokering,” which was coined by Mary Jane Curry and Theresa Lillis, and “contract cheating,” by Thomas Lancaster and Robert Clarke. They both describe ways that people get help with academic work. (And for whatever reason, both terms were first coined in 2006.)
For Curry and Lillis, “literacy brokering” is the work done by people who help professional academic writers ensure the things they write are ready for publication. This can include editors, reviewers, academic colleagues, and even friends and family members. For academics, this is quite common. We share texts with people who alter them in some way all the time, because we want them to be better before we turn them in for publication.
For Lancaster and Clarke, “contract cheating” is the “outsourcing of student work,” or when students pay someone else to complete all or part of an assignment. This is one of the most time-honored forms of academic misconduct, though it used to be much more common in the pre-ChatGPT age.
The funny thing about these terms is that the people who do literacy brokering and the people who do contract cheating operate in similar ways. They’re usually found through interpersonal or professional networks, or are people close to the writer; they sometimes advertise their services; they often charge a fee; and they work on something that they will not ultimately be considered the “authors” of – something that will be published or turned in without their name on it.
We can understand a few things when we consider these paradoxical similarities.
First, literacy brokering is a construct developed to describe how professional academic writers get help – people who are professors or graduate students, writing up their research, and seeking to have it published in academic journals. Contract cheating, on the other hand, is a concept meant to describe the unethical behavior of some college students. In a way, these two terms are apples and oranges: They describe work done in different contexts, and in some ways for different purposes, even if the work is often similar.
Second, we can thus sense a kind of double standard operating here: Literacy brokering is considered legitimate, but contract cheating is not. On the surface, this does not seem fair, but we need to consider that students are apprentices in their field of study, and that apprentices need to show that they can play by the rules – kind of like the chefs in the documentary Jiro Dreams of Sushi who have to make a simple egg sushi over and over again before they can move on to being trusted with more complicated dishes.
Again, I’m not arguing that cheating isn’t wrong, but I’ve talked to enough students over the course of my various research projects on academic integrity to see that many of them believe that what they’re doing is closer to “getting help” than cheating. I’ve talked to students about their use of Chegg, of electronic translation, of paid editors, of subject matter tutors (some of whom “help” much more than professors would like) and of GenAI. While some freely admit that they are violating rules and even their own ethical standards – I don’t want to sell this short, this absolutely does happen, a lot – many others described their use of these “tools” as something else: an aid to learning, a “convenient” alternative to interacting with professors or TAs, or a free editing service, to name a few.
I’m not here to say it’s good that students turn to these things; for the most part I do this research in order to help universities figure out how we can better support students’ learning so that they feel less temptation to seek illicit “help.” But I also feel like I’ve learned a ton about what students do and why by going into the research without using words like “cheating” or “academic misconduct.” I’ve found students are very willing to have conversations about the moral grey areas they find themselves in if I frame the projects around more neutral-sounding questions like “have you ever had to seek help with your academic work?” or “how do you use generative AI to complete writing assignments?” Many students are happy to talk about their moral dilemmas (or ambivalence) when it comes to using the tools marketed to them as “help” and seen by most of their professors as cheating.
So, while I have fairly strict policies about, for example, not using AI in the writing classes that I teach, I’m trying, in my research, to be open to whatever students have to tell me about how and why they seek assistance from outside sources to complete their academic work. Not because outsourcing homework is efficient, or AI writing is superior to human tutoring, or the other nonsense that some companies spout in order to encourage students to buy their products – but because I hope that by understanding what students believe about these things, we can help create conditions that will encourage them to make better choices.
Dr. Joel Heng Hartse is a Senior Lecturer in the Faculty of Education at Simon Fraser University in British Columbia, whose research on academic writing has appeared in publications including the Journal of Second Language Writing, the TESL Canada Journal, Composition Studies, Across the Disciplines, and the Journal of English for Research Publication Purposes. His books include Dancing about Architecture is a Reasonable Thing to Do (Cascade, 2022) and TL;DR: A Very Brief Guide to Reading & Writing in University (UBC Press, 2023). www.joelhenghartse.com
This piece contains elements of “Writing with Help: What is and Isn’t OK” by Joel Heng Hartse, which originally appeared in Discipline-Based Approaches to Academic Integrity, reused here under a Creative Commons license. https://creativecommons.org/licenses/by-nc-sa/4.0/
Below is a submission from Dr. Anthony Summers, Lecturer in Nursing FHEA, University of the Sunshine Coast, Australia. It is unedited, although the headline is not his; I wrote it.
Thank you, Dr. Summers, for sending this.
Real Feelings About Cheating Coverage
I work at a regional university in Australia, where I teach nursing students and deal with academic misconduct issues within the School. Because of my interest in all things related to academic misconduct, a colleague forwarded me a copy of this article published by ABC News in Australia: Murdoch University student fights accusation of illegal AI use in assignment - ABC News.
The title of the piece automatically offends me, as it mentions the illegal use of AI. As far as I am aware, there is no legislation making the use of AI illegal. No State or Territory in Australia, and no country in the world, has made the use of AI illegal. There are reports from China, where the country shut down genAI apps to stop cheating during national exams (China Temporarily Shuts Down AI Apps to Stop Cheating During National Exams), but this seems to be the closest any country has come to making AI illegal.
Then there is the caption under the photograph of the student at the top of the page, which contradicts the headline’s claim of illegality by stating that the student wants to prove he did not use prohibited AI to complete an assignment. Whether something is illegal to use and whether it is prohibited from use are two different things. Things can be prohibited yet still legal to use. Prohibiting something often means there is reason to prohibit its use in certain situations, in this case, the use of genAI to create work for an assignment. That is a reasonable prohibition from a university that is asking students to write an assignment to demonstrate their own knowledge of a topic, not a machine’s.
The second and biggest offence I take is not with the article itself, but with the student. The student is a nursing student, someone from my profession, a profession that prides itself on its integrity and honour in caring for patients. I am not going to delve into the case, as there seems to be a lot of back and forth as to what is and isn’t allowed, and the fact that the student is still able to pass their degree is something they should be happy about. But nursing needs to produce graduates who are honest and have demonstrated the required knowledge and understanding to be safe practitioners, something that can’t be assured if a student has used a genAI tool. And if a graduate nurse does not have this knowledge and makes a mistake dealing with a patient, as with many healthcare graduates, there is a risk of serious, potentially fatal, harm to the patient.
In my work, I am now seeing a lot of cases where students are found to have used a genAI tool to help with their work, whether sanctioned or not. Most of the students using a genAI tool are failing their courses and are not able to graduate. There are also many now who are facing expulsion from the university due to repeated use of genAI. This is no longer an empty threat; it is happening.
The expulsion of students who repeatedly fail to demonstrate learning in assessments due to using genAI is something I applaud and something more universities should be doing. Failing to maintain the integrity of assessments and allowing the blatant use of tools that help students cheat need to be called out, and students must face real consequences for their actions.
Back to the article, which does go on to state that the detection of AI is a fraught issue. Which is not wrong. AI detection is hard, and while the quality of AI checkers has improved and they provide reliable results, as discussed in many issues of The Cheat Sheet, many universities do not use them, or kowtow to the popular opinion that they are unreliable and shouldn’t be used.
Where I work, we do use Turnitin’s AI checker, often only as a guide and never as a single point of accusation. If an assessment is thought to contain genAI content, we look for secondary characteristics of genAI. I was lucky enough to present on these secondary characteristics at the Nurse Education Today International Conference in Singapore in November last year. The secondary characteristics to look for include the metadata and formatting of the document, the references, the text used, etc., all things that may indicate that the text was not written by a human. As an investigator, working closely with other colleagues, I am becoming more and more proficient at detecting these secondary characteristics, and our success in getting results supported by the university and not overturned on appeal has increased.
The article does cite a student who says students are taking pre-emptive action against accusations of cheating: they are doing their assignments in a Google document, so there are timestamps for whenever anything is written, deleted, or changed, giving them proof they have written everything themselves. This can only be a good thing. It will save investigators from wasting lots of time examining work to determine whether cheating has occurred. But also, students are learning to write and have evidence to prove they are writing the assessments themselves. When students write for themselves, they are learning; they are synthesising knowledge and thinking critically about what they are writing to ensure they are demonstrating their knowledge and understanding, which is what education is about.
In the final paragraphs of the article, the student states that his latest appeal to the university has failed and that he is now seeking legal advice, saying it is about clearing his name and making the point that the university’s processes are impersonal. Working at a university, I can understand how university processes are very impersonal, but this does not mean they are not fair; often they are very fair, if anything more biased toward the student than toward the university. If the university has followed its processes fairly, then in reality there is little the student can complain about.
The final quote from the student is a kicker: “…I really believe that they’re not 100 per cent informed of the impact of what they do, and how that can have [an affect] on people.” To me, this means the student has no idea what a university is about. Students attend university to learn and/or gain insight into a profession they wish to join. Graduates are not professionals when they graduate; they are people who have learnt and demonstrated the required knowledge to enter that profession. The university, through its assessment of knowledge, has agreed that this person is a safe and competent practitioner who can enter a profession and learn whilst working to become a fully-fledged member of it.
Universities have policies and procedures in place to ensure that their graduates are these safe and competent practitioners. They are also 100% aware of the consequences of not doing this; in the healthcare professions, failing to meet these standards could lead to the death of a patient, and there would be similar consequences in other professions. Universities do everything they can to ensure that quality standards are met, and those who do not meet these standards are supported and helped through the consequences of their actions. To state that a university is unaware of the impact its policies have on people is naive.
If you have news, thoughts, or other information about academic integrity to share, please send them over. Thank you.