Reviewing 100 Academic Integrity Policies
Plus, Chegg lays off more staff amid massive losses. And a little round-up of exam policies from New Zealand.
Issue 363
Subscribe below to join 4,657 (-1) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.
The Cheat Sheet is free, though patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.
Academic Review of Integrity Policies
Two of my favorite people — and two of the best voices in academic integrity — published a paper earlier this year reviewing 100 academic integrity policies of higher education institutions in the United States.
The authors are Courtney Cullen and Greer Murphy of Georgia Tech and the University of California, Santa Cruz, respectively.
I asked the authors to send me a summary of the paper that I could share here. Being the academics they are, they sent a full 1,234 words. I’m sharing excerpts and the full summary.
To start, the authors say:
The study reviewed 100 academic integrity policies from institutions across the United States with a framework that identifies core elements of “exemplary” academic integrity policy.
Let me tell you, I’ve tried to pull and look at these policies. It ain’t easy. That they found 100 and were able to look at them within a framework is impressive.
Here is their key summary paragraph:
Overall, Cullen and Murphy found that most policies do not present their expectations using accessible language. Rather, many policies featured reading complexity scores far above what an average undergraduate could easily comprehend. In addition, they identified inconsistencies in how many policies balanced — or didn’t balance — starting out aspiring to restorative approaches only then to default back to punitive descriptions of process. Terms that were reminiscent of criminal justice (guilty, innocent, accusation, allegation, etc.), which in some ways is historically understandable and in other ways unnecessary in our current moment of academic integrity in higher education, abounded.
Personally, I am less worried than the authors are about policies echoing criminal justice. I’d rather a policy err on the side of hard language, to evoke seriousness and perhaps serve as a deterrent. Although I do see their point. And the inaccessibility issue is inexcusable; if students cannot understand a policy, even the most aggressive language loses its impact.
This is also important, from their summary:
For policies that include any significant level of detail as to how suspected academic misconduct is handled, this information came mostly in the form of proscriptive (e.g., this is what you can’t do, this is how not to do work with integrity) vs. prescriptive (e.g., this is what you can do, this is how you can complete work with integrity) guidance.
They continue:
the public perception is an important part of campus culture and any institution’s ability to uphold and protect academic integrity. Integrity/conduct administrators can get a bad rap as being out to punish for punishment’s sake — and for that reason, we argue that it behooves academic integrity administrators to be even more deliberate and careful with the policy language that we use.
Absolutely right. To the extent that the language in a policy cuts against the ability of integrity staff to do their best work, it’s a problem.
The authors also:
believe policy language is a crucial specific piece of the general academic integrity perception puzzle, and they want everyone in higher education to pay closer attention to it.
I think that’s right as well.
In their review, Murphy and Cullen also say, relating to the punitive nature of policies:
some of the worst offenders are MSIs and Community Colleges, institutions that have historically championed access.
MSI stands for minority-serving institution.
Three other findings from their work are also important:
Unfortunately, most policies in our study overwhelmingly turn almost immediately to legalistic language; the majority of approach codes (267 of 469, or 57%) oriented that way. Legal, rule, and retributive language comprised 65% of the approach.
And:
Some of the likeliest institutions to use legalistic or retributive language were those with Honor Code or Council integrity systems.
And also:
Another aspect of policy approaches that concerned us was the lack of autonomy students seemed to have throughout. Authority for decision-making rests with the instructor of record or with staff and students who oversee policy administration. Reported students were rarely included in meaningful decision-making beyond providing a “plea” regarding their behavior.
I found this important as well:
Support is rarely mentioned in academic integrity policies. Out of 100 policies, there are only 27 mentions of support for academic integrity specifically.
It’s clear that even when it comes to the very basics of integrity policies and practices, US schools still have a tremendous amount of work to do. And, to me, it speaks to how much — that is to say, how little — school leaders care about the rigor of their programs and the development lessons they’re actually teaching.
Finally, I’m sharing this, the authors’ conclusions and recommendations, unedited:
Our analysis reveals that policies across the United States share several common characteristics. First, almost all have a preamble or introduction espousing the importance of integrity and a commitment to education. Second, almost all revert to legalistic and retributive verbiage in describing their process.
Which brings us to our first recommendation: follow through on aspirational language and the promise of policy orientation.
Based on the current analysis, our second recommendation concerns the role of lawyers. Legal experts should have a say and provide input. Their advice matters. In our view, that does not give lawyers carte blanche to influence institutions to write the kind of contract—academic integrity policy—only other lawyers could understand.
Another commonality of U.S. academic integrity policies, based on our analysis, is their tendency to age poorly. As evidenced by Flesch-Kincaid grade-levels, policies have not changed to reflect the language and expectations of our current era. With only 39% of institutions apparently updating policies since 2020, we can also infer these policies likely fail to address online academic misconduct that took place during the COVID-19 pandemic, let alone use of GenAI or other artificial intelligence tools. Still more concerning is that only 1% of policies we reviewed included language requiring that policy to be regularly reviewed and updated… Not surprisingly, our next recommendation is to dust off the integrity policy at every institution by updating regularly and setting a standard of review to ensure the policy meets institutional goals and values.
The final, perhaps least surprising recommendation we have is to prioritize academic integrity by making it someone’s job. By assigning academic integrity as the essential duty of a named individual or office, institutions can ensure the policy is applied fairly and consistently.
Make integrity someone’s job. Ideally, more than just one someone. Show me what you spend money on, and I’ll show you what you care about.
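A quick note from me, not the authors, on the Flesch-Kincaid scores their analysis leans on: the Flesch-Kincaid grade level estimates the U.S. school grade a reader needs to comfortably follow a text. The standard formula is 0.39 × (average words per sentence) + 11.8 × (average syllables per word) − 15.59, which means the long sentences and polysyllabic legalese common in these policies push scores well past a 12th-grade reading level. That is the inaccessibility problem the authors flag.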
Chegg Lays Off More Staff, Closes Offices
In the last Issue we noted that investors were still buying shares in cheating company Chegg. It was a curious thing, especially with the company’s quarterly report due out in days.
Well, the report landed yesterday. And — sure. Reuters has the story, and the headline:
Chegg to lay off 22% of workforce as AI tools shake up edtech industry
Yikes. From the story:
Chegg said on Monday it would lay off about 22% of its workforce, or 248 employees, to cut costs and streamline its operations as students increasingly turn to AI-powered tools such as ChatGPT over traditional edtech platforms.
By “turn to AI-powered tools,” everyone knows they mean “in order to get the answers and not do the work.” As I’ve said for some time — no point in paying Chegg for what OpenAI will give you for free.
This is not the first round of layoffs at Chegg — see Issue 218, Issue 303, or Issue 323.
More from Reuters:
As part of the restructuring announced on Monday, Chegg will also shut its U.S. and Canada offices by the end of the year and aim to reduce its marketing, product development efforts and general and administrative expenses.
The majority of the resulting charges of $34 million to $38 million are expected to be incurred in the second and third quarters.
More than one in five of the remaining Chegg workers are gone. Offices are closing. And if you’re Chegg, that may be the good news.
Here’s what has driven the firings and closings, from Reuters:
[Chegg] also reported first-quarter results on Monday, saying subscribers declined 31% in the period to 3.2 million. Revenue declined 30% to $121 million, as its subscription services revenue fell by nearly a third to $108 million.
My goodness, Zemgus Girgensons.
He plays hockey.
Speaking of pucks, a 31% drop in subscribers in 90 days is pretty pucking shocking. And with Chegg cutting marketing and product development, the fat lady is clearing her throat.
Chegg still has value. And someone will come in and scoop it up, pick it apart. But when it finally falls, I won’t be sorry at all.
Hope you shorted.
New Zealand and Exams
An outlet called The Spinoff has a brief article about changes to exam policies in the age of AI — this age of AI-assisted cheating.
The piece jumps off from the changes at Victoria University of Wellington, where some exams at the Law School are reverting to in-person, hand-written formats. We touched on this in Issue 362.
There is not much new. But this, I thought, was worth sharing:
While Victoria pushes back against AI in exams, other universities are taking different approaches. At Auckland University, all law exams remain digital, backed by lockdown browsers (which deny access to unauthorised websites) and in-person invigilators. At Waikato it’s the opposite: all law exams are handwritten, with exceptions for students with accessibility needs. It’s a mixed picture when it comes to other disciplines, too. An Auckland University spokesperson tells RNZ that 61% of all exams were digital in 2024. At Victoria University, it’s only 30%.
Finally, I’ll share these two paragraphs:
While universities are going old-school to beat AI cheating in exams, many are relying on tech solutions to catch students who use AI to write essays at home. Massey University has leaned heavily on Turnitin’s AI-detection feature, but its rollout has been mired in controversy. Last April the Sunday Star-Times reported that at least 20 Massey students had “claimed amnesty” in exchange for confessing to using ChatGPT to cheat. Provost Giselle Byrnes said the university uses Turnitin’s AI-detection “as a last resort” but a number of students said they had been accused of cheating after receiving false positives, the Manawatu Standard reported.
A recent Reddit post from another New Zealand student detailed their struggle with faulty detection tools. Under the headline “AI has ruined my university experience”, the user wrote that they “truly put a lot of effort into my work [but] recently all of my assignments have been coming back as AI-generated … I’m sick of looking like a cheater, and I know none of my tutors believe me when I say I don’t use ai.”
It’s odd to see a swipe at detection systems — in this case, Turnitin — followed immediately by news about students confessing to using ChatGPT to cheat.
And, as predictable as the sunrise, we get the whole “I was not cheating, the AI system is wrong” thing. Two things to say about this.
One, sourcing a Reddit post is terrible.
Two, if that post is true at all, I’d wager my own money that a student whose assignments all come back as AI-generated is using AI on those assignments. It’s possible they don’t realize it, that they’re using Grammarly or Quillbot and thinking those are just tools for editing or whatever. But if AI detectors are flagging your work on the regular, you are probably using AI on the regular.
Institutional or Corporate Subscriptions
Did you know that your institution or company can support The Cheat Sheet with an institutional subscription? It’s $250 a year, whereas an individual, personal subscription is about $80 a year.
The good news is that by being a corporate or institutional subscriber you get absolutely nothing. There is no difference between being a paid subscriber — at any level — and reading The Cheat Sheet for free. But you do get to feel good about it. I hope.
So, if your company, or school, or organization reads, shares, or discusses The Cheat Sheet, you may wish to consider busting out the corporate AmEx. Or not. You know — as you like.