(NY25/26) Best and Worst in Academic Integrity 2025
Happy 2026
Welcome to 2026 and another edition of the famous Best and Worst in Academic Integrity for the Year - 2025.
The Cheat Sheet began 2025 with 4,271 subscribers. We ended the year with 5,033 total subscribers, about 70 of whom are paid supporters. I am honored and humbled.
We put out about 83 Issues of The Cheat Sheet in 2025, despite a summer break and some travel. At about three stories each, that's nearly 250 stories, analyses, and the occasional rant.
Most of all, thank you for reading, sharing, and subscribing. I wish the best for each of you in this coming year. Happy 2026.
Most Read
Here are the most read Issues of 2025:
(365) Students at University at Buffalo Ask School to Turn Off Turnitin
(367) Study: AI Detectors Are Accurate, Flag Work with AI “Polish”
(374) Two Very Important Articles
Biggest Stories of 2025
There were several big stories related to academic integrity, AI technology, cheating, and connected areas this past year. Here are a few.
It was rhetorical, but important. From Issue 346:
TEQSA, the higher education regulator in Australia, reportedly asked if, due to concerns about assessment security and AI use, college degrees obtained online should be required to indicate the mode of attainment.
How long until this isn’t rhetorical?
And, from Issue 358, Study.com, a company that aims to help adult learners capture and transfer college credit toward degree attainment, announced:
To help reduce the anxiety many learners feel, especially around testing, Study.com has removed proctored exams from its courses, allowing students to take open note and open book tests.
College credit, zero oversight. Unbelievable.
In Issue 409 we covered that LMS provider Anthology (Blackboard) said it cannot stop AI agents from completing assignments and assessments in its system. That feels very big.
But by far the Biggest Story of 2025 was in Issue 364 — a new law in Georgia making it illegal to sell cheating services, barring people, companies, and organizations from selling:
any work product to a student or examinee in a substantially completed form that could, under the circumstances, reasonably be considered as being, or forming a part of, an assessment task.
Big win for the Credential Integrity Action Alliance (CIAA). Big win for everyone.
Should Have Gotten More Attention
In preparing this Issue, I found several things that deserve to be shared again — they merited more attention than they got the first time. In no particular order, here are a few of those.
From Issue 333, the story that “Security cameras and alarms effective at deterring burglars, say burglars.” In that Issue, I wrote:
in education, people actually say out loud that they do not want to lock doors or activate alarms because neither one will stop a highly motivated thief. They also say they are worried that, should someone innocently try to open the door to your home and find it locked, or set off an alarm, that they’d be accused of being a burglar.
So, they unlock the doors and turn the alarms off.
Just a reminder, from Issue 108, from 2022, that professors showed they could cut down on cheating when they said they had ways of detecting it, even though they did not.
Back to this year, news from Issue 339, that regulators in Australia revoked a school’s credentials for cheating in online exams, should have been bigger news.
Troy Jollimore’s amazing piece in The Walrus, from March, in Issue 346 – “I Used to Teach Students. Now I Catch ChatGPT Cheats” is still a must-read that merits further attention.
Likewise, another must-read essay, from Dan Sarofian-Butin, "professor in the School of Education and Social Policy at Merrimack College," deserves more attention. That's in Issue 357.
Continuing with billboard-level writing from the past year, this story, an essay from Will Teague, an Assistant Professor of History at Angelo State University in Texas, is on the list. That's in Issue 410.
Bad Policies
One of the more consistent, albeit indefensible, patterns from 2025 was schools doing incredibly dumb things to attempt to deal with the tidal wave of AI-driven cheating. Here are a few of those, for the record.
In Issue 396, Rowan-Cabarrus Community College in North Carolina not only announced a public partnership with the cheating provider Grammarly, but it also shared that the school lets students use AI, then check their “work” for an AI percentage before turning it in — using Grammarly’s AI detector, of course. The school’s policy is to not accept work that’s more than 20% AI-created, which they tell students ahead of time.
That’s insane. And I am not the least bit sorry to say it.
Also from Issue 396, the University of Colorado, Boulder announced it had turned off its AI detection system, letting students cheat at their pleasure, subject only to whether their professors can spot AI-created text. Which they cannot, if they even care to try. Colorado is not the only one, but it’s beyond embarrassing.
Joining Colorado in its gleeful ignorance was The University of Cape Town. From Issue 391, the coverage included:
The University of Cape Town (UCT) has scrapped the use of artificial intelligence (AI) detection software as it shifts towards ethical AI use.
Speaking to Newzroom Afrika, director of UCT’s Centre for Innovation in Learning and Teaching, Sukaina Walji, explained that AI detection tools aren’t reliable enough to use confidently.
Ignorance is bliss, I hear. In this case, the bliss is for students. Less so for everyone else.
The parade of looking away continued with Western University (CAN), which also turned off its AI detection system. The school also reported that less than 3% of its academic integrity cases in 2025 involved use of AI. Those are related, as you cannot see with your eyes closed. That’s in Issue 332.
Issue 361 noted that the University of Waterloo (CAN) also announced it was not going to check for AI-created content from students, at least not with technology. They turned it off. The school said:
the decision was made to discontinue the AI detection tool in Turnitin. This function will no longer be available to University of Waterloo users as of September 2025.
At least Waterloo cited the cost of the service in its statement. Cheap and easy is, you know, cheap and easy. The school’s statement was also littered with bad, even nonexistent, research. Terrible.
Issue 372 covered policy at Russell Sage College in New York, which also chose ignorance over information by turning off its AI detection. From the news coverage:
“We do not allow the use of AI detectors for the academic integrity process. Those detectors have been shown to be less than 100% reliable, and we do not hold students accountable without proof,” said Russell Sage Dean of Undergraduate Studies Andrea Rehn.
At least the ignorance is consistent, I guess. When you refuse to know things, you don’t know things. Also, for fun, see if you can spot the correlation between the Russell Sage decision and this, also from the coverage:
At Russell Sage last year, only one student faced an “academic integrity” charge for misusing AI — down from six students in 2022-2023. That’s out of more than 2,000 students at the college.
Absolute malpractice. By choice.
In Issue 403, we noted that CUNY (City University of New York) has a unique program designed to get students prepared for medical school. When news broke that students were cheating in it — leaving an exam and logging in remotely later — CUNY’s consequence was for the offending students to take an ethics class. That’s it.
Then there’s Elon University (NC), which welcomed the former CEO of flailing cheating provider Chegg to a leadership post at the school — an Advisory Board at the School of Business. Aside from the obvious irony of the CEO of a company which has lost 98% of its value advising a Business School, asking an executive at Chegg to advise your school is bad, no matter how you look at it.
And finally, Albany State University (GA) welcomed Chegg to campus in 2025, in Issue 341. This caused me to look up the school’s academic integrity policy, which has at least three obvious grammar errors. Come on. At least pretend to take it seriously.
Quote of the Year
This is category one of my favorites.
There are several strong contenders, both positive and absurd.
Starting with the strong ones, the below is from Troy Jollimore’s amazing piece in Issue 346 – “I Used to Teach Students. Now I Catch ChatGPT Cheats.”
I once believed my students and I were in this together, engaged in a shared intellectual pursuit. That faith has been obliterated over the past few semesters. It’s not just the sheer volume of assignments that appear to be entirely generated by AI—papers that show no sign the student has listened to a lecture, done any of the assigned reading, or even briefly entertained a single concept from the course.
It’s other things too. It’s the students who say: I did write the paper, but I just used AI for a little editing and polishing. Or: I just used it to help with the research. (More and more, I have been forbidding outside research in these papers for this very reason. But this, of course, has its own costs. And I hate discouraging students who genuinely want to explore their topics further.) It’s the students who, after making such protestations, are unable to answer the most basic questions about the topic or about the paper they allegedly wrote. The students who beg you to reconsider the zero you gave them in order not to lose their scholarship. (I want to say to them: Shouldn’t that scholarship be going to ChatGPT?)
There’s also this, from Jeremy S. Adams, “an award-winning civics teacher and writer from Bakersfield, California,” in Issue 393:
Last spring, some of my graduating seniors felt obligated to take me aside before graduation, as if I were a naive child, and pronounce a dark truth in the era of widely available AI technology: You teachers can’t win. We will find a way to take the easy path. Every time.
At the graduation ceremony (which I did not attend), other students approached a colleague of mine, not in a spirit of guidance but of arrogance: We cheated the whole year, and you never caught us. They had escaped the noose of accountability. They weren’t relieved. Instead, they reveled, seeming to relish being scholastic frauds.
A few more quotes stood out this year, including this, from Issue 397:
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”
And this, from Issue 391:
“We risk an ever-increasing number of students who hold certificates that fraudulently certify their mastery of skills and content knowledge that some may have only barely attempted.” Andy Carolin, an associate professor at the University of Johannesburg’s English department
And one more from the “I really hope someone is paying attention department” — this, from Clay Shirky, a vice provost at NYU, writing in the New York Times, covered in Issue 392:
Our A.I. strategy had assumed that encouraging engaged uses of A.I. — telling students they could use software like ChatGPT to generate practice tests to quiz themselves, explore new ideas or solicit feedback — would persuade students to forgo the lazy uses. It did not.
It did not may be the three words that define 2025. Two more quotes from Shirky:
We cannot simply redesign our assignments to prevent lazy A.I. use. (We’ve tried.) If you ask students to use A.I. but critique what it spits out, they can generate the critique with A.I. If you give them A.I. tutors trained only to guide them, they can still use tools that just supply the answers.
A student who cuts and pastes a history paper is enrolled in a cutting and pasting class, not a history class. If the student’s preferred working methods reduce mental effort, we have to reintroduce that effort somehow.
I just love, “A student who cuts and pastes a history paper is enrolled in a cutting and pasting class, not a history class.”
Unfortunately, misinformation about, and outright manipulation in, academic integrity conversations are at all-time highs. For every person who does not care to understand the realities of things like AI detection, there's one who's willing to distort them, usually to shovel coal into the AI hype train.
That’s given us plenty of appalling quotes in 2025.
From Issue 338, in a report on a survey of college and university leaders about AI use, Lynn Pasquerella, President of the American Association of Colleges & Universities (AAC&U), which sponsored the survey, wrote:
The fact that 95% of the leaders surveyed are concerned about the impact of Generative AI on academic integrity, 92% worry about undermining deep learning, and 80% fear the exacerbation of existing inequities due to the digital divide points to the need for both democratizing opportunity by closing the skills gap and for building AI competencies.
Ninety-five percent of college leaders say they’re concerned about integrity and AI. To Pasquerella and AAC&U, this means we need more AI — democratizing opportunity and building AI competencies, whatever that means. Unbelievable.
In Issue 343 we covered an opinion piece by Bruce Fraser of Indian River State College and Sid Dobrin of the University of Florida. It’s riddled with errors, erroneous citations, and jaw-dropping quotes. Here is one of them:
Should our traditional definition of academic integrity — the idea of fair play in the context of learning — still hold sway in the era of GenAI?
They literally asked, in print, whether “the idea of fair play” should “still hold sway” in the AI era. This cannot conceivably be a serious point.
There’s also this strong contender, from Jeffrey C. Dixon, a Professor of Sociology at College of the Holy Cross (Massachusetts). In Issue 411, Dixon seems to think that students who use AI without permission in academic work are being punished for stealing their words from AI, not for seeking credit for work they did not do, mastery they do not have:
I find the practice of penalizing students for “stealing” words from large language models to write papers ethically difficult to reconcile with tech companies’ automated “scraping” of websites, such as Wikipedia and Reddit, without citation.
Also a strong nominee, from Issue 401:
Absolutely don’t do this
That’s Aravind Srinivas, CEO of AI company Perplexity, responding on social media to a video showing how anyone could use his company’s products — AI agents — to take a complete course on Coursera, literally without lifting a finger. As far as I know, that directive is the only action he or his company have taken to limit outright academic theft with their technology.
But these next three are my absolute favorites for 2025 Quote of the Year.
We start with John B. King Jr., “chancellor of the State University of New York system and the former education secretary,” who was quoted in the Wall St Journal, and covered by us in Issue 349. King, on a panel with an executive from OpenAI:
“There are probably lots of students, K-12 and higher ed, who used ChatGPT to do their homework last night without learning anything … That’s scary.”
That's scary. Gee. As I mentioned at the time, if only King were in a position to do something about it, like if he were a Chancellor or something.
Then we get this one, talking about different AI policies class to class, from Issue 409:
“It can almost create a culture of paranoia for students who are living in constant fear of being called out for possible AI use, when they’re trying their best not to,” said Lauren Zentz, who chairs the [University of Houston] English department and reviews academic integrity cases. “It’s just a little bit of a minefield.”
I feel bad for those students trying their best not to use AI, or trying their best not to be “called out” for AI use — hard to tell which she means. Are we going to do nothing to soothe their suffering? If only we had consistent policies, that would solve it. Students won’t use AI, or be in “a culture of paranoia,” or “living in constant fear,” if our policies were better, said no one ever.
Well, until now.
Here's a crazy idea: if students don't want to be called out for AI use, maybe don't use AI.
But the winner, hands down, for 2025 Quote of the Year is our friend Jenny Maxwell, the head of higher education at cheating provider Grammarly, which — let’s be honest — is a sales job, not an education job. Grammarly has the same relationship with education that fishhooks have with fish.
Anyway, from Issue 396, Maxwell gifts us with this quote, taking on the voice and view of a college student:
Oh my god, my instructor is not making this punitive, they’re not policing this. I’m so excited.
Yes indeed, college students probably are overjoyed to learn that a professor is not policing AI use, or integrity in general, I imagine. Grammarly is pretty excited by that too.
Worst Research
In reviewing our coverage in 2025, I was surprised to see we did not review any research that was bad. Not that there wasn't any; we just didn't review any, which I chalk up to lack of time.
I am sad about it. Genuinely. Exploring, exposing, and sharing research on integrity was a key reason I started The Cheat Sheet.
The best I can do on bad research is to revisit research from 2023 which we covered in Issue 261. It’s the research that claimed to show that cheating had not increased with the arrival of AI. At the time, I said it was a joke. And I’m still shocked that the New York Times and other outlets fell for it, swallowing it whole and uncritically.
This year, some serious researchers with MIT and a top-flight podcast got their hands on that 2023 paper and absolutely eviscerated it. We covered the 2025 dismantling of the 2023 absurdness in Issue 337.
In addition to calling out the actually crazy methodology and conclusions in the study, the researchers on the podcast say, for example:
The most common reporting of this study was that concerns about AI were overblown. But even if you take it as completely representative, I think there are findings within the study that do indicate there are reasons to be pretty seriously concerned about pretty high percentages of students – higher than in past years – that admit to cheating behaviors that are not in the gray area, not in the margins, but are actually like really serious concerns for the learning process.
No kidding.
And it’s nice to be on the record saying a study is complete bunk — in a complete counter to what major media said, by the way — and get some backup.
Best Research
The good news, in two ways, is that we have several examples of outstanding research this year — papers I was able to get to, for the most part.
I’ll start with a hat tip to the, I think, five studies out this year showing that AI detectors work and work well, thank you very much. See Issue 394 for one, with links to a few others.
Among the standouts is the new research we covered in Issue 335. It’s:
by Kenneth R. Deans Jr., Jami Jones, Jillian B. Harvey, and Daniel Brinton. Deans is with Health Sciences South Carolina, while the other authors are with Medical University of South Carolina, a public medical college.
The team registered as a “student” in an online Masters-level health administration course. The “student” was actually an AI bot. It not only earned high grades, but its output and participation in course activities went undetected.
From the paper:
Our study’s undetectable contributions of AI indicate that current educational and in-place detection methods need revising, which calls for proactive measures to address the emergent challenges posed by AI in educational contexts, such as developing more sophisticated AI-detection algorithms
Proactive measures to address emergent challenges. I’ll drink to that.
Another winner was covered in Issue 336, in which researchers at Georgia Tech watermarked answers generated by a required website to identify collusion cheating. Their test caught 28 students colluding, students who, in all likelihood, would not have been caught without the watermarking effort. The study also showed that 26 of the 28 attempted to hide their collusion.
The research team was Christopher Cui, Jui-Tse Hung, Pranav Sharma, Saurabh Chatterjee, and Thad Starner, all from Georgia Institute of Technology (Georgia Tech).
It’s a good study, showing that if you look for cheating, you will find it. But I have to share this, from the paper, again:
Without the watermarked question, it would have been virtually impossible to convict or even identify these students were collaborating. Notably, despite the watermark, this case was mistakenly dropped due to anti-plagiarism staff misunderstanding the nature of the watermarked answers. This result shows the need for presenting evidence in a manner that is easily understood and intuitive to an outside observer.
Epic.
Another winner in this category is the new study from Alexander K. Kofinas, Crystal Han-Huei Tsay, and David Pike. Kofinas and Pike are listed as affiliated with the Graduate School of Business, University of Bedfordshire, while Tsay is affiliated with the University of Greenwich, in London. We covered it in Issue 352.
It showed — yet again — that human educators and graders are not good at spotting AI text, even when they’re told it’s there. Which means that, if a school is not using AI detection technology and human review, they’re just missing most of it.
But the paper also makes this significant point:
authentic assessments are neither a shield for academic integrity nor an immediate solution to the GenAI challenge.
Yup.
Outstanding as well is the research, covered in Issue 351, from Phil M. Newton (no relation) and Michael J. Draper. Both are from Swansea University, in Wales. Their work showed that, despite mountains of uncontroverted recent evidence showing that online, unsecured, unsupervised assessments are overwhelmed with validity-destroying cheating, many, many schools are still using them.
There are two winners for Best Academic Integrity Research of 2025.
One is this strong research out of Spain finding that the link between using ChatGPT and academic fraud, i.e., plagiarism, is significant, though not causal. It’s in Issue 361.
The paper is by Héctor Galindo-Domínguez, Lucía Campo, Nahia Delgado, and Martín Sainz de la Maza. All four are from University of the Basque Country.
But beyond finding that cheaters really like to use ChatGPT and frequent ChatGPT users are also more likely to cheat, the paper’s main contribution is in exploring why students cheat.
Despite the widely held belief that most cheating comes from student pressures such as deadlines, life and school demands, and other things, that’s probably not true. This paper found that the two top reasons students cheat are amotivation and cheating culture — that students don’t care, and that they see, or believe, other students are cheating.
Schools have struggled with motivation forever. It’s a tough nut. But the other — a cheating culture — schools can control entirely. Many, as you likely know, simply don’t want to.
The other winner is this paper reviewing 100 academic integrity policies of higher education institutions in the United States. The authors are Courtney Cullen and Greer Murphy of Georgia Tech [now at North Carolina State] and the University of California, Santa Cruz, respectively. We covered it in Issue 363.
It’s a monumental work and explores how policy language can impact a school’s integrity culture. For example:
public perception is an important part of campus culture and any institution’s ability to uphold and protect academic integrity. Integrity/conduct administrators can get a bad rap as being out to punish for punishment’s sake–and for that reason, we argue that it behooves academic integrity administrators to be even more deliberate and careful with the policy language that we use.
Worst News Coverage
Unfortunately, bad news coverage of integrity was abundant in 2025. Most of it made predictable, repeated, but very significant, mistakes.
The frustrating and depressing thing about these errors is that they are so easy to not make. Asking someone would usually do it. These stories also get cited and repeated and shared. So, when news outlets get it wrong, it corrupts our conversation for a long time.
Among the most common errors this year was repeating research without having read it or, by extension, understood it. Lumping all AI detectors together and declaring them unreliable or error-prone was another frequent, big miss.
In addition, several news outlets decided that misconduct allegations were incorrect — as in wrong — based on nothing more than a student saying they did not cheat. Or taking a dropped case or lack of penalty as evidence that the underlying incident — the cheating — did not happen.
Our first example, on cue, is from Issue 334, from The Guardian. The work is lazy and, as mentioned, decides that students were incorrectly accused of cheating, and that AI detection was wrong, based on student denials and nothing more. No actual evidence whatsoever. It’s horrible.
Another example is from Issue 402, in which the Australian Broadcasting Corporation (ABC) blew it. And how. Its headline declared:
University wrongly accuses students of using artificial intelligence to cheat
Problem is, quite obviously, there is zero evidence that the accusations were wrong. No one in the story even says that. Not even the accused students. Not one time. Seriously. Go read the piece.
ABC sees that the school in question dropped the cases and decides on its own that this must mean the accusations were wrong. I don’t know what that is, but it ain’t journalism.
Then we get the story we covered in Issue 359, this one by K12 Dive. Here is the headline:
Teens are embracing AI — but largely not for cheating, survey finds
Kids — young teens — told a researcher in a focus group, in front of other kids they did not know, that they were not cheating with AI. That’s what happened. For K12 Dive that’s good enough to be fact.
But the absolute Worst News Coverage on integrity in 2025 was the disaster from CalMatters that we covered in Issue 374. As I mentioned in that Issue, the piece is so bad, it ought to be fully retracted.
Some context: In 2023, CalMatters ran a story on integrity and cheating that was so bad, I named it the worst piece of integrity-related journalism of the year (see Issue 192 and Issue NY 23/24). So, congratulations CalMatters — you’ve won the Worst News Coverage award twice. That’s impressive.
This year’s disaster is about AI detection, specifically about Turnitin. It’s clear very early in the article that the writer has no idea how AI detection works; she’s just mad at Turnitin. And I say that because she’s written a handful of anti-Turnitin articles over the years.
Here’s what the piece is about:
This investigation revealed institutions willing to renew Turnitin subscriptions year after year despite the cost, faulty technology and concerns about privacy and intellectual property raised by the company’s ever-expanding database of papers.
Schools are renewing their Turnitin contracts even though people have “concerns.” Stop the presses.
The piece quotes Jesse Stommel, who has opposed every piece of integrity and security technology, ever. Proctoring, plagiarism detection, you name it, he’s opposed to it. He’s been on panels with Course Hero, tells people cheating is not getting worse, and tells his students he refuses to check their work for cheating, preferring to trust them. He even said, at one point, that it was a:
fact that “when students cheat, it’s usually unintentional or non-malicious”
That is not a fact. That is made-up nonsense to justify not caring about integrity. He also says:
“Deterrence doesn’t actually work.”
Which goes against more than a century of psychological research.
Did CalMatters interview anyone to refute this crazy? Of course not.
CalMatters also quotes Sean Michael Morris, identifying him as a “long time educator,” but declining to mention that he was a VP at Course Hero. Morris is so opposed to assessment security and credential value that, in addition to working for one of the worst cheating companies, he sat on a panel with a guy who compared remote test proctoring with eugenics, selective breeding, sterilization, genocide, Nazism, white supremacy, sexism, ableism, cis/heteronormativity, and xenophobia. No, really.
CalMatters shares none of this. All that matters is that, according to the article, Morris:
tries to convince faculty members they don’t need Turnitin
Of course he does.
So many errors, so much unhinged bias, it’s an easy winner for the worst story of 2025. So bad, in fact, I cannot call it news.
Best News Coverage
This, I am calling a three-way tie for the Best Pieces of Journalism on Integrity in 2025. Please read them. Please re-read them.
I am heartened somewhat that while the worst pieces of writing this year came from the likes of K12 Dive and whatever CalMatters is, these three are from The New Yorker, New York Magazine, and the Wall St Journal — showing that good work can be done when you care enough to do it.
We start with the piece in The New Yorker magazine, covered in Issue 374. About it I wrote:
If you read one article on the state of higher education, the impact of generative AI, and cheating — read this one, in The New Yorker magazine.
My view has not changed. It’s the best view of what is really happening in our schools, with students. Top shelf work.
Another co-winner comes from Issue 364, an epic piece in New York Magazine. I called it:
a thing of beauty. I could have written it. I should have. But honestly, I could not have done much better.
Here’s the headline:
Everyone Is Cheating Their Way Through College
Boom. A win. And true.
Another winner comes from Issue 349, this major piece from the Wall St Journal.
The headline and subheader are:
There’s a Good Chance Your Kid Uses AI to Cheat
More students are hiding their secret weapon from parents and teachers, racking up top grades while AI chatbots do the work
These are excellent, all three. If I had actual awards to bestow, the writers of these stories would be getting them, along with their editors and publishers.
Person of the Year
Professor Stephen Larin at Queen's University (CAN) deserves a mention here. He helped lead a new AI use policy at his school and, as covered in Issue 337, said:
“The majority of the students in the class didn’t cheat, but the scale of the problem and the disrespect and foolishness that it demonstrated genuinely disgusted me,” Larin said.
I feel your disgust, professor. Taking action counts.
But the 2025 Person of the Year for Academic Integrity is actually a group — the Credential Integrity Action Alliance (CIAA), and the people in it, and behind it.
I cannot tell you how impressive their accomplishments are. This year, they got the law changed in Georgia, making it illegal to buy or sell cheating services. And, I suspect, Georgia will not be the only state to take such action. I hope they’re successful in getting some enforcement too.
But that’s not the point.
CIAA is a group of leaders who saw the problem of rampant, industrial-scale cheating, came together, raised money, found a target and actually won. They did a thing to stop cheating, to slow down the pirates who profit from it. It should be celebrated.
As I wrote in Issue 337 about their accomplishment, they need help. If it’s not too late for a resolution for 2026, I can think of a pretty good one.