New Survey: "Preventing Student Cheating" is Number One Faculty Concern
Plus, steal this AI testing guidance from the U.K.
Issue 221
“Preventing Student Cheating” Tops Faculty Survey of Challenges
The “Time for Class” study says it’s the largest and longest-running survey of issues related to digital teaching and learning. Released just a few days ago, the 2023 version has some pretty stunning findings related to academic integrity in general and AI text-creation specifically.
The survey is by Tyton Partners and includes the following partners in the research: Anthology, Turnitin, Bill & Melinda Gates Foundation, Every Learner Everywhere, Lumina Foundation and Macmillan Learning. The sample is robust - 2,048 students, 1,748 instructors, and 306 administrators.
Headlining the findings, at least for me, is that last year’s survey of faculty ranked “preventing student cheating” as the 10th most significant “instructional challenge” for teachers. It trailed issues such as providing timely feedback, the cost of instructional materials, and providing enough practice for students. Just 15% of educators named cheating prevention as a challenge.
This year, preventing cheating has surged to become the top instructional challenge for educators. The. Top. Challenge.
Not only did it jump nine places, cheating is now cited as a concern by 43% of faculty - up 28 percentage points and nearly triple last year’s figure. For comparison, the top challenge in 2022, “providing timely feedback for students,” was named by 36%.
Tyton says the jump is due to the threat from generative AI. I think that’s right.
Based on other data from the survey, the 43% of faculty who said cheating prevention is a challenge are right. But even they - along with those who did not - are probably way underestimating the danger.
For faculty, the concern is not born of ignorance. Faculty who have used generative AI tools are more likely to say they plan to use AI for “detecting student usage of AI-generated content” - 37% of teachers who’ve used AI want help finding it in student work, versus 28% of non-users. In other words, those who know the technology best are the most likely to ask for help detecting it.
That’s not a good omen.
Also consider that the survey found:
51% of students will continue to use generative AI tools even if their instructors or institutions prohibit it. For the 27% of students that are currently using generative AI tools, that number jumps to 69%
That’s not a good sign either.
Perhaps even worse, the survey found that just 3% of institutions have “developed a formal policy regarding the use of AI tools.” Three.
I understand being inclusive and deliberate, but the train has left the station here. It has arrived at its destination; the passengers and cargo are long gone. It’s on its way back. Take your time.
Speaking of institutional gaps in addressing these problems, the new survey also shows that when students “are struggling with a concept” they are more likely to use illicit cheating services than approved, legitimate resources from their school. Specifically:
39% of first-year students turn to “study aid providers” such as Chegg, Course Hero and Quizlet - they are listed by name in the survey. Only 28% say they use “tutoring or support services at my university.”
For non-first-year students, 29% go to Chegg and the others. Just 15% leverage university resources.
Stop me if you’ve heard this already. That’s not good.
Allowing this level of imbalance between unauthorized, even banned, services that sell cheating and real school support is a failure. There’s just no better word for it.
Overall, these findings ought to be activating alarms all over administration and board offices. Teachers clearly see a problem - preventing cheating is now their most-identified challenge. Those who understand and use AI are the most likely to want help. A majority of students say that even if their teachers disallow AI, they will use it anyway; among students who already use AI, 69% say they will essentially ignore policies. And just 3% of institutions have a formal policy in place anyway.
I don’t know what more there is to say, or if anyone is listening.
Need an AI Policy? Steal This One.
The UK Department for Education issued a report on AI recently.
As these go, it’s fine. It reminds readers, for example, that AI text generators “can be factually inaccurate” and advises that:
Schools and colleges may wish to review homework policies, to consider the approach to homework and other forms of unsupervised study as necessary to account for the availability of generative AI.
They used quite a few words to say that schools should be on guard for cheating in “unsupervised” work.
I also like that the report says, of teachers and institutions:
the quality and content of the final document remains the professional responsibility of the person who produces it and the organisation they belong to.
Solid.
But in a section on “formal assessments” the report says:
The Joint Council for Qualifications have published guidance for teachers and exam centres on protecting the integrity of qualifications in the context of generative AI use.
And that guidance is gold.
Seriously, if you’re at an educational institution and you give or supervise exams, or you’re a professional testing body or accreditor - steal this policy. Cut. Paste. Approve.
First, some background - the Joint Council for Qualifications describes itself this way:
The Joint Council for Qualifications is a membership organisation comprising the eight largest providers of qualifications in the UK.
And I really do wish we had that in the States. But that’s not the point.
What the JCQ says in their “Guidance for Teachers & Assessors” is pitch perfect in my view.
I cannot share it all. But consider that it says, for example:
all work submitted for qualification assessments must be the students’ own
Students who misuse AI such that the work they submit for assessment is not their own will have committed malpractice, in accordance with JCQ regulations, and may attract severe sanctions
The bold is original.
It goes on:
Students must make sure that work submitted for assessment is demonstrably their own. If any sections of their work are reproduced directly from AI generated responses, those elements must be identified by the student and they must understand that this will not allow them to demonstrate that they have independently met the marking criteria and therefore will not be rewarded
Clear, direct. I love it. Note that it’s not a ban, it’s a requirement for citation and a policy that AI content cannot be marked as evidence of mastery or competency. You can use it, we just won’t count it.
I also love that it puts some of this burden on test-takers, that work be “demonstrably their own.” In this environment, I agree that this ought to be required.
But I double-love that, as the responsibility on students escalates, it really escalates for instructors or assessment supervisors:
Teachers and assessors must only accept work for assessment which they consider to be the students’ own [rule citations removed for clarity] and
Where teachers have doubts about the authenticity of student work submitted for assessment (for example, they suspect that parts of it have been generated by AI but this has not been acknowledged), they must investigate and take appropriate action
They must investigate and take appropriate action. Must.
It goes on:
While AI may become an established tool at the workplace in the future, for the purposes of demonstrating knowledge, understanding and skills for qualifications, it’s important for students’ progression that they do not rely on tools such as AI. Students should develop the knowledge, skills and understanding of the subjects they are studying
Seriously, who disagrees with this?
And:
Teachers and assessors must be assured that the work they accept for assessment and mark is authentically the student’s own work. They are required to confirm this during the assessment process.
Required to confirm. Required. They go on, speaking to teachers and test supervisors and accreditors:
Do not accept, without further investigation, work which staff suspect has been taken from AI tools without proper acknowledgement or is otherwise plagiarised – doing so encourages the spread of this practice and is likely to constitute staff malpractice which can attract sanctions.
I mean, I want to stand and cheer.
And:
The use of detection tools should form part of a holistic approach to considering the authenticity of students’ work; all available information should be considered when reviewing any malpractice concerns
Yes - don’t rely on “detection tools” alone. “All available information should be considered.” Yes. One thousand times, yes.
Use AI if you want, but cite it. No, it will not count. If you suspect misuse, investigate. Use detection tools, but don’t rely on them exclusively. Do not accept work that is fraudulent or inauthentic; doing so encourages cheating. It seems so simple. I do not understand why so many people want to make it so complicated.
But seriously. Steal this policy. Cite it. But steal it.