Erik Ofgang

How Often Are Students Submitting AI Papers? Frequently, Says New Data

(Image: A cartoon of a robot sitting at a desk writing against a blue backdrop.)

Turnitin released its AI detection tool in April 2023, and since then it has reviewed more than 200 million student papers, finding that 10.3% included at least 20% AI-generated content. In addition, 3% (more than 2 million papers or other written materials) consisted of at least 80% AI-generated content.

A separate Turnitin-sponsored survey of college faculty and students about AI use also found that between spring 2023 and fall 2023, the share of students who said they used AI at least once a month rose from 27% to 49% of respondents, an increase of 22 percentage points.

These findings mesh with other recent data points on AI use, including a recent survey of college students from Intelligent.com that found 37% of students use AI, and 29% of those students use it to generate entire papers.

Anecdotally, I’ve noticed a steady upward trend in the number of AI-generated papers I see in the introductory English courses I teach.

I recently spoke with Patti West-Smith, Turnitin’s Senior Director of Customer Engagement and Customer Experience and a former principal. She discussed what this AI cheating data from Turnitin means and what we as teachers can do to protect academic integrity and, more importantly, the student learning that occurs through writing.

Are 1 in 10 Students Really Using AI To Cheat?  

Not exactly. Though the Turnitin data found that roughly 1 in 10 submitted papers contained at least 20% AI-generated content, West-Smith isn’t particularly concerned about those papers because that level of AI writing might reflect legitimate use.

“Students who are struggling with language might be looking for a little help or have used it for research, and potentially didn't know that they should cite that depending on the instructor and the institution and their requirements,” she says. 

What About The 3% of Papers That Were More Than 80% AI-Generated? 

These 2 million-plus papers are more concerning. “That indicates that the AI is being substituted for the student's own thinking,” West-Smith says. 

This is a problem for several reasons. “You don't want a student to get credit for work that they didn't complete, and from an assessment perspective, that's a really big deal,” she says. 

But more important than academic integrity is how the student is shortchanging their own learning process, West-Smith says. “Writing is a tool for thinking. It's the way that the brain makes sense of information. And if you are outsourcing that to AI on a regular basis on a big scale, like 80% of the writing, then what that indicates to me is that as a student, you're completely disengaged from that learning process, you have essentially outsourced it to a contractor.” 

Has AI Cheating Replaced Other Forms of Cheating?  

I’ve written about how the prevalence of AI-generated papers in my classroom has cost me a lot of time. On the bright side, however, I’ve started seeing fewer instances of traditional plagiarism. Unfortunately, this appears to be a fluke.

“We theorized that potentially we would see this dramatic drop off of more classic instances of plagiarism,” West-Smith says. Turnitin data has not revealed that so far. “We are seeing just as much text similarity that comes in. I think one of the reasons for that is, in some cases, text similarity is not intentional plagiarism. You get a lot of skill deficit that leads to that. Students who don't know how to properly paraphrase. Students who don’t understand citation.” 

What Can I Do To Prevent AI Writing in My Class?  

This is one of the most pressing questions in education. Here are West-Smith’s suggestions:

  • Institutions should have clear guidance around AI use that is communicated to teachers and from them to students.
  • Institutions should also have clear guidance on whether AI detection tools are being used, which ones, and what educators can do with the information these tools provide. Because of false positives, educators should treat AI detection readings as just one data point in assessing whether a student used AI.
  • Educators should educate themselves about AI, learning the tools’ strengths, weaknesses, and so on.
  • Specific class policies around AI should be communicated, as many instructors have AI use cases they are okay with and ones they don’t allow. These policies sometimes vary from class to class.

Ultimately, West-Smith says the communication component is critical and all too easy to overlook. 

“A mistake we sometimes make as instructors is that we make these assumptions that students have the same sort of value systems that we do, and they will just implicitly know what is right or wrong from our perspective. And it's been my experience that that's almost always not true,” she says. “The moment that you assume students believe the same thing you do, you already are at a level of miscommunication.”
