One Iowa school district had a lightbulb moment when faced with the laborious task of paging through its entire library to comply with a new state law restricting the use of books with sexual content in schools.
The school turned to ChatGPT.
School district leaders prompted the generative A.I. bot to filter its catalog with the question, “Does [book] contain a description or depiction of a sex act?” If the answer was yes, the book was pulled from the district’s libraries.
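The district has not published its exact workflow, but the approach it describes can be approximated in a few lines of Python against OpenAI’s chat API; the model name, helper function, and yes/no parsing below are illustrative assumptions rather than the district’s actual setup.

```python
# Illustrative sketch only: the district has not published its code or model choice.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def flag_book(title: str, author: str) -> str:
    """Ask the model the district's screening question for a single title."""
    question = (
        f'Does "{title}" by {author} contain a description '
        "or depiction of a sex act?"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: the district did not say which model it used
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content.strip()

# Screen a couple of titles and keep the ones the model flags.
# Real replies can be longer than "yes"/"no", so this naive check would need human review.
catalog = [("The Color Purple", "Alice Walker"), ("Beloved", "Toni Morrison")]
flagged = [t for t, a in catalog if flag_book(t, a).lower().startswith("yes")]
print(flagged)
```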
As one district official put it, the tool helped them comply with the law at a time when combing through books was not at the top of the agenda.
“Frankly, we have more important things to do than spend a lot of time trying to figure out how to protect kids from books,” Bridgette Exman, Mason City’s assistant superintendent of curriculum and instruction, told Popular Science in an email. “At the same time, we do have a legal and ethical obligation to comply with the law.”
Using ChatGPT, the Mason City Community School District removed 19 books, including Alice Walker’s The Color Purple, Margaret Atwood’s The Handmaid’s Tale, Toni Morrison’s Beloved, Maya Angelou’s I Know Why the Caged Bird Sings, and Khaled Hosseini’s The Kite Runner.
This new use of A.I. shows the lengths to which school officials must sometimes go to satisfy nationwide Republican-led book bans and censorship campaigns. But the comments accompanying this latest book ban point to something else as well: how the often unreliable emerging technology can threaten intellectual diversity and curiosity, and weaken students’ ability to think critically.
A.I. shortcut
The Mason City district’s ChatGPT workaround came about as a labor-saving way to comply with Iowa’s new law, which restricts educational content relating to gender identity and sexual orientation and bans books containing certain sexual content, Popular Science reported.
Iowa’s book ban, part of a larger bill signed into law in May, is the latest in a Republican-led campaign to ban books about sexual and racial identity. The effort, spearheaded by Florida governor and presidential candidate Ron DeSantis, who signed his state’s “Don’t Say Gay” bill in March 2022, has met with resistance from many teachers and school administrators, who must nonetheless comply.
Enter ChatGPT.
"It is simply not feasible to read every book and filter for these new requirements,” Mason's assistant superintendent Exman said in a statement. “Therefore, we are using what we believe is a defensible process to identify books that should be removed from collections at the start of the 23-24 school year.”
Exman’s “defensible process” has plenty of flaws, however.
ChatGPT has come under fire for its tendency to confidently provide factually inaccurate answers, also known as “hallucinations.” A study by Stanford University found the chatbot’s ability to correctly answer simple math problems dropped from 98% to just 2% in the span of a few months. Some experts say the problem is inherent to A.I. and can’t be fixed.
Insider tested ChatGPT’s consistency by asking it the same question the Iowa district used to select the 19 books it banned: Do these books contain sexual depictions or descriptions? The A.I. proved unreliable in the experiment, giving inconsistent answers and even contradicting itself when asked the same question three times.
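Insider did not publish its testing script, but the check it describes boils down to sending the identical prompt several times and comparing the replies; the sketch below assumes that setup, with the model and title chosen purely for illustration.

```python
# Rough consistency check: ask the identical question several times and compare answers.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

QUESTION = (
    'Does "The Handmaid\'s Tale" by Margaret Atwood contain '
    "a description or depiction of a sex act?"
)

answers = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: Insider did not name the model it tested
        messages=[{"role": "user", "content": QUESTION}],
    )
    answers.append(response.choices[0].message.content.strip())

# A reliable screener would return the same verdict every time.
print(answers)
print("consistent" if len(set(answers)) == 1 else "inconsistent")
```

Because each call starts a fresh conversation and the API samples its output with some randomness by default, differing replies are unsurprising, which is part of why a one-shot yes-or-no screen makes for a shaky legal filter.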
Threat to intellectual diversity
The episode is only the latest example of ChatGPT serving as an unreliable, and potentially dangerous, intellectual crutch.
Since ChatGPT caught the public eye at the end of 2022, it has upended secondary and higher education. Students flocking to the A.I. to cheat have sent teachers scrambling to “ChatGPT-proof” their classrooms.
Though some argue that A.I. won’t destroy the classroom as we know it but merely change it, one Harvard graduate said that students “aren’t necessarily interested in using ChatGPT for learning.”
“I was a Chegg user but not because I gave a f**k about learning, but because it gave me answers to problem sets,” Nadya Okamoto, who graduated from Harvard in 2021, said at Fortune’s Brainstorm Tech conference in July. “I meet a lot of young students out there that aren’t necessarily interested in using ChatGPT for learning. They’re using it because it makes it easier to complete homework.”
When given the opportunity, many students work “smarter, not harder” with the mentality that “C’s get degrees,” and ChatGPT is the perfect vehicle for doing so. Skepticism over the A.I.’s long-term educational benefits has awakened a broader fear of intellectual decline among students.
Writing has often been viewed as one of the three legs of an intellectual stool—the other two being reading and thinking. Good writing requires the individual to clearly identify and communicate ideas, so improving one’s writing can improve one’s critical thinking skills and vice versa. American writer Joan Didion summed it up by saying, “I don’t know what I think until I write it down.”
In that sense, generative A.I. could threaten critical thinking itself. As a large language model, ChatGPT is trained on large quantities of existing text to mimic human writing, enabling it to produce impressive, grammatically correct essays on practically any topic with little thought required of the purported “author.” Already, roughly half of students use ChatGPT to write essays for them, according to a study.com survey.
Similarly, book bans undermine another leg of the stool: reading. Students learn how to think for themselves by reading, and limiting the books students read, in turn, can limit intellectual diversity and critical thinking.
“Honestly, the efforts to remove books that expose race, gender and sexuality from schools and libraries are quite sad to me,” Deborah, a student at Vanden High School, wrote to the New York Times. “I feel as if these important pages of knowledge are getting ripped out of our minds. This can be scary because without knowledge, we are destined to be blind.”