The vast majority of authors don't use artificial intelligence as part of their creative process — or at least won't admit to it.
Yet according to a recent poll from the writers' advocacy nonprofit The Authors Guild, 13% said they do use AI, for activities like brainstorming character ideas and creating outlines.
The technology is a vexed topic in the literary world. Many authors are concerned about the use of their copyrighted material in generative AI models. At the same time, some are actively using these technologies — even attempting to train AI models on their own works.
These experiments, though limited, are teaching their authors new things about creativity.
Chris Anderson, best known as the author of technology- and business-oriented non-fiction books like The Long Tail, has lately been trying his hand at fiction. He is working on his second novel, about drone warfare.
He says he wants to put generative AI technology to the test.
"I wanted to see whether in fact AI can do more than just help me organize my thoughts, but actually start injecting new thoughts," Anderson says.
Anderson says he fed parts of his first novel into an AI writing platform to help him write this new one. The system surprised him by moving his opening scene from a corporate meeting room to a karaoke bar.
"And I was like, you know? That could work!" Anderson says. "I ended up writing the scene myself. But the idea was the AI's."
Anderson says he didn't use a single actual word the AI platform generated. The sentences were grammatically correct, he says, but fell way short in terms of replicating his writing style. Although he admits to being disappointed, Anderson says ultimately he's OK with having to do some of the heavy lifting himself: "Maybe that's just the universe telling me that writing actually involves the act of writing."
Training an AI model to imitate style
It's very hard for off-the-shelf AI models like GPT and Claude to emulate contemporary literary authors' styles.
The authors NPR talked with say that's because these models are predominantly trained on content scraped from the Internet like news articles, Wikipedia entries and how-to manuals — standard, non-literary prose.
But some authors, like Sasha Stiles, say they have been able to make these systems suit their stylistic needs.
"There are moments where I do ask my machine collaborator to write something and then I use what's come out verbatim," Stiles says.
The poet and AI researcher says she wanted to make the off-the-shelf AI models she'd been experimenting with for years more responsive to her own poetic voice.
So she started customizing them by inputting her finished poems, drafts, and research notes.
"All with the intention to sort of mentor a bespoke poetic alter ego," Stiles says.
She has collaborated with this bespoke poetic alter ego on a variety of projects, including Technelegy (2021), a volume of poetry published by Black Spring Press; and "Repetae: Again, Again," a multimedia poem created last year for luxury fashion brand Gucci.
Stiles says working with her AI persona has led her to ask questions about whether what she's doing is in fact poetic, and where the line falls between the human and the machine.
"It's been really a provocative thing to be able to use these tools to create poetry," she says.
Potential issues come with these experiments
These types of experiments are also provocative in another way. Authors Guild CEO Mary Rasenberger says she's not opposed to authors training AI models on their own writing.
"If you're using AI to create derivative works of your own work, that is completely acceptable," Rasenberger says.
But building an AI system that responds fluently to user prompts requires vast amounts of training data. So the foundational AI models that underpin most of these experiments in literary style may well have been trained on copyrighted works.
Rasenberger pointed to the recent wave of lawsuits brought by authors alleging AI companies trained their models on unauthorized copies of articles and books.
"If the output does in fact contain other people's works, that creates real ethical concerns," she says. "Because that you should be getting permission for."
Circumventing ethical problems while being creative
Award-winning speculative fiction writer Ken Liu says he wanted to circumvent these ethical problems, while at the same time creating new aesthetic possibilities using AI.
So the former software engineer and lawyer attempted to train an AI model solely on his own output. He says he fed all of his short stories and novels into the system — and nothing else.
Liu says he knew this approach was doomed to fail.
That's because the entire life's work of any single writer simply doesn't contain enough words to produce a viable so-called large language model.
"I don't care how prolific you are," Liu says. "It's just not going to work."
Liu's AI system built only on his own writing produced predictable results.
"It barely generated any phrases, even," Liu says. "A lot of it was just gibberish."
Yet for Liu, that was the point. He put this gibberish to work in a short story. "50 Things Every AI Working With Humans Should Know," published in Uncanny Magazine in 2020, is a meditation on what it means to be human from the perspective of a machine.
"Dinoted concentration crusch the dead gods," is an example of one line in Liu's story generated by his custom-built AI model. "A man reached the torch for something darker perified it seemed the billboding," is another.
Liu continues to experiment with AI. He says the technology shows promise, but is still very limited. If anything, he says, his experiments have reaffirmed why human art matters.
"So what is the point of experimenting with AIs?" Liu says. "The point for me really is about pushing the boundaries of what is art."
Audio and digital stories edited by Meghan Collins Sullivan.