Documentary producers release new ethical AI guidelines for film-makers

Adrian Horton

A scene from Welcome to Chechnya, a documentary that used AI to disguise persecuted LGBTQ+ subjects in Russia. Photograph: Courtesy of HBO

Over a year after the dual Hollywood strikes put a spotlight on the industry’s adoption of AI, film-makers have often found themselves at a crossroads – how to use generative AI ethically, if at all? Where to draw the line on synthetic material? Documentary film-makers, in particular, have faced mounting concerns over “fake archival” materials such as AI-generated voices, photos or video.

As Hollywood continues to adopt artificial intelligence in production, a group of documentary producers have published a groundbreaking set of ethical guidelines to help producers, film-makers, studios, broadcasters and streamers address questions over the use of the technology.

The Archival Producers Alliance (APA), a volunteer group of more than 300 documentary producers and researchers formed in response to concerns over the use of generative AI in nonfiction film, developed the guidelines over the course of a year, after publishing an open letter in the Hollywood Reporter demanding more guardrails for the industry. The guidelines, announced at the Camden Film Festival, are not intended to dismiss the possibilities of a technology that is already shaping all forms of visual storytelling, but “to reaffirm the journalistic values that the documentary community has long held”.

“In a world where it is becoming difficult to distinguish between a real photograph and a generated one, we believe it’s absolutely pivotal to understand the ways generative AI could impact nonfiction storytelling,” said Stephanie Jenkins, APA’s co-director, in a statement.

Dozens of prominent documentary film organizations endorsed the guidelines at launch, including the Documentary Producers Alliance (DPA) and the International Documentary Association (IDA), as well as over 50 individual film-makers such as Michael Moore, Ken Burns and Rory Kennedy.

“Documentary is a truth-seeking art practice, but the nature of truth has always been mutable,” Dominic Willsdon, executive director of the IDA, said. “GenAI will bring all sorts of new and profound mutations, some fruitful, some harmful.” APA’s guidelines “can help the documentary field navigate this first phase of wider AI adoption”.

Rather than rejecting the use of generative AI outright, the group encourages consideration grounded in four overarching principles: the value of primary sources, transparency, legal considerations and ethical considerations of creating human simulations.

Documentary film-makers, according to the guidelines, should think about how synthetic material could muddy the historical record; consider the algorithmic biases encoded in synthetic material; preserve the original form or medium of a source and alert the audience if something has been changed, using text or visual cues; and treat image generation with the same intentionality, care for accuracy and sensitivity as they would traditional recreation or re-enactment.

“While there are great creative possibilities for this technology, without consideration of its potential risks, artificial content entering documentaries could permanently erode the trust between film-maker and audience, and muddy the historical record,” said Rachel Antell, an APA co-director whose credits include the Oscar-nominated film Crip Camp. The guidelines follow a number of controversies about AI in documentary, such as a deepfake of Anthony Bourdain’s voice in Roadrunner and allegations of AI-generated “archival” photos in the Netflix documentary What Jennifer Did.

The guidelines stress transparency internally – with production teams, legal counsel, insurance companies, distributors, streamers and subjects – as well as with audiences. “The cornerstone of the guidelines is transparency. Audiences should understand what they are seeing and hearing – whether it’s authentic media or AI generated,” said the APA co-director Jennifer Petrucelli.

For further transparency, the APA suggests including GenAI tools, creators and companies in the credits, similar to how archival footage and music are recognized. And the guidelines specifically address the use of human simulations – commonly known as “deepfakes” – in nonfiction film, a hot-button topic given the technology’s use for misinformation online.

The group is “excited by the possibilities that emerging technologies will bring – especially for stories that have been overlooked, purposefully suppressed or not recorded in any fashion”. AI-generated human simulations, they noted, could help protect the identity of documentary subjects whose participation puts them at risk, as in David France’s 2020 film Welcome to Chechnya, which used AI to disguise persecuted LGBTQ+ people in Russia, or in Another Body, which used an AI veil to hide a victim of deepfake revenge porn.

“Far from being diminished by the challenges posed by GenAI, there is great potential to enhance documentaries of all kinds by responsibly harnessing this new technology,” the guidelines note. “That said, we reaffirm the value of human labor and discernment in the production process.”

The hope is that with the introduction and adoption of these standards, documentary film-making “will continue to be an engaging, reliable, and most of all, trusted form of audio-visual storytelling that records human history and expresses human experience.

“The possibilities of GenAI are limitless – but there are some burdens only film-makers can carry.”
