Last year, Rachel Antell was working on her latest documentary. An archival producer whose credits include the Academy Award-nominated Crip Camp and the CNN Film Chowchilla, Antell was tasked with sourcing archival photos, this time for a sweeping historical survey. It was just months after the release of ChatGPT, and Antell joked with her producing partner about the day generative AI would do their jobs.
“Pretty much the next day, the directors came to us with images that they had created using generative AI that looked just like archival images,” Antell told me. The generated images portrayed an era with sparse photographic archives — some even depicted times before the popularization of the camera.
“So you’re going to be interspersing this with actual archival [images]? What’s the plan for transparency?” she remembers asking the directors. “They were really like: there is none, this is the Wild West.”
It was a wake-up call for Antell — generative AI was already entering the documentary world. Soon it could be entangled with the archives she relies on for her work.
Within months, Antell, her producing partner Jennifer Petrucelli, and documentary producer Stephanie Jenkins founded the Archival Producers Alliance (APA) to spotlight the potential harms of these tools in documentary filmmaking. By the fall, they’d recruited over 100 other documentarians and published an open letter in The Hollywood Reporter demanding more guardrails in the industry. The letter alleged their members had been pressured to use AI-generated newspaper clippings and fake “historical” images to save time and money for productions.
Today, after nearly a year of research, workshops, and an endorsement campaign, the APA launched its new generative AI guidelines at the Camden International Film Festival. The document lays out ethical considerations for archival producers who use generative AI, but also for other filmmakers, studios, broadcasters, and streamers. It also digs into several specific recommendations for filmmakers, like content labels and asset tracking.
“We’ve chosen to make it quite granular and talk about things like cue sheets, while everyone is saying generative AI is going to burn everything down, and none of us will have jobs,” said Jenkins, who has worked as an archival producer for PBS docuseries and The New York Times’ Op-Docs. “We’re allergic to that sort of sensationalism.”
Rather than resisting generative technologies outright, the APA is pushing for more education, intention, and conversation about the ethics of AI usage. “We are in the anti-misinformation business. We want to preserve an art form,” Jenkins said. “We’re not trying to do that by taking anyone down, but by giving people tools in a very confusing world.”
At launch, dozens of prominent documentary film organizations have already endorsed the guidelines, including the Documentary Producers Alliance (DPA), the Directors Guild of Canada, and the International Documentary Association (IDA). Over 50 individual filmmakers have also co-signed the guidelines, including Ken Burns, Rory Kennedy, and Michael Moore. The APA also announced it’s received grant funding from the Jonathan Logan Family Foundation to continue its work over the next year.
“The cornerstone of the guidelines is transparency. Audiences should understand what they are seeing and hearing — whether it’s authentic media or AI generated,” said Petrucelli, co-director of the APA.
Over the past year, print and digital journalism outlets have faced a wave of controversy and public debate over publishing AI-generated text, from the Sports Illustrated and CNET debacles to more isolated local news scandals. In turn, there’s been a demand for these publications to declare their AI editorial policies upfront, and many have since published usage guidelines.
For now, it seems a similar public reckoning has not come for the broadcast journalism world, at least not enough to pressure most major networks to make public commitments about AI usage or to make existing internal guardrails legible to their audience. If the experience of APA members is any indication, though, broadcast journalism is far from immune to the same ethical pitfalls.
The debate over using generative AI clearly echoes past controversies in documentary ethics. Reenactment, for one, has long been criticized by documentary purists. Most notably, its use of reenactment helped bar The Thin Blue Line from the 1988 Academy Awards race. (Earlier this summer I sat down with Errol Morris, the director of that film and a pioneer of reenactment, to discuss some of these parallels.)
While the APA puts reenactment and AI generation on a continuum, its guidelines draw several important distinctions.
“There’s humans filming reenactments, there’s humans doing the costumes, there’s humans placing the fruit bowl on the table for the still life. There’s authorship and there’s accountability to people,” said Jenkins.

Antell, meanwhile, points to the cost barriers for reenactment. “If you’re putting thousands of dollars for actors and cinematographers and all these things to create a reenactment, you’re going to put a lot of care and intentionality and research into doing it right. The problem we see with generative AI is that things are free and fast,” she said. “You can just shoot it off because why not? It didn’t cost you anything.”
The APA’s guidelines encourage filmmakers to be more deliberate and to seriously weigh the ethics of using AI-generated content, even when it’s cheap. There’s particular emphasis on using AI to imitate or show the likeness of a real person. Jenkins points out there’s a growing temptation to make a historical figure the narrator of a film or series, and speculates it may become a trend in docs given the swift rise in the quality of AI-generated audio.
“Thinking through the cultural sensitivities of that, the ethics of that, the relationship you might have with that estate, can really complicate things,” she said, noting that things like word emphasis and the emotion in an AI-generated narration are algorithmically decided and will never be accurate or authentic to the real person.
Alongside the guidelines, the APA also plans to release a toolkit for filmmakers, including cue sheet templates to track each instance where generative AI was used in a documentary.
These trackers include fields for the prompt put into a model, the date something was generated, the software version to track terms and conditions, any reference materials, and the copyright status of those reference materials. With laws and industry norms in flux, the APA says failing to track this information could make filmmakers vulnerable, or get projects killed or stuck in legal review limbo down the line.
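The APA hasn’t published a machine-readable schema for these cue sheets, but to make the fields concrete, here’s a minimal sketch of how a production might track a single entry in code. The class and field names are hypothetical; only the fields themselves come from the APA’s description above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GenAICueSheetEntry:
    """One tracked use of generative AI in a documentary (hypothetical schema)."""
    prompt: str                    # the exact prompt fed to the model
    generated_on: date             # when the asset was generated
    software: str                  # tool used to generate the asset
    software_version: str          # version, to pin down the terms and conditions in effect
    reference_materials: list[str] = field(default_factory=list)   # any materials supplied as references
    reference_copyright: list[str] = field(default_factory=list)   # copyright status of each reference

# Example entry (all values illustrative)
entry = GenAICueSheetEntry(
    prompt="1920s street scene, sepia tone, archival photo style",
    generated_on=date(2024, 9, 12),
    software="ExampleImageModel",  # hypothetical tool name
    software_version="2.1",
    reference_materials=["licensed archival photo, ref_042.jpg"],
    reference_copyright=["licensed from archive, non-exclusive"],
)
```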
“We have an archival cue sheet, we have a music cue sheet. Those are very much in people’s consciousness. All of a sudden, we have these gen AI elements, and we’re acting as though they have no provenance,” said Antell. “We’re saying they do and we need to understand what they are as best we can.”
To date, the APA has sent its guidelines to several major streamers in the U.S., including Netflix, Amazon, Apple, National Geographic, and Hulu. Over the next year the group plans to use its newfound funding to host workshops across the film festival circuit, and continue meeting with studios and industry groups.
Overall, the APA co-directors said the past year has made them keenly aware of how “brittle” the contract can be between documentary filmmakers and audiences. “You can lose their trust really fast, and what does that do for the rest of the documentary field when you do,” said Jenkins.
Beyond the worlds of independent documentary film and Hollywood, however, there’s an open question for the group about how news broadcasters might fit into these conversations. Are there lessons in these guidelines that extend to the work of a broadcast news program? Or can they be folded into the editorial standards of TV networks, particularly those that air shortform and longform documentaries?
National news broadcasters haven’t been the primary focus of the APA to date, but among networks the group says its most fruitful conversations so far have been with PBS.
That should be no surprise. PBS, by several measures, has some of the most comprehensive and transparent standards for using AI-generated content on TV in the U.S. right now. In a memo released by PBS Standards back in September 2023, the broadcaster adopted many of the same recommendations now being spotlighted by the APA.
One standout is an explicit call to mark all AI-generated content. “PBS strives to empower the audience to evaluate the credibility of content and determine for themselves whether it is trustworthy,” reads the memo. PBS names specific types of disclosures in its toolkit, including lower-third labeling, top-of-show or closing credit disclaimers, and supplemental online material to explain how generative AI tools were used.
On the question of “re-creations or simulations of actual events,” the memo draws on an existing 12-page standards document, which discourages over-dramatization and requires re-creations to be clearly identified to PBS audiences.

While PBS itself has not endorsed the APA guidelines, the team behind “POV” has signed on. The longest-running television showcase for independent documentaries in the U.S., “POV” has aired on PBS since 1988. Ken Burns, one of the broadcaster’s most prolific documentarians, also added his name to the list. His docuseries, including “The Civil War,” “Jazz,” and “The National Parks: America’s Best Idea,” each rely heavily on vetted archival documents, images, and audio.
While PBS is largely upfront about its policy, the same can’t be said for most network news broadcasters right now. I was hard-pressed to find the same level of granularity and transparency at any other network.
The APA guidelines were endorsed by NBC Universal Academy, a journalism education arm of the media company that has been regularly publishing stories on AI ethics, but not by NBC News or its competitors. I was not able to find any publicly listed AI editorial policy for NBC News, CBS News, ABC News, or Fox News. NBC News did not answer questions about its current editorial policy in time for publication.
In a statement to Nieman Lab, Claudia Milne, senior vice president for standards and practices at CBS News and Stations, offered some clarity on the network’s policy. “CBS News has only used AI content in the context of stories about AI. We always label content that is AI-generated,” she said. Milne added that CBS News, which includes anchored programs like “CBS Evening News” as well as newsmagazine shows like “60 Minutes” and “48 Hours,” does not currently allow the use of AI to recreate historical events or replace archival footage.
As the APA notes, some documentary features have used deepfake technology — mainly AI-generated face swaps — in a thoughtful and ethical way. Examples include masking the identity of victims of revenge porn (Another Body) or of endangered LGBTQ+ activists (Welcome to Chechnya). There may be a similar use case on network news for investigative reports that rely on anonymized sourcing. But currently CBS News does not allow the use of AI to disguise or conceal a location or sources, according to Milne. “There are plenty of other editing tools that we would use to do so. And if we disguise a background or change it we would be transparent with the audience about having done so and why,” she said.
CNN, meanwhile, has historically shrouded its editorial standards in secrecy, in part to shield itself from libel litigation. When it comes to AI, however, it has published an “AI principles” section on CNN Worldwide’s About page. The principles gesture toward “transparency” and “human oversight and guardrails,” but stop short of the PBS memo’s recommendations or CBS News’ more hardline policies.
A spokesperson for CNN International and CNN Films, which has distributed dozens of feature documentaries, including the Academy Award-winning Navalny, did not respond to questions about how these principles touch down in documentary productions or CNN’s newsroom.
Looking ahead, the APA says it hopes to bridge the divide that at times separates the documentary and journalism worlds, with broadcast news being one prong of that campaign.
“We have a unique perspective on why it’s important to communicate with each other around emerging technologies,” said Jenkins, noting that the APA has taken meetings with organizations like the Shorenstein Center at Harvard Kennedy School and the Poynter Institute, which have been doing their own work on building guidelines for ethical generative AI adoption. “We’re building these things in parallel, but there doesn’t seem to be much discussion. A lot of [our work] is networking and making each other aware of our collective efforts.”