July 11, 2023, 12:24 p.m.

Writing guidelines for the role of AI in your newsroom? Here are some, er, guidelines for that

What’s okay and what’s verboten when it comes to AI in the production of news? Here’s how 21 newsrooms in the U.S., Europe, and elsewhere have laid out their own policies and plans.

The emergence of generative AI has highlighted the need for newsroom guidelines for these technologies. In this post, we’ll delve into a sample of newsroom guidelines that have already been shared. In the first part, we’ll describe some of the more and less prominent themes and patterns we see. In the second part, and based on the analysis, we’ll suggest some guidelines for crafting guidelines as a news organization. Whether you’re a curious journalist or a newsroom leader, we hope that this “guideline for guidelines” document can function as an overview of potential guardrails for generative AI in newsrooms.

The selection of guidelines we analyzed covers a range of larger and some smaller organizations, mostly in Europe and the U.S., with a few from other parts of the world.[1] The current sample of 21 can be found here[2]; please be in touch via email if your organization has published guidelines on generative AI that we should add to our list. We’ll regularly update the list of guidelines, and our analysis here, as it grows.

The guidelines we analyzed vary in specificity and go by different names: “editor’s note,” “protocol,” “principles,” or even “deontological charter.” The tone of some guidelines is restrictive, banning specific uses outright. Other documents read more like governance frameworks, in which news organizations commit to specific responsibilities to make AI less risky. Below, we discuss some of these overarching patterns with examples from specific guidelines.

Observations from published guidelines

Oversight

Guidelines mention oversight and link it deliberately to the importance of meaningful human involvement and supervision in the use of AI, including through additional editing and factchecking of outputs before publication. News organizations also reject the idea of replacing journalists with machines and highlight the importance of the decision-making role of humans when using generative AI tools.

Aftonbladet and VG, two news outlets from Sweden and Norway respectively, are both owned by the media corporation Schibsted, and their guidelines show clear similarities. Aftonbladet’s guidelines state that “all material published has been reviewed by a human and falls under our publishing authority,” whereas VG’s state: “All use of generative AI must be manually approved before publication.”

Reuters describes oversight as “striving for meaningful human involvement, and to develop and deploy AI products and use data in a manner that treats people fairly.” Similarly, The Guardian says that the use of generative AI requires human oversight, stating in their guidelines that it needs to be linked to a “specific benefit and the explicit permission of a senior editor.” ANP, the Dutch press agency, makes similar statements on human oversight, saying they can use AI or similar systems to “support final editing, provided that a human is doing a final check afterwards.” They describe this process as Human>Machine>Human, in which agency and decision-making remain with a human.
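
To make ANP’s Human>Machine>Human pattern concrete, here is a minimal sketch of what such an approval gate could look like in code. This is our illustration of the pattern, not ANP’s actual tooling; the `Draft` structure and `publish` function are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    ai_assisted: bool
    approved_by: Optional[str] = None  # the human editor who signed off, if any

def publish(draft: Draft) -> None:
    # The gate in Human>Machine>Human: machine output may flow into a draft,
    # but a named human must approve it before it reaches the audience.
    if draft.ai_assisted and draft.approved_by is None:
        raise PermissionError("AI-assisted drafts require human sign-off before publication")
    print(f"Published (approved by {draft.approved_by or 'staff'}): {draft.text[:50]}...")

draft = Draft(text="AI-suggested summary of the council meeting, checked against the minutes.",
              ai_assisted=True)
draft.approved_by = "desk editor"  # the second human in Human>Machine>Human
publish(draft)
```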

CBC, Canada’s public broadcaster, states that “no CBC journalism will be published or broadcast without direct human involvement and oversight.” De Volkskrant, a quality newspaper in the Netherlands, states that their content is created by human editors, reporters, photographers, and illustrators, and that content generated by AI may not be used under the name of de Volkskrant without a human’s approval. These guidelines are very much in line with those of Heidi.News, a Swiss news organization, and Le Parisien, a French newspaper, which state in general terms that no content will be published without prior human supervision.

Nucleo, a digital-native news outlet in Brazil, states that they will never “publish AI content without human review in stories and notes on the site” nor “use AIs as the final editor or producer of a publication.” Ringier, a media group from Switzerland with news brands across 19 countries, says that “results generated by AI tools are always to be critically scrutinized and the information is to be verified, checked and supplemented using the company’s own judgment and expertise.” Similarly, DPA, the German Press Agency, says the final decision on the use of AI is made by a human, adding: “We respect human autonomy and the primacy of human decisions.”

The German Journalists’ Association (DJV) refers to AI tools as colleagues: “Under no circumstances should ‘colleague AI’ be allowed to replace editors. Even if the use of artificial intelligence changes tasks or eliminates them altogether, it does not make people in the newsroom superfluous.” The Financial Times says that their journalism “will continue to be reported, written and edited by humans who are the best in their fields.” STT, a Finnish press agency, links oversight to decision-making: “Don’t let the AI decide for you, but always assess the usefulness of the answers yourself.”

Insider, an American online media company, states that their journalists should always verify their facts. They add specifically: “Do not plagiarize! Always verify originality. Best company practices for doing so are likely to evolve, but for now, at a minimum, make sure you are running any passages received from ChatGPT through Google search and Grammarly’s plagiarism search.” They also call on editors to challenge reporters to verify facts. “It is necessary to step up your vigilance and take the time to ask how every fact in every story is known to your colleague.”
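
Insider’s mention of Google search and Grammarly points at what an originality check does mechanically: hunting for long verbatim overlaps with existing text. As a rough, hypothetical stand-in for those tools (not Insider’s actual workflow or either vendor’s method), a newsroom script could flag passages whose word n-grams overlap heavily with a known source:

```python
def ngrams(text: str, n: int = 5) -> set:
    # Split text into overlapping word n-grams; long shared n-grams between
    # two texts are a crude but common signal of copied passages.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, source: str, n: int = 5) -> float:
    cand = ngrams(candidate, n)
    return len(cand & ngrams(source, n)) / len(cand) if cand else 0.0

ai_passage = "the city council voted to approve the new budget after a long debate"
known_text = "on tuesday the city council voted to approve the new budget after a long debate over spending"

# The 0.3 threshold is arbitrary; anything above it gets kicked back to a human.
if overlap_ratio(ai_passage, known_text) > 0.3:
    print("High verbatim overlap: passage needs an originality review")
```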

Transparency

Mentions of transparency are often interconnected with the requirement that content should be labeled in a way that is understandable for audiences. However, it is often far from clear from the guidelines how these mentions of transparency will take shape in practice.

Aftonbladet and VG state that in the rare cases where they publish AI-generated material, whether text or images, it will be clearly labeled as generated by AI. VG’s guidelines specifically add: “AI-generated content must be labeled clearly, and in a way that is understandable.” In the same light, Reuters states that they will implement AI practices “to make the use of data and AI in our products and services understandable.”

The Guardian says that when they use generative AI, they will be open “with their readers when they do this.” CBC talks about “no surprises” for their audiences, stating that they will label AI-generated content: “We will not use or present AI-generated content to audiences without full disclosure.” Similar statements, to the effect that AI-generated content needs to be clearly labeled, can be found in the guidelines of the Dutch press agency ANP, Mediahuis, the Belgian Media Council (RVDJ), the German Press Agency (DPA), and the German Journalists’ Association (DJV).

Ringier, a media group in Switzerland, says that their general rule is that content generated by AI needs to be labeled. Interestingly, this requirement does not apply “in cases where an AI tool is used only as an aid,” suggesting a different approach to transparency when people use AI to augment their workflows. Similarly, STT says that the use of AI in news reporting must always be communicated to the public: “This applies both to situations where technology has been used to help produce the news and to news where the source material has been created by machine intelligence.”
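
None of the guidelines specify what a label should look like in practice, let alone in machine-readable form, but the idea is easy to sketch. A hypothetical disclosure record attached to a story might look like the following; the field names are our invention, not drawn from any of the guidelines:

```python
import json

# Illustrative only: a made-up disclosure schema for one story asset.
disclosure = {
    "ai_used": True,
    "scope": "image-generation",  # e.g., "text", "image-generation", "translation"
    "human_reviewed": True,
    "reader_label": "This illustration was generated with AI and reviewed by our editors.",
}
print(json.dumps(disclosure, indent=2))
```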

Banned vs. allowed uses

Banned and allowed uses of generative AI are often listed as ways to guide practice, though they are sometimes accompanied by exceptions. For these exceptions, conditions for transparency, responsibility, and accountability are made explicit in the guidelines. The use case of image generation also receives a fair amount of attention.

Wired states that they do not publish stories with text generated by AI, except when the AI-generated character of the text is the whole point of the story. They also say that they “will not publish text that is edited by AI” and that they will “not use AI-generated images instead of stock photography.” The rationale is that editing is inherently linked to determining what is relevant, original, and entertaining. Despite the restrictive tone of their guidelines, Wired states that they may try AI for tasks in the news reporting process, like suggesting headlines or generating ideas or texts for short social media posts.

Nucleo lists some allowed uses of generative AI, such as summarizing texts, suggesting “alternative posts on social networks,” and researching “subject and themes.” Aftonbladet’s and VG’s journalists, for example, may “use AI technology in their work to produce illustrations, graphics, models.” VG’s guidelines, however, add a ban on generating photorealistic images, as “AI-generated content must never undermine the credibility of journalistic photography.”

Insider states that their journalists may use generative AI for creating “story outlines, SEO headlines, copyediting, interview question generation, explaining concepts, and summarizing old coverage.” However, they state that their journalists must not use it to do the writing for them. On AI-generated images, they underline that they need “to have more conversations before they can decide if and how to use them.”

Hong Kong Free Press (HKFP), an English-language news outlet in Hong Kong, restricts all use of generative AI for news writing, image generation, or fact-checking, as “few A.I. tools include proper sourcing or attribution information.” Interestingly, HKFP does state that they might use internally approved AI “for grammar/spell checking, for rewording/summarizing existing text written by our team, and for assisting with translations, transcriptions, or for research.” Similarly, De Volkskrant states that their editors do not publish journalistic work that has been generated by artificial intelligence.

CBC says that they will not use AI to recreate the voice or likeness of any CBC journalist or personality “except to illustrate how the technology works.” Interestingly, they link this exception to two conditions: (1) “advance approval of our standards office” and (2) “approval of the individual being ‘recreated.’” Additionally, they will not use the technology in their investigative journalism in the form of facial recognition or voice matching, nor will they use it to generate voices for confidential sources whose identities they are trying to protect. They will continue practices that are understood by their audiences, such as voice modulation, image blurring, and silhouettes.

The Swiss news organization Heidi.News states that they will only “use synthetic images for illustrative purposes, not for information purposes, so as not to confuse real-world events.” They also add that they will not publish any synthetic image that could pass for a photograph, “except for educational purposes when the image in question is already public.”

Le Parisien says that they reserve the right to use AI for the generation of text and images for illustrative purposes, and they link this exceptional use of generative AI to a commitment to transparency at all times: “We will make sure that the origin is explicitly stated for the reader.” They also describe the use of AI as an enrichment: “News workers may use these tools as they would a search engine, but they must always return to their own sources to guarantee the origin of their information.” The Financial Times underlines that they won’t publish photorealistic images generated by AI, but they will explore the use of AI-augmented visuals (infographics, diagrams, photos), and when they do, they will make it clear to the reader.

STT links the banned uses of generative AI directly to its limitations: “STT does not use AI for data mining. The sources used by AI are often obscure, which makes its use in editorial work problematic.” They also state that the reliability of sources and the verification of information remain vital.

Accountability and responsibility

Accountability and responsibility are often mentioned in the guidelines in relation to the content published as well as values such as accuracy, fairness, originality, and transparency. The implementation of accountability measures for the use of data and AI products is also highlighted, as is using technically robust and secure AI systems to minimize risks.

Aftonbladet, a popular newspaper from Sweden, states that they are responsible “for everything they publish on the site, including material that is produced using, or based on, AI or other technology and falls under our publishing authority.” Reuters says that they will “implement and maintain appropriate accountability measures for our use of data and our AI products and services.” DPA underlines that they will only use AI “that is technically robust and secure to minimize the risks for error and misuse.” Le Parisien states that they want to protect themselves against “any risk of error or copyright infringement, but also to support the work of artists, photographers and illustrators.”

The German Journalists’ Association (DJV) underlines that news organizations are responsible for their content and that editorial departments should establish regulated acceptance and approval processes for journalistic content when AI is involved. The Financial Times states it is their conviction that “their mission to produce journalism of the highest standards is all the more important in this era of rapid technological innovation.” They add: “FT has a greater responsibility to be transparent, to report the facts and to pursue the truth.”

Insider says that their audience should be able to trust them to be accountable, and that their journalists are responsible for the accuracy, fairness, originality, and quality of every word in their stories.

Privacy and confidentiality

Privacy and confidentiality are often mentioned in terms of source protection and being careful about providing sensitive information to external platforms. The guidelines also highlight that journalists should be careful about using confidential or unpublished content as input for generative AI tools.

Aftonbladet states that they “protect source protection and do not feed external platforms such as ChatGPT with sensitive or proprietary information,” a sentiment reflected in slightly different terms by VG: “Journalists should initially only share material that has been approved for immediate publication on VG’s platforms with AI services.” Reuters underscores that they will “prioritize security and privacy in our use of data and throughout the design, development and deployment of our data and AI products and services.” CBC says that they will not feed confidential or unpublished content into generative AI tools for any reason. Mediahuis states that they should comply with privacy laws and, where required, obtain user consent before using personal information.

Ringier says that their employees are “not permitted to enter confidential information, trade secrets or personal data of journalistic sources, employees, customers or business partners or other natural persons into an AI tool.” For code development, Ringier’s guidelines state that code may only be entered into generative AI systems when they “neither constitute a trade secret nor does it belong to third parties.”

Interestingly, most of the guidelines we analyzed do not distinguish between using generative AI services hosted by a third party (e.g., OpenAI) and developing or running a generative AI system on computers operated by the organization itself, as might be the case with open-source models. It’s important to note, however, that the privacy and confidentiality risks stem more from the use of generative AI services hosted by other organizations than from the use of generative AI itself.
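
That distinction has a practical upshot: when a hosted service is in the loop, sensitive material can be stripped from prompts before anything leaves the newsroom, whereas a self-hosted model sidesteps the problem entirely. A minimal sketch of such a redaction pass, with patterns and placeholders of our own choosing:

```python
import re

# Hypothetical pre-flight redaction for prompts bound for an external service.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    # Replace anything matching a sensitive pattern with a labeled placeholder.
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Notes from source jane@example.org, reachable at +1 555 010 9999"))
```

Real newsroom tooling would need far more than two regexes (names, addresses, document identifiers), but the placement of the check, before the prompt leaves the building, is the point.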

Cautious experimentation

Mentions of cautious experimentation in the guidelines are often linked to being curious and critical while also underlining the potential for innovation. There is also emphasis on checking the veracity of outputs generated by AI, and on acknowledging the risks of misinformation and the “corruption of the truth.”

VG’s main rule is that their journalists must treat “the use of text, video, images, sound and other content created with the help of AI (generative AI) in the same way as other sources of information; with open curiosity and with caution.” CBC states that they will never rely solely on AI-generated research in their journalism: “We always use multiple sources to confirm facts.” ANP says that every member of the ANP editorial staff should look at AI and related systems “full of wonder, inquisitive, critical and open to developments,” adding that they continue to actively pursue innovations with an open mind. STT, the Finnish press agency, states that journalists should explore AI when they have the chance. STT was one of the first news publishers to start experimenting with generative AI in the form of language models, and states that experimentation remains vital to uncovering the possibilities and limitations of these models.

The Financial Times states in a letter from the editor that they will embrace AI to provide services for readers and clients. Interestingly, they also underline the need to establish a team in the newsroom that can experiment with AI responsibly: “every technology opens exciting new frontiers that must be responsibly explored. But as recent history has shown, the excitement must be accompanied by caution over the risk of misinformation and the corruption of the truth.” Lastly, the FT says that all newsroom experimentation with AI will be recorded in an internal register, including, to the extent possible, the use of third-party providers who may be using the tool.

Strategic intention of use

In several cases, guidelines documents also express the strategic goals of the organization in deploying generative AI. Motivations mentioned include a desire to enhance originality, quality, and even speed, sometimes alongside a commitment to not replacing journalists and to upholding core values like independence and impartiality.

The Financial Times says that AI has the potential to “increase their productivity and liberate reporters and editors’ time to focus on generating and reporting original content.” The Guardian states that they will seek to use generative AI tools editorially only where they contribute to the creation and distribution of original journalism, focusing on “situations where it can improve the quality of our work.” Mediahuis, a Belgian news corporation, states that AI should enhance their journalism: their goal is to “enhance the quality of our journalism for our audience.” Insider underlines that their journalists may, and even should, use AI to make their work better. “But it remains your work,” they state, “and you are responsible for it. You are responsible to our readers and viewers.” ANP, the Dutch press agency, says that they want to remain impartial and independent and should therefore be cautious with the use of generative AI.

DPA, the German Press Agency, states that they use AI for various purposes and that they are open to its increased use, adding that AI will help them make their work better and faster. De Volkskrant views artificial intelligence as a tool, “never as a system that can replace a journalist’s work,” and this intention not to replace journalists with machines is echoed by Heidi.News and Le Parisien. Heidi.News states that AI systems should be regarded as tools, and that, in striving to be objective, their journalists should “not view these tools as sources of information.”

Training

Training is rarely mentioned in the guidelines; when it is, training and classes are mostly linked to mitigating the risks of generative AI and to being accountable and transparent toward the audience.

Mediahuis states that training and qualification should be established for those responsible for AI decisions, linking this to the development of clear lines of accountability for AI development and use. The German Journalists’ Association (DJV) underscores that the use of artificial intelligence must become an integral part of journalists’ initial and continuing training, and calls on media companies to create appropriate training that also covers the misuse of AI. The Financial Times states that they will provide training for their journalists on the use of generative AI for story discovery, delivered in the form of masterclasses.

Bias

The Guardian explicitly deals with bias in generative AI and states that they will “guard against the dangers of bias embedded within generative tools and their underlying training sets.” Mediahuis states that their journalists should watch out for biases in AI systems and work to address them. Ringier states that their tools shall always be “fair, impartial and non-discriminatory.”

Adaptability of guidelines

Several of the guidelines reflected humility in the face of rapid change and called out the importance of adapting guidelines over time as the understanding of risks evolves.

Nucleo states that their policy on generative AI is “constantly evolving and may be updated from time to time.” The same sort of statement can be found in the guidelines of Aftonbladet and VG. Wired writes that developments in AI “may modify our perspective over time, and we’ll acknowledge any changes in this post”; according to an editor’s note at the bottom of the guidelines document, it has already been updated once. CBC tells their journalists that these guidelines are “preliminary and subject to change as the technology and industry best practices evolve.” De Volkskrant states that they follow the developments with critical interest and that, where they find it necessary, their protocol will be adapted. Similarly, Ringier states that their guidelines will be continuously reviewed in the coming months and adjusted if necessary.

Less-mentioned topics in the guidelines

Overall, some topics weren’t prominent in the guidelines. Legal compliance, personalization, data quality, user feedback, and integrating generative AI into the supply chain are rarely mentioned.

Supply chain

The AI supply chain refers to the network of suppliers behind AI systems, such as third-party models, collaborators, data providers, and annotation providers. The German Journalists’ Association (DJV) underlines the need for accuracy when it comes to the handling of data: “Data collection, preparation and processing must meet high qualitative standards. Incompleteness, distortions and other errors in the data material must be corrected without delay.” Interestingly, the DJV calls on media houses to build their own value-based databases and supports open data projects by public authorities and government institutions, stating that greater independence from commercial big-tech providers is desirable. Reuters is one of the few news organizations to mention governing collaborations, stating that they will “strive to partner with individuals and organizations who share similar ethical approaches to our own regarding the use of data, content, and AI.”

Legal compliance

DPA, the German Press Agency, says that they only use AI that complies with applicable law and legal requirements and that meets their ethical principles, such as human autonomy, fairness, and democratic values. The German Journalists’ Association (DJV) calls on legislators to make the labeling of AI-generated content mandatory by establishing such labeling obligations in law.

Personalization

The German Journalists’ Association (DJV) states that personalization by means of AI must always be carried out in a responsible and balanced manner. They add that users should have “the option of changing the selection criteria and/or completely deactivating personalized distribution.”

User feedback

Mediahuis states that they encourage readers to give feedback and let them review their data. This reflects a commitment to transparency, accountability, and a customer-centric mindset in the evolving landscape of media consumption.

Some guidelines for guidelines

Based on our observations above, here we offer a few suggestions to journalists and news organizations that may be thinking about developing their own guidelines. By reviewing existing ethical guidelines and codes, adopting a risk assessment approach, and drawing on diverse perspectives within the organization, we think news organizations can improve the quality and helpfulness of their generative AI guidelines.

Review existing ethical guidelines and codes.

Several of the themes we found in our analysis, including accountability, transparency, and privacy, reflect well-established values and ethical principles in journalism practice. Therefore, when crafting or updating guidelines in light of generative AI, we suggest reviewing existing codes of conduct and journalism principles as a basis for thinking through how and whether those principles can be adhered to in the face of changes spurred by generative AI. For instance, although the idea of independence does come up a few times in the guidelines we analyzed, it was not as prominent as one might expect, given how central it is in ethics codes such as the SPJ’s. How might the use of generative AI fully embody and reflect a normative commitment to journalistic independence?

We emphasize that while the emergence of generative AI may create new challenges for news organizations, this does not mean that the crafting of guidelines should start from scratch. We suggest news organizations go over such codes of conduct and contrast them, one by one, with the potential uses of generative AI and the risks those uses may pose. Systematically working through the core values and principles of journalism should suggest strategies and tactics for using generative AI in ways that are consistent with established norms.

Adopt a risk assessment approach.

News organizations may benefit from adopting a systematic risk assessment approach in developing guidelines. Such an approach, like the one developed by NIST, may for instance help map, measure, and manage risks, offering ways to efficiently develop policies for mitigating the risks that are identified. What are the risks of the different uses of generative AI, how can those risks be measured and tracked, and how can they be managed with respect to a news organization’s existing values and goals?
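
To make the map/measure/manage framing concrete, a newsroom could keep something as simple as a structured risk register. The sketch below is our own illustration, loosely inspired by the vocabulary of NIST’s AI Risk Management Framework rather than taken from it; the example entries are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    use_case: str    # map: where generative AI touches the workflow
    harm: str        # map: what could plausibly go wrong there
    metric: str      # measure: how the newsroom tracks the risk
    mitigation: str  # manage: the policy that keeps the risk acceptable

register = [
    Risk("headline suggestions", "factual drift from the story",
         "share of suggestions rejected in spot checks",
         "an editor approves every headline"),
    Risk("image generation", "photorealistic fakes mistaken for photos",
         "count of published images lacking an AI label",
         "ban photorealism; label everything else"),
]

for risk in register:
    print(f"{risk.use_case}: {risk.harm} -> {risk.mitigation}")
```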

For instance, traditional journalistic practices emphasize the importance of relying on credible sources and ensuring the accuracy of information. Generative AI, however, introduces uncertainty about the origin and reliability of content that might be acquired in the course of reporting. Risk mitigation could, for example, consider how to check the veracity of documents that are shared as source materials. Journalists could navigate this uncertainty by implementing rigorous verification processes and using corroboration and triangulation techniques to validate AI-generated content. By cross-referencing information from multiple sources, newsrooms could reduce the risk of disseminating unverified or false information.
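
That corroboration step can be stated as a simple editorial rule: no AI-derived claim is publishable until it is matched by some minimum number of independent sources. A sketch of that rule follows; the threshold of two is our own arbitrary choice, not a standard from any of the guidelines:

```python
def corroborated(sources: list[str], minimum: int = 2) -> bool:
    # Each distinct, independently verified source counts once toward the bar.
    return len(set(sources)) >= minimum

claim_sources = ["court filing", "on-record interview"]
print("publishable" if corroborated(claim_sources) else "hold: needs another independent source")
```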

Establish a diverse group to draft guidelines.

Although generative AI can also be used to produce computer code and can therefore impact all sorts of news products, we found few guidelines that distinguish between editorial and product uses (e.g., in software development). Moreover, other parts of news organizations, such as sales and marketing, could also benefit from using generative AI. Should they have separate guidelines? To cover the full ground, we suggest establishing a diverse set of stakeholders within your news organization to discuss and appropriately scope the guidelines. This will allow organizations to reflect on the risks that might arise in the newsroom, and it might help determine broader, company-wide risks that transcend the day-to-day workflows of journalists.

Hannes Cools is a postdoctoral researcher at the AI, Media, and Democracy Lab in Amsterdam. Nick Diakopoulos is a professor of computational journalism at Northwestern University. This article originally ran on Nick’s Generative AI in the Newsroom.

1. On the methodology: We collected the guidelines and translated the ones that were not in English using DeepL or Google Translate. After gathering all of them in a shared document, we added open codes. After reading all of the guidelines, we discussed a more overarching coding scheme, which resulted in a more robust overview of the codes that we used as the general structure for this post.
2. On the selection of guidelines: We decided not to include the guidelines on AI already published by Bayerischer Rundfunk (BR) in Germany and the BBC in the U.K., as we wanted to analyze the more recent guidelines that specifically deal with generative AI. Nevertheless, we acknowledge the pioneering work these news outlets have done in paving the way for the use of AI in newsrooms.