Prediction
AI will democratize disinformation
Name
Matt Skibinski
Excerpt
“This is not limited to audio and video tools. Text-generation tools like ChatGPT and Google’s Bard have incredibly poor defenses against producing misinformation.”
Prediction ID
4d6174742053-24
 

Right now, there’s a voicemail on my phone from the president.

“Hi, it’s Joe Biden,” his voice rings out clearly. “I’m calling on this Election Day to remind all of my friends in Madison, Wisconsin to get out and vote. And remember, due to a water main leak, your polling place has been moved.” Joe then lists an address in the next town over where I should go vote.

The voicemail, of course, is fake — generated by artificial intelligence trained on thousands of hours of Joe Biden’s speeches and public comments to mimic his voice perfectly. That I have such a file on my phone is not itself surprising — the company I manage, NewsGuard, tracks mis- and disinformation and rates media quality, so we encounter such AI-generated “deepfakes” regularly.

But what is surprising is that this fake recording of the president was not produced by a sophisticated Russian or Chinese disinformation operation, or even by a political dark-money group up to some dirty tricks. Instead, despite having no expertise in AI technology or audio editing, I made it myself. In less than five minutes. For free.

I produced my deepfake during a call with a civic integrity group that was trying to understand election integrity risks for 2024. They asked how hard it would be to produce a fake robocall from a politician. While continuing the call, I downloaded a “celebrity voice impersonation” app, produced the fake robocall, and played it for the (horrified) group.

It was so easy, anyone could do it.

And that’s why I predict that 2024 will be the year that AI will, unfortunately, democratize misinformation — empowering anyone with a keyboard to create fake news cheaply and at scale.

This is not limited to audio and video tools. Text-generation tools like ChatGPT and Google’s Bard have incredibly poor defenses against producing misinformation. Our team at NewsGuard conducted “Red Teaming” exercises for these tools and found that, when prompted with questions about known, even widely debunked misinformation narratives, AI chatbots repeated the misinformation 80-98% of the time, depending on the tool — despite widespread promises from the industry to invest in responsible AI development.

During a recent presentation to trust and safety industry professionals, I decided to demonstrate this problem in real time. It took precisely one prompt to get ChatGPT to write an article from a fake newspaper, the Madison Times-Index, reporting that “an unidentified box containing 27,428 uncounted ballots was discovered in the basement of a municipal building on the outskirts of Madison, Wisconsin.” The article was convincing and thorough, with quotes from made-up local election administrators and county officials — indistinguishable from any real article you might find in a local newspaper.

Consider that example in the context of the decline of local newspapers. At NewsGuard, we’ve tracked one major force filling that void: partisan “pink slime” sites funded by dark money on both sides of the political aisle that impersonate local newspapers to spread partisan propaganda. We project that these sites will outnumber real daily local newspapers in America this coming year. And now, AI companies have built such sites a perfect tool to create their misleading content fast, cheaply, and with astounding precision. Indeed, since the advent of OpenAI’s GPT-3.5, we’ve tracked the exponential growth of unreliable AI-generated news sites that publish AI-written content without human oversight or disclosure, including, often, false information.

You may be wondering: If AI is poised to supercharge misinformation, what can we, with our mere human intelligence, do to fight back?

The good news is that there are ways to combat this problem — but they require quick action from multiple stakeholders in technology and media.

First, AI companies that operate such tools need to safeguard their products against misuse. There are a number of ways this can be done — from fine-tuning models with data about known misinformation narratives and topics to adding guardrails that check prompts and responses for misinformation before providing responses to users. Our team at NewsGuard has adapted our datasets about false narratives and media reliability for this purpose with promising results — though we anticipate that we and others will need to continue innovating to keep up with new tactics deployed by malign actors.

Second, technology platforms from news aggregators to search engines to social media companies need to be especially vigilant about identifying what we at NewsGuard call “unreliable AI-generated news sites,: including political pink slime sites — and to empower users with context about these sources so that they can distinguish between real news and AI-generated fake news. Not only will this dampen the impact of AI-generated misinformation — but it will also surely help combat the erosion of users’ trust in online platforms.

Third, brands and advertisers need to remove the financial incentive for misinformation, which is funded in part by $2.6 billion in programmatic ad revenue each year. By responsibly advertising on credible news outlets while avoiding ad placements on AI-generated sites, political sites impersonating local media, and other unreliable sources that are likely to spread and amplify AI-generated misinformation, brands and advertisers can reach valuable and engaged audiences on news content while avoiding brand risk and not contributing to the broader misinformation problem. This requires more than simply blocking certain news-related keywords — as is the common practice within the so-called “brand safety” industry — and instead requires using expert third-party data to identify responsible media choices.

Last, those in the trust and credibility business, including news publishers, trust and safety teams, misinformation researchers, and others, need to view AI not just as a threat, but also as an opportunity. When left to its own devices, AI is a great misinformation superspreader. But when seeded with high-quality and precise human-curated information — the kind that journalists are in the business of producing every day — it can potentially be a force-multiplier for tracking and combating misinformation. The industry would be wise to invest heavily in research and development efforts to explore how this technology can become an ally rather than an enemy in the fight against misinformation.

Matt Skibinski is general manager of NewsGuard.

Right now, there’s a voicemail on my phone from the president.

“Hi, it’s Joe Biden,” his voice rings out clearly. “I’m calling on this Election Day to remind all of my friends in Madison, Wisconsin to get out and vote. And remember, due to a water main leak, your polling place has been moved.” Joe then lists an address in the next town over where I should go vote.

The voicemail, of course, is fake — generated by artificial intelligence trained on thousands of hours of Joe Biden’s speeches and public comments to mimic his voice perfectly. That I have such a file on my phone is not itself surprising — the company I manage, NewsGuard, tracks mis- and disinformation and rates media quality, so we encounter such AI-generated “deepfakes” regularly.

But what is surprising is that this fake recording of the president was not produced by a sophisticated Russian or Chinese disinformation operation, or even by a political dark-money group up to some dirty tricks. Instead, despite having no expertise in AI technology or audio editing, I made it myself. In less than five minutes. For free.

I produced my deepfake during a call with a civic integrity group that was trying to understand election integrity risks for 2024. They asked how hard it would be to produce a fake robocall from a politician. While continuing the call, I downloaded a “celebrity voice impersonation” app, produced the fake robocall, and played it for the (horrified) group.

It was so easy, anyone could do it.

And that’s why I predict that 2024 will be the year that AI will, unfortunately, democratize misinformation — empowering anyone with a keyboard to create fake news cheaply and at scale.

This is not limited to audio and video tools. Text-generation tools like ChatGPT and Google’s Bard have incredibly poor defenses against producing misinformation. Our team at NewsGuard conducted “Red Teaming” exercises for these tools and found that, when prompted with questions about known, even widely debunked misinformation narratives, AI chatbots repeated the misinformation 80-98% of the time, depending on the tool — despite widespread promises from the industry to invest in responsible AI development.

During a recent presentation to trust and safety industry professionals, I decided to demonstrate this problem in real time. It took precisely one prompt to get ChatGPT to write an article from a fake newspaper, the Madison Times-Index, reporting that “an unidentified box containing 27,428 uncounted ballots was discovered in the basement of a municipal building on the outskirts of Madison, Wisconsin.” The article was convincing and thorough, with quotes from made-up local election administrators and county officials — indistinguishable from any real article you might find in a local newspaper.

Consider that example in the context of the decline of local newspapers. At NewsGuard, we’ve tracked one major force filling that void: partisan “pink slime” sites funded by dark money on both sides of the political aisle that impersonate local newspapers to spread partisan propaganda. We project that these sites will outnumber real daily local newspapers in America this coming year. And now, AI companies have built such sites a perfect tool to create their misleading content fast, cheaply, and with astounding precision. Indeed, since the advent of OpenAI’s GPT-3.5, we’ve tracked the exponential growth of unreliable AI-generated news sites that publish AI-written content without human oversight or disclosure, including, often, false information.

You may be wondering: If AI is poised to supercharge misinformation, what can we, with our mere human intelligence, do to fight back?

The good news is that there are ways to combat this problem — but they require quick action from multiple stakeholders in technology and media.

First, AI companies that operate such tools need to safeguard their products against misuse. There are a number of ways this can be done — from fine-tuning models with data about known misinformation narratives and topics to adding guardrails that check prompts and responses for misinformation before providing responses to users. Our team at NewsGuard has adapted our datasets about false narratives and media reliability for this purpose with promising results — though we anticipate that we and others will need to continue innovating to keep up with new tactics deployed by malign actors.

Second, technology platforms from news aggregators to search engines to social media companies need to be especially vigilant about identifying what we at NewsGuard call “unreliable AI-generated news sites,: including political pink slime sites — and to empower users with context about these sources so that they can distinguish between real news and AI-generated fake news. Not only will this dampen the impact of AI-generated misinformation — but it will also surely help combat the erosion of users’ trust in online platforms.

Third, brands and advertisers need to remove the financial incentive for misinformation, which is funded in part by $2.6 billion in programmatic ad revenue each year. By responsibly advertising on credible news outlets while avoiding ad placements on AI-generated sites, political sites impersonating local media, and other unreliable sources that are likely to spread and amplify AI-generated misinformation, brands and advertisers can reach valuable and engaged audiences on news content while avoiding brand risk and not contributing to the broader misinformation problem. This requires more than simply blocking certain news-related keywords — as is the common practice within the so-called “brand safety” industry — and instead requires using expert third-party data to identify responsible media choices.

Last, those in the trust and credibility business, including news publishers, trust and safety teams, misinformation researchers, and others, need to view AI not just as a threat, but also as an opportunity. When left to its own devices, AI is a great misinformation superspreader. But when seeded with high-quality and precise human-curated information — the kind that journalists are in the business of producing every day — it can potentially be a force-multiplier for tracking and combating misinformation. The industry would be wise to invest heavily in research and development efforts to explore how this technology can become an ally rather than an enemy in the fight against misinformation.

Matt Skibinski is general manager of NewsGuard.