July 19, 2018, 8:41 a.m.
Reporting & Production

The universe of people trying to deceive journalists keeps expanding, and newsrooms aren’t ready

“It’s going to be a while before we really have an understanding of how we work to combat it beyond the traditional methods that we have used for a few years now.”

Editor’s Note: Heather Bryant is a journalist and developer affiliated with the journalism program and the John S. Knight Journalism Fellowships at Stanford University. Jay Hamilton of the Stanford Department of Communication asked her to research and report on the near future of manipulated or AI-generated false video, images, and audio, and on how newsrooms are starting to strategize about handling it; the following article summarizes what she’s learned. R.B. Brenner edited this story.

Robyn Tomlin has led newsrooms in New York, Texas and North Carolina over the past two decades, and amid a torrent of change she’s noticed a constant. The universe of people trying to deceive journalists keeps expanding.

“When the intent goes up, people will find different technologies to try to help support the mission that they have, which in this case is to mislead us,” says Tomlin, McClatchy’s regional editor for the Carolinas. Previously, she was managing editor of The Dallas Morning News and editor of Digital First Media’s Project Thunderdome.

It’s as if journalists like Tomlin, in newsrooms small and large, have been playing a video game without the ability to change its settings. Every year, they level up into a new class of challenges, with more antagonists, more complicated storylines and an adversarial machine that seems to know their next moves.

The accelerating development of artificial intelligence–driven tools that can manipulate, or manufacture from scratch, convincing videos, photos, and audio files marks a new level of difficulty. At this point, most of us have seen the deepfakes — the Jordan Peele/Obama and Belgian Trump videos and more.

Researchers from the University of Washington have developed tools that can generate realistic video of a person based on audio files. Screenshot from video.

The problem for newsrooms is threefold: how to identify sophisticated manipulations, how to educate audiences without ​inducing apathy and deepening mistrust​, and how to keep the growth of this technology from casting doubt on legitimate and truthful stories.

Perhaps the biggest challenge will be managing the multifaceted levels of difficulty, says Mandy Jenkins, the director of news for Storyful, a social intelligence platform that many newsrooms rely on for verification of user-generated content. Jenkins will be a 2018-19 John S. Knight Journalism Fellow at Stanford University, where she and others will tackle challenges facing the future of journalism.

“As the AI gets better, it’s going to become pretty seamless. So the thing that we’re going to have to study more is the source,” Jenkins says. “Who is sharing this? What incentive do they have? Where did this really come from? Provenance will have to become more important than ever because you won’t be able to rely on things like location. Corroboration will become important, and maybe more difficult, depending on what the video is, but there should be reporting we can do around that. Alibis, figuring out other stories. Who else has seen this? Who else knows about it? Who filmed it?”

An arms race for the truth

There are technology efforts to create software that can help with verification and provenance. These include software that can detect anomalies and digital file signatures that record every step from a photo, video or audio clip’s creation to the point of dissemination, documenting all changes made. However, most of these tools are not yet ready or widely available to news organizations.
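
To make the signature idea concrete, here is a minimal sketch in Python of how a hash-chained provenance log might work: each step in a clip’s life is recorded as an entry whose hash links to the previous one, so rewriting the history later breaks the chain. The function and field names are illustrative assumptions, not any particular vendor’s format.

```python
# Minimal sketch (illustrative, not a real product): each step in a media
# file's history is logged as a hash-chained entry, so tampering with the
# recorded history is detectable.
import hashlib
import json
import time


def entry_hash(entry: dict) -> str:
    """Hash an entry's canonical JSON form."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


def append_step(log: list, action: str, file_bytes: bytes) -> list:
    """Record one provenance step: what was done, when, and the file's hash."""
    entry = {
        "action": action,  # e.g. "captured", "cropped", "published"
        "timestamp": time.time(),
        "file_sha256": hashlib.sha256(file_bytes).hexdigest(),
        "prev_hash": log[-1]["hash"] if log else None,
    }
    entry["hash"] = entry_hash(entry)  # hash of the body, computed before "hash" is added
    return log + [entry]


def verify_chain(log: list) -> bool:
    """Recompute every hash and link; any edit to the recorded history fails."""
    prev = None
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True
```

A newsroom receiving a clip with such a log could run verify_chain and compare the final file_sha256 to the file in hand. A real system would also need each entry signed by the capture device or editing software, since an unsigned chain could simply be regenerated by a forger — one reason these tools are not yet ready for wide use.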

Hany Farid, a digital forensics expert at Dartmouth College, notes that the arms race between better manipulation tools and better detection tools hasn’t worked out in favor of the truth so far. “Today, the reality is there’s a handful of people in the world who can probably authenticate this type of digital media, but that obviously doesn’t scale,” he says. “You can’t keep calling three people in the world every time you have to authenticate something.” Not long ago, newsrooms like The New York Times would contact Farid to help authenticate material and give him a day to work on it. Now with hyperfast news cycles, journalists want the intricate analysis done in hours.

The Washington Post has confidence in videos and photos it receives from The Associated Press, Reuters and Storyful, relying “heavily on their verification processes — processes that we understand, that we’ve vetted, that we have a clear idea of what they do to authenticate video,” says senior editor Micah Gelman, the Post’s director of video. Gelman’s team has yet to develop anywhere near the same level of confidence in user-generated content.

Gelman points to misrepresented images and video as the more common form of attempted deception today, such as photos presented as current images of a place like Syria when they are actually from older or unrelated events. Washington Post teams are training with experts to move beyond verification practices that have typically relied on spotting human-introduced errors: imprecise removal of someone or something, inaccurate shadows, missing reflections or other breaks with the laws of physics, or rough cuts that can be spotted by frame or waveform analysis.
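
As a rough illustration of the frame analysis mentioned above, the sketch below (Python with NumPy; the function name and the outlier threshold are assumptions for the example) flags frame-to-frame changes that are unusually large for a clip — the kind of spike a rough cut or splice can leave behind. Genuine forensic tools combine many such signals with far more sophisticated models.

```python
# Rough sketch: flag abrupt jumps between consecutive frames that could
# indicate a splice or rough cut. A simple z-score outlier test stands in
# for the much more elaborate analysis real forensic tools perform.
import numpy as np


def suspicious_transitions(frames: np.ndarray, z_thresh: float = 3.0) -> list:
    """frames: grayscale video as an array of shape (n_frames, height, width).

    Returns frame indices where the change from the previous frame is an
    outlier relative to the rest of the clip.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    std = diffs.std()
    if std == 0:  # perfectly uniform clip: nothing to flag
        return []
    z_scores = (diffs - diffs.mean()) / std
    return [i + 1 for i, z in enumerate(z_scores) if z > z_thresh]
```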

“It’s going to be a while before we really have an understanding of how we work to combat it beyond the traditional methods that we have used for a few years now,” Gelman says. “I think it is a dramatic game changer. We have never as a society faced…the power of this fraud. When you go back to looking at all the people who are still falling for really basic fake articles and things like that, when you add to it now the power of it being something you can watch, with a voice and a face that is of a public figure saying something? Oh, I think this has massive potential to create all kinds of disruptions all over the world.”

In the near future, photo apps that perfect images automatically or allow alterations like changing the time of day will be as easy to use as an Instagram filter. Although the manipulation technology is available now, Storyful’s Jenkins believes there is still enough time before it becomes widespread for newsrooms to invest in training and public education to offset the impact of high-quality fakes.

A system created by researchers from Google and MIT can automatically retouch images in the style of a professional photographer. It can run on a mobile phone and is fast enough to display retouched images in real time, so the photographer can see the final version of the image while still framing the shot. (Courtesy of MIT News)

“It’s not going to be available to everyone at that high-level quality for a while, and hopefully we can sort of stem the tide of public education about it and media education about it,” Jenkins says. “It is up to us to make sure that we as journalists are doing our part to continue to educate and expose and be transparent about that process, since we certainly can’t consistently trust the technology companies. It would be nice to, but I don’t.”

In whatever time they do have to prepare, newsrooms will need to get much better at adapting verification and provenance technology to their workflows and deadline pressures — and figure out how to pay for it. Few newsrooms beyond those with substantial technical resources could even consider developing their own technology solutions. McClatchy’s Tomlin says, “I think it’s a legitimate concern, particularly for small and mid-sized newsrooms that don’t have the experience, the technology, the know-how to be able to do the kind of vetting that needs to get done, or, for that matter, that don’t even necessarily know what to be looking for.”

Most news organizations lag well behind in adopting advanced digital tools. An International Center for Journalists survey on the state of technology in global newsrooms found that 71 percent of journalists use social media to find stories, but just 11 percent use social media verification tools. Additionally, only 5 percent of newsroom staff members have degrees in technology-related fields. Overall, the study concludes that journalists are not keeping pace with technology.

The problem is here and now

Experts on digital manipulation convey a greater sense of urgency about preparing for this threat than journalists do.

“I think none of us are prepared for it, honestly, on many levels. This is not just a journalism issue, it’s not a technology issue, it’s also a social media issue,” says Farid, who focuses on digital forensics, image analysis and human perception. He works on problems such as developing tools and practices to detect child exploitation or extremist content.

Farid is part of a Defense Advanced Research Projects Agency project called MediFor that’s working to put forensic techniques into the hands of law enforcement and journalists. He has also publicly spoken about the need for a global cyber ethics commission to hold accountable companies that facilitate the spread of misinformation.

Aviv Ovadya, chief technologist at the Center for Social Media Responsibility, breaks the problem down into production, distribution, and consumption. Production is the creation of the software and apps that make it easier to manipulate or generate deceptive material and the accessibility of those tools. Distribution entails the platforms where people are exposed to manipulations. Consumption is the effect these manipulations have on audiences and how it alters their perception of the world.

Currently, Ovadya says, there isn’t any infrastructure for facilitating responsibility on the part of the engineers creating the technology, whether they are working on behalf of tech companies or in academia. “You need people who are paid, that this is their job. Their job is to understand the impact, the unintended consequences. And that doesn’t currently exist. And it really needs to exist.”

At the same time, companies like Facebook, YouTube, and Twitter need to look beyond advertising-driven business models that reward virality, according to Farid. He argues that the platforms need to acknowledge that they are far more than neutral hosts.

“If it was just hosting [content], we could have a reasonable conversation as to how active a platform should be,” Farid says, adding, “But they’re doing more than that. They are actively promoting it.”

Farid points to the refugee crisis of the Rohingya people in Myanmar as an example of the dangers. Videos and posts that seek to stoke sentiment against the Rohingya people circulate across social platforms. “This is real lives at stake here and these companies have to decide what is their responsibility,” he says. “I think until they decide they’re going to do better, no matter how good we get on other fronts, we’re in deep trouble. Because at the end of the day, these are the platforms that are providing all this material for us.”

When it comes to distribution, no newsroom has figured out how to keep up with the speed of viral misinformation rocketing around the ecosystem. Clay Lambert is the editorial director of a five-person newsroom at the Half Moon Bay Review in Northern California, where he says he lacks the people and technology to detect and verify at the speed required. “Things could be generated out of whole cloth that would look perfectly normal. So, God bless for thinking that journalism’s going to catch all the bad actors,” he says.

On a local level, the more likely threat stems from exaggerations with the intent to distort community perceptions, hide malfeasance, or spread intolerance. “The concern isn’t video or images from events where there are multiple witnesses, but the times when it’s something unique or unusual, something that nobody else caught,” Lambert says.

Regardless of how verification evolves, The Washington Post’s Gelman does not see a future in which newsrooms rely less on user-generated content to see what’s happening in the world.

“The problem is caused by an opportunity, right? Everybody’s walking around with a video camera now. Everyone. And it’s their phone. So that is a journalistic opportunity to provide coverage that we never had before,” he says. “We can’t say we’re not going to do that anymore. We can’t take that great storytelling opportunity brought to us by technology. What we need to admit is that like all great storytelling opportunities that come from technology, there is a risk, and it’s our job to make sure, to mitigate that risk and not make mistakes.”

Capacity and transparency

For many newsrooms, risk mitigation means a comprehensive series of fact-checking steps layered with transparency.

Versha Sharma is the managing editor and senior correspondent at NowThis, a distributed news video company that publishes exclusively on social platforms. Sharma emphasizes the importance of NowThis’s multi-tiered system. “It is of the highest prioritization for us because we are entirely social video,” Sharma says. “We make sure that fact-checking and verification is at the top of every producer’s mind.”

NowThis frequently shows the source of footage within its videos and again during the closing credits.

“We ask our producers to credit footage no matter where it came from, whether it’s a paid subscription, whether it’s an individual, whether it’s been crowdsourced — whatever it is. So that our audience can always tell,” Sharma says.

Transparency is one of the tactics most likely to help newsrooms counter the now all-too-common claims of fake news. It’s not just that the technology to create sophisticated manipulations or generated media is being used to deceive. It’s that the very existence of the technology makes it easy for critics to claim legitimate reporting and media have been faked.

“The scariest part is that [new technologies will] be an easy tool for, whether it be politicians or pundits or whoever wants to do it, whoever wants to continue sowing distrust. This is a very easy tool for them,” Sharma says.

Across the board, journalists are thinking about how to educate audiences so they understand both the technology and the ethics of digital video, photo and audio production. Newsrooms have their work cut out for them. Recent research from the American Press Institute’s Media Insight Project on what Americans and the news media understand about each other concludes, “We have a public that doesn’t fully understand how journalists work, and journalism that doesn’t make itself understandable to much of the public.”

Many journalists are starting to realize that there is no final fight to win, only more levels of difficulty.

“I think this is going to be a situation where we get burned and several places get burned and burned hard,” Tomlin says, “before we start really trying to clue in to how serious this is.”

Researchers from multiple universities have created technology that allows for real-time changes to a speaker’s expressions and movements in a source video. (Screenshot from video)
