Jan. 7, 2020, 10:42 a.m.
Audience & Social
LINK: about.fb.com   |   Posted by: Laura Hazard Owen   |   January 7, 2020

The Washington Post first reported late Monday, and Facebook confirmed, that the social media company will ban manipulated videos it considers “deepfakes.” Here’s how Facebook defines that:

— It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:

— It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

Deepfakes, per this definition, remain super-rare outside technology demos and porn, and some researchers argue their threats are overblown. “Deepfakes are no more scary than their predecessors, ‘shallowfakes,’ which use far more accessible editing tools to slow down, speed up, omit or otherwise manipulate context. The real danger of fakes — deep or shallow — is that their very existence creates a world in which almost everything can be dismissed as false,” First Draft’s Claire Wardle wrote in The New York Times last year.

And shallowfakes are still allowed. Facebook again:

This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.

That would seem to mean, the Post points out, that videos like “a deceptively edited clip of House Speaker Nancy Pelosi that went viral on the social network last year,” which was slowed down to make Pelosi appear drunk, would still get through. (“We don’t have a policy that stipulates that the information you post on Facebook must be true,” Facebook said at the time.)
