Nieman Foundation at Harvard
Jan. 21, 2015, 3:27 p.m.

What does Facebook’s new tool for fighting fake news mean for real publishers?

Tweaks to the News Feed algorithm that push back against satire and hoaxes suggest that Facebook wants better content — but is pushing responsibility onto its users.

Facebook announced yet another tweak to the algorithm that governs its users’ News Feeds yesterday. The social network has introduced a new tool that allows users to flag a post as “a false news story.” The move follows a few other attempts by the platform to better delineate different types of content. In August, for example, it was reported that the company was experimenting with satire tags meant to help users differentiate between parody and news. It has also taken steps to push back against clickbait.

Importantly, Facebook doesn’t do any of this tagging itself. Instead, it relies on its over one billion users to recognize and label links, videos, and photos that they perceive to be hoaxes. In an email, a Facebook spokesperson emphasized that the update is merely an additional signal helping to guide the News Feed ranking algorithm. (“This is an update to the News Feed ranking algorithm. There are no human reviewers or editors involved. We are not reviewing content and making a determination on its accuracy, and we are not taking down content reported as false.”)

Of course, there are humans involved in reviewing fake news content — just not ones who work for Facebook. But as Dartmouth assistant professor of government Brendan Nyhan suggests, at this point Facebook simply delivers too much content for its own human moderation to be feasible. “I think if they tried to put a human in the loop of the content moving through their platform, they would have to have an army,” he says. “Human moderation doesn’t scale well. Would you prefer a human doing this? I’m not sure I would. It requires a lot of background knowledge to determine what’s true and what’s false.”

It would be an exaggeration to say that fake news sites have plagued Facebook, but links to stories containing false information designed to drive traffic do exist and can mislead readers across the Internet. Adrienne LaFrance, a former Nieman Lab staffer who is now a senior associate editor at The Atlantic, started a column at Gawker a year ago called Antiviral, aimed at debunking viral hoaxes. She says users might not always find it as easy as Facebook expects to tell truth from fiction.

“Facebook is adding a layer of what looks like editorial accountability without actually taking on the responsibility of figuring out what’s true,” she wrote in an email. “So Facebook gives the impression that it is an editorial gatekeeper, but there’s still this buffer that protects Facebook from having to actually explain its thinking the way a newsroom would have to.”

Of course, with this measure Facebook isn’t taking aim at mainstream news outlets that get duped by hoaxers; its target is much narrower. From the press release:

The vast majority of publishers on Facebook will not be impacted by this update. A small set of publishers who are frequently posting hoaxes and scams will see their distribution decrease.

Craig Silverman, a fellow at Columbia’s Tow Center, recently founded Emergent.info, a “real-time rumor tracker” that “aims to develop best practices for debunking misinformation.” He’d reached out to Facebook before yesterday’s announcement in the hopes that the company would take some kind of action against these sites that deliberately circulate false information.

“What they really try to do is jump on things that are already in the news, or celebrities — stuff that has some level of consciousness in the public,” Silverman says of these sites. “They say, based on the story that’s already out there, what can we do that gets a reaction out of people?” Silverman keeps a list of around 16 repeat offenders — including The Daily Currant, National Report, World News Daily Report, Empire News, ScrapeTV, and more — which he sent to Facebook, knowing they wouldn’t blacklist the sites, but hoping they would take some sort of action.

Facebook has displayed previous interest in debunking rumors and hoaxes. In the past year, the company has published two papers that track how rumors spread. In one study, researchers looked at how users reacted to having their mistaken judgment pointed out to them by friends, typically via a copy-pasted link from Snopes.com, the rumor-fighting website. They found that “people are two times more likely to delete hoaxes after receiving a comment from a friend about it being a hoax.”

But users are also made uncomfortable by having attention drawn to their mistakes, which can decrease interaction and engagement on the site. “By debunking this stuff, you look like a killjoy. You look like a know-it-all,” says Silverman. That finding has, naturally, influenced the way Facebook built its own anti-hoax tool. “They don’t want to put up barriers to sharing, or create negative experiences for people who have done the sharing,” Silverman adds. By introducing a crowd-based user tagging system that de-ranks hoax posts, rather than a more direct or aggressive approach, Facebook is attempting to maintain a sense of neutrality in the News Feed.

Facebook says the false news tag is just one in a suite of tools it uses to guide its algorithm. But as long as the company relies on automated crowd reports, it’s conceivable that users could band together to abuse the tool.

Twitter has already encountered a version of this problem. In November, a New York Times story about Florida State University football players who received preferential treatment from the police was flagged as spam, which caused the URL to take readers to a warning page. Though Twitter hasn’t made clear exactly what happened, what’s evident is that user spam flags can trigger errors that affect publishers. (Twitter hadn’t gotten back to me before publication time.)

“In my research, I’ve found people can be very resistant to unwelcome information,” says Nyhan. “I wonder if people would report things as hoaxes that they don’t like. Imagine you see a story about climate change, and you don’t believe in climate change. If enough people do that, does it start monkeying with the algorithm in problematic ways?”

In response to questions about how they would deal with such an attack, a Facebook spokesperson would only say: “Reporting a story as false is another negative signal, similar to reporting a post as spam. Using a range of signals in ranking helps guard against abuse.”
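Facebook hasn’t said how those signals are actually combined, but the spokesperson’s framing — many signals, each one weak on its own — can be illustrated with a small sketch. Everything here (the signal names, the weights, the linear combination) is a hypothetical illustration, not Facebook’s actual system:

```python
# Hypothetical sketch: combining several independent negative signals into a
# single ranking penalty, so that no one signal (such as "reported as false")
# can bury a post by itself. Signal names and weights are invented.

def ranking_penalty(signals, weights):
    """Weighted sum of per-signal rates (each a 0.0-1.0 fraction of viewers)."""
    return sum(weights.get(name, 0.0) * rate for name, rate in signals.items())

post_signals = {
    "reported_false": 0.02,  # fraction of viewers who flagged the post as false
    "reported_spam": 0.01,   # fraction who flagged it as spam
    "hidden": 0.05,          # fraction who hid the post from their feed
}
signal_weights = {"reported_false": 0.5, "reported_spam": 0.3, "hidden": 0.2}

penalty = ranking_penalty(post_signals, signal_weights)  # 0.023
```

In a scheme like this, a coordinated campaign that inflates only the “reported as false” rate moves the total penalty far less than genuinely bad content, which tends to trip several signals at once — one plausible reading of why “using a range of signals in ranking helps guard against abuse.”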

If the tweak works, that will be good news for publishers, who won’t have to compete as directly with fake viral stories. Facebook, long a major driver of news traffic, became an even bigger one in 2014, with many Facebook users getting little news from other sources.

Silverman says it’s important to remember that Facebook’s moves are rooted in self-interest: a better user experience means more engaged users, which means more profit. “They want news producers and content producers to put content on Facebook and do revenue shares. They want that environment to be good for monetization,” he says. “They want people to have a good experience and not say, ‘Everything I saw on my News Feed is garbage.'”

Photo by Franco Bouly used under a Creative Commons license.
