Facebook announced yet another tweak to the algorithm that governs its users’ News Feeds yesterday. The social network has introduced a new tool that allows users to flag a post as “a false news story.” The move follows a few other attempts by the platform to better delineate different types of content. For example, in August, it was reported that the company was experimenting with satire tags meant to help users differentiate between parody and news. It has also taken steps to push back against clickbait.
Importantly, Facebook doesn’t do any of this tagging itself. Instead, it relies on its more than one billion users to recognize and label links, videos, and photos that they perceive to be hoaxes. In an email, a Facebook spokesperson emphasized that the update is merely an additional signal helping to guide the News Feed ranking algorithm. (“This is an update to the News Feed ranking algorithm. There are no human reviewers or editors involved. We are not reviewing content and making a determination on its accuracy, and we are not taking down content reported as false.”)
Of course, there are humans involved in reviewing fake news content — just not ones who work for Facebook. But as Dartmouth assistant professor of government Brendan Nyhan suggests, at this point Facebook simply delivers too much content for its own human moderation to be feasible. “I think if they tried to put a human in the loop of the content moving through their platform, they would have to have an army,” he says. “Human moderation doesn’t scale well. Would you prefer a human doing this? I’m not sure I would. It requires a lot of background knowledge to determine what’s true and what’s false.”
FB relying on users self-policing to control spread of fake news. but doesnt viral quality necessarily mean many already duped?
— ಠ_ಠ (@MikeIsaac) January 20, 2015
It would be an exaggeration to say that fake news sites have plagued Facebook, but links to stories containing false information meant to drive traffic do exist and can be misleading to readers across the Internet. Adrienne LaFrance, a former Nieman Lab staffer and now a senior associate editor at The Atlantic, started a column called Antiviral at Gawker a year ago that was aimed at debunking viral hoaxes. She says users might not always find it as easy as Facebook expects to tell truth from fiction.
“Facebook is adding a layer of what looks like editorial accountability without actually taking on the responsibility of figuring out what’s true,” she wrote in an email. “So Facebook gives the impression that it is an editorial gatekeeper, but there’s still this buffer that protects Facebook from having to actually explain its thinking the way a newsroom would have to.”
Of course, with this measure Facebook isn’t taking aim at mainstream news outlets that get duped by hoaxers; its target is much narrower. From the press release:
The vast majority of publishers on Facebook will not be impacted by this update. A small set of publishers who are frequently posting hoaxes and scams will see their distribution decrease.
Craig Silverman, a fellow at Columbia’s Tow Center, recently founded Emergent.info, a “real-time rumor tracker” that “aims to develop best practices for debunking misinformation.” He’d reached out to Facebook before yesterday’s announcement in the hopes that they would take some kind of action against these sites that deliberately circulate false information.
“What they really try to do is jump on things that are already in the news, or celebrities — stuff that has some level of consciousness in the public,” Silverman says of these sites. “They say, based on the story that’s already out there, what can we do that gets a reaction out of people?” Silverman keeps a list of around 16 repeat offenders — including The Daily Currant, National Report, World News Daily Report, Empire News, ScrapeTV, and more — which he sent to Facebook, knowing they wouldn’t blacklist the sites, but hoping they would take some sort of action.
Facebook has displayed previous interest in debunking rumors and hoaxes. In the past year, they’ve published two papers that track how rumors spread. In one study, they looked at how users reacted to having their mistaken judgment pointed out to them by friends, typically by copy-pasting a link from Snopes.com, the rumor-fighting website. What they found was that “people are two times more likely to delete hoaxes after receiving a comment from a friend about it being a hoax.”
But users are also made uncomfortable by having attention drawn to their mistake, which can decrease interaction and engagement on the site. “By debunking this stuff, you look like a killjoy. You look like a know-it-all,” says Silverman. That finding has, naturally, influenced the way Facebook built its own anti-hoax tool. “They don’t want to put up barriers to sharing, or create negative experiences for people who have done the sharing,” Silverman adds. By introducing a crowd-based user tagging system that de-ranks hoax posts, rather than a more direct or aggressive approach, Facebook is attempting to maintain a sense of neutrality in the News Feed.
Facebook's outsourcing fake-news detection to users seems more about preserving illusion of unfiltered experience than actual effectiveness.
— Mark Coddington (@markcoddington) January 20, 2015
@markcoddington The thesis seems to be that human editing and curation is not to be trusted by readers. As if algorithms are neutral.
— Damon Kiesow (@dkiesow) January 20, 2015
Facebook says the false news tag is just one in a suite of signals it uses to guide its News Feed algorithm. But as long as it relies on automation, it’s conceivable that users could band together to abuse the tool.
Twitter has already encountered a version of this problem. In November, a New York Times story about Florida State University football players who received preferential treatment from the police was flagged as spam, which caused the URL to take readers to a warning page. Though Twitter hasn’t made clear exactly what happened, what’s evident is that user spam reports can cause errors that affect publishers. (Twitter hadn’t gotten back to me before publication time.)
“In my research, I’ve found people can be very resistant to unwelcome information,” says Nyhan. “I wonder if people would report things as hoaxes that they don’t like. Imagine you see a story about climate change, and you don’t believe in climate change. If enough people do that, does it start monkeying with the algorithm in problematic ways?”
In response to questions about how they would deal with such an attack, a Facebook spokesperson would only say: “Reporting a story as false is another negative signal, similar to reporting a post as spam. Using a range of signals in ranking helps guard against abuse.”
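Facebook hasn’t said how its signals are actually combined, but the spokesperson’s logic — that one signal among many can’t easily dominate — can be sketched with a toy example. Everything below is illustrative: the signal names, weights, and numbers are invented for this sketch, not Facebook’s actual ranking features.

```python
# Toy sketch of multi-signal ranking (NOT Facebook's real algorithm).
# The idea: when a "reported as false" count is just one weighted input
# among several, a coordinated wave of bad-faith reports against a
# genuinely popular post is offset by its strong positive signals,
# while a true hoax with weak organic engagement still sinks.

def rank_score(signals, weights):
    """Weighted sum of a post's signals; higher means ranked higher."""
    return sum(weights[name] * value for name, value in signals.items())

WEIGHTS = {
    "likes": 1.0,
    "shares": 2.0,
    "comments": 1.5,
    "spam_reports": -3.0,
    "false_news_reports": -3.0,  # the new negative signal
}

# A popular climate-change story mass-reported by users who dislike it:
brigaded_post = {
    "likes": 500, "shares": 120, "comments": 80,
    "spam_reports": 2, "false_news_reports": 200,
}

# A hoax post: little organic engagement, plenty of organic reports:
hoax_post = {
    "likes": 40, "shares": 10, "comments": 5,
    "spam_reports": 30, "false_news_reports": 60,
}

print(rank_score(brigaded_post, WEIGHTS))  # stays positive despite brigading
print(rank_score(hoax_post, WEIGHTS))      # goes negative; gets de-ranked
```

In this toy model the brigaded post’s engagement outweighs the reports, which is one plausible reading of “using a range of signals in ranking helps guard against abuse” — though whether the real system is this robust is exactly what Nyhan is questioning.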
If the tweak works, that will be good news for publishers who won’t have to compete as directly with fake, viral stories. Facebook, long a big driver of news traffic, grew even bigger in 2014, with many Facebook users getting little news from other sources.
Silverman said it’s important to remember Facebook’s moves are rooted in self-interest: a better user experience means more engaged users, which means more profit. “They want news producers and content producers to put content on Facebook and do revenue shares. They want that environment to be good for monetization,” he says. “They want people to have a good experience and not say, ‘Everything I saw on my News Feed is garbage.’”