Nieman Foundation at Harvard
Sept. 12, 2017, 3:05 p.m.
Audience & Social

Adding a “disputed” label to fake news seems to work, a little. But for some groups, it actually backfires

Labeling only some fake news stories as fake can make some people more likely to believe other fake stories that aren’t labeled.

Does labeling fake articles shared on social media as fake actually convince people who might otherwise read and share them to think twice? It’s a fundamental question that Facebook itself isn’t doing much to help answer: the company has declined to share comprehensive data on the impact of the fact-checking initiatives it runs with organizations like the AP and Snopes.

The “disputed by third-party fact-checkers” label that Facebook applies to fake news articles has only a very modest impact on people’s perceptions, according to a working paper shared Tuesday by two Yale University researchers: Gordon Pennycook, a psychology professor, and David G. Rand, an economics professor. (Politico got the first look; their story is here.) The two are also authors of a recent paper on the “cognitive psychological profiles” of people who fall for fake news and the role of “bullshit receptivity.” But Pennycook and Rand also found that the tags can backfire within certain groups: Donald Trump supporters and people ages 18 to 25.

The study involved a total of 7,534 participants who judged the accuracy of fake news articles across seven experiments, in which researchers showed participants story headlines (from actual articles, both real and fabricated, posted to Facebook in 2016 or 2017). In one of the experiments, participants recruited through Amazon’s Mechanical Turk were asked, in a binary choice, whether they preferred Hillary Clinton or Donald Trump (emphasis ours):

The warnings were at least somewhat effective: fake news headlines tagged as disputed in the treatment were rated as less accurate than those in the control (warning effect), d=.20, z=6.91, p<.001. However, we also found evidence of a backfire: fake news headlines that were not tagged in the treatment were rated as more accurate than those in the control (backfire effect), d=.06, z=2.09, p=.037. This spillover was not confined to fake news: real news stories in the treatment were also rated as more accurate than real news stories in the control (real news spillover), d=.09, z=3.19, p=.001.... Although both groups evidenced a warning effect (Clinton, d=.21, z=3.19, p=.001; Trump, d=.16, z=2.84, p=.004) and a real news spillover (Clinton, d=.10, z=2.75, p=.006; Trump, d=.07, z=2.09, p=.083), the backfire effect was only present for those who preferred Trump, d=.11, z=2.58, p=.010, and not for those who preferred Clinton, d=.02, z=.49, p=.62 (although this difference between Trump and Clinton supporters was itself only marginally significant: meta-analytic estimate of interaction effect between condition and preferred candidate, z=1.68, p=.094). Furthermore, the backfire was roughly the same magnitude as the warning effect for Trump supporters…

…while participants 26 years and older showed a significant warning effect (N=4466), d=.23, z=7.33, p<.001, and no significant backfire effect, d=.03, z=.84, p=.402, the opposite was true for those 18-25 (N=805): for these younger subjects, the warning had no significant effect, d=.08, z=1.10, p=.271, and there was a relatively large backfire effect, d=.26, z=3.58, p<.001
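The quotes above report Cohen’s d effect sizes alongside z-statistics and p-values. As a quick sanity check on how those numbers relate, a two-tailed p-value can be recovered from a reported z-statistic under a standard normal approximation (a minimal sketch using only Python’s standard library; the paper’s exact tests may differ):

```python
import math

def two_tailed_p(z: float) -> float:
    """Two-tailed p-value for a z-statistic under the standard normal.

    Uses the complementary error function: p = erfc(|z| / sqrt(2)).
    """
    return math.erfc(abs(z) / math.sqrt(2))

# The backfire effect reported as z=2.09 corresponds to p=.037:
print(round(two_tailed_p(2.09), 3))  # → 0.037

# The warning effect reported as z=6.91 is well below p<.001:
print(two_tailed_p(6.91) < 0.001)  # → True
```

For scale, by Cohen’s conventions a d of about 0.2 counts as a “small” effect, which matches the paper’s framing of the warning label as having “only a very modest impact.”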

Facebook announced recently that it would start to add publishers’ logos to articles shared on its platform. Pennycook and Rand also ran an experiment around that intervention, and found that the logos did nothing to change people’s judgment of the accuracy of headlines:

Increasing the salience of a headline’s source by displaying the publisher’s logo seems even less promising: we found no effect whatsoever on accuracy judgements. This result is surprising given the large body of evidence that source legitimacy impacts perceptions of accuracy (Pornpitakpan, 2004). It suggests that even well-established mainstream news outlets are not seen as especially credible, and thus perceived accuracy of (true) stories from these outlets is not increased by emphasizing source. However, it should be noted that we included publisher logos on every real and fake headline that was presented to our participants — it is possible that only including logos for verified and established sources would prove effective (although, given the results of our explicit warning experiment, there is reason to be skeptical).

The full working paper, not yet peer-reviewed, is available here.

Crop of Library of Congress image by Stuart Rankin.
