Sept. 12, 2017, 3:05 p.m.
Audience & Social

Adding a “disputed” label to fake news seems to work, a little. But for some groups, it actually backfires

Labeling only some fake news stories as fake can make some people more likely to believe other fake stories that aren't labeled.

Does labeling fake articles shared on social media as fake actually convince people who might otherwise read and share them to think twice? It's a fundamental question that Facebook itself isn't doing much to help answer, since it has declined to share more comprehensive data on the impact of the fact-checking initiatives it runs with organizations like the AP and Snopes.

The “disputed by third-party fact-checkers” label on fake news articles circulated on Facebook has only a very modest impact on people's perceptions, according to a working paper shared Tuesday by two researchers at Yale University: Gordon Pennycook, a psychology professor, and David G. Rand, an economics professor. (Politico got the first look; its story is here.) (The two are also authors of a recent paper on the “cognitive psychological profiles” of the people who fall for fake news and the role of people's “bullshit receptivity.”) But Pennycook and Rand also found that the tags can backfire among certain groups: Donald Trump supporters and people ages 18 to 25.

The study involved a total of 7,534 participants, who judged the accuracy of story headlines across seven different experiments; the headlines came from actual articles, both real and fabricated, posted to Facebook in 2016 or 2017. In one of the experiments, participants were recruited through Amazon's Mechanical Turk and asked, in a binary choice, whether they preferred Hillary Clinton or Donald Trump (emphasis ours):

The warnings were at least somewhat effective: fake news headlines tagged as disputed in the treatment were rated as less accurate than those in the control (warning effect), d=.20, z=6.91, p<.001. However, we also found evidence of a backfire: fake news headlines that were not tagged in the treatment were rated as more accurate than those in the control (backfire effect), d=.06, z=2.09, p=.037. This spillover was not confined to fake news: real news stories in the treatment were also rated as more accurate than real news stories in the control (real news spillover), d=.09, z=3.19, p=.001.... Although both groups evidenced a warning effect (Clinton, d=.21, z=3.19, p=.001; Trump, d=.16, z=2.84, p=.004) and a real news spillover (Clinton, d=.10, z=2.75, p=.006; Trump, d=.07, z=2.09, p=.083), the backfire effect was only present for those who preferred Trump, d=.11, z=2.58, p=.010, and not for those who preferred Clinton, d=.02, z=.49, p=.62 (although this difference between Trump and Clinton supporters was itself only marginally significant: meta-analytic estimate of interaction effect between condition and preferred candidate, z=1.68, p=.094). Furthermore, the backfire was roughly the same magnitude as the warning effect for Trump supporters…

…while participants 26 years and older showed a significant warning effect (N=4466), d=.23, z=7.33, p<.001, and no significant backfire effect, d=.03, z=.84, p=.402, the opposite was true for those 18-25 (N=805): for these younger subjects, the warning had no significant effect, d=.08, z=1.10, p=.271, and there was a relatively large backfire effect, d=.26, z=3.58, p<.001
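(A note on the notation, which comes from us rather than the paper: the d values quoted above are presumably Cohen's d, the standard effect-size measure in psychology, computed as the difference in mean accuracy ratings between the treatment and control conditions divided by the pooled standard deviation. By the usual rule of thumb, d around 0.2 is a small effect and d around 0.5 a medium one.)

\[ d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}} \]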

Facebook announced recently that it would start adding publishers' logos to articles shared on its platform. Pennycook and Rand also ran an experiment testing that intervention and found that the logos did nothing to change people's judgments of the accuracy of headlines:

Increasing the salience of a headline’s source by displaying the publisher’s logo seems even less promising: we found no effect whatsoever on accuracy judgements. This result is surprising given the large body of evidence that source legitimacy impacts perceptions of accuracy (Pornpitakpan, 2004). It suggests that even well-established mainstream news outlets are not seen as especially credible, and thus perceived accuracy of (true) stories from these outlets is not increased by emphasizing source. However, it should be noted that we included publisher logos on every real and fake headline that was presented to our participants — it is possible that only including logos for verified and established sources would prove effective (although, given the results of our explicit warning experiment, there is reason to be skeptical).

The full working paper, not yet peer-reviewed, is available here.

Crop of Library of Congress image by Stuart Rankin.
