July 13, 2018, 8:30 a.m.
Facebook might downrank the most vile conspiracy theories. But it won’t take them down.

Plus: (Some) researchers can now get access to (some) Facebook data, WhatsApp is funding misinformation research too, and susceptibility to fake news may have more to do with laziness than partisanship.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“Just for being false, that doesn’t violate the community standards.” CNN’s Oliver Darcy asked John Hegeman, the head of Facebook’s News Feed, why InfoWars is allowed to maintain a page (with over 900,000 followers) when it is notorious for spreading fake news and conspiracy theories, such as the claims that 9/11 was an inside job and that the Sandy Hook shooting never happened.

Hegeman said that the company does not “take down false news.”

“I guess just for being false that doesn’t violate the community standards,” Hegeman said, explaining that InfoWars has “not violated something that would result in them being taken down.”

Hegeman added, “I think part of the fundamental thing here is that we created Facebook to be a place where different people can have a voice. And different publishers have very different points of view.”

This thread from BuzzFeed’s Davey Alba has more from the same event. Facebook appears comfortable downranking these sources in its algorithm, but it doesn’t seem to want an actual human to make the call to ban them outright.

Through Social Science One, researchers can get access to Facebook data. Social Science One, launched Wednesday, is an independent research commission that will give social scientists access to previously private Facebook data. The initiative, announced back in April, is funded by outside organizations, and the research won’t be subject to Facebook’s approval. Robbie Gonzalez reported in Wired:

Starting today, researchers from around the world can apply for funding and data access that Social Science One will approve — not Facebook. If researchers want to search for something in the platform’s data that could make it look bad — or if they actually find something — Facebook won’t be able to pump the brakes.

To track opportunities and find out more:

- All information will be posted at SocialScience.One, with notifications on Twitter and Facebook. Researchers can also sign up for our mailing list.
- Social Science One will release regular RFIs (requests for information) and RFPs (requests for proposals).
- Formal proposals will all be submitted through the Social Science Research Council (SSRC).
- Proposals will be accepted on a rolling basis, with reviews scheduled periodically.
- Detailed codebooks for the available datasets will be posted at SocialScience.One.
- Over time, we will add new types of datasets, and most existing datasets will grow as more data come in.

WhatsApp wants to enable more research about misinformation, too (but won’t give data). The Facebook-owned WhatsApp will fund research into the spread of misinformation on the platform. “The program will make unrestricted awards of up to $50,000 per research proposal,” and grantees will be invited (travel and lodging paid) to two workshops. Applications close August 12, 2018, at 11:59 pm PT. Unlike with Social Science One, however, “no WhatsApp data will be provided to award recipients.”

Speaking of WhatsApp, this week it launched a feature that indicates when a message has been forwarded; the changes, the Financial Times reports, come after “a spate of lynchings in India that were alleged to have been sparked by false WhatsApp rumors.” WhatsApp is also running fake news warnings in Indian newspapers, including information about the new forwarding feature. India is WhatsApp’s largest market, with 200 million users.

“A bit more effort might go a long way.” New research from Gordon Pennycook and David Rand suggests that susceptibility to fake news is driven less by strong partisanship than by lazy, non-analytical thinking. That may be good news, if laziness is an easier problem to target (is it?). The research drew on 3,446 participants on Mechanical Turk. Pennycook and Rand write:

Individuals who are more willing to think analytically when given a set of reasoning problems (i.e., two versions of the Cognitive Reflection Test) are less likely to erroneously think that fake news is accurate. Crucially, this was not driven by a general skepticism toward news media: More analytic individuals were, if anything, more likely to think that legitimate (“real”) news was accurate. All of the real news stories that we used — unlike the fake ones — were factually accurate and came from mainstream sources. Thus, our evidence indicates that analytic thinking helps to accurately discern the truth in the context of news headlines. More analytic individuals were also better able to discern real from fake news regardless of their political ideology, and of whether the headline was Pro-Democrat, Pro-Republican, or politically neutral; and this relationship was robust to controlling for age, gender, and education.

They found some differences based on political ideology:

The overall capacity to discern real from fake news was lower among those who preferred Donald Trump over Hillary Clinton, relative to those who preferred Hillary Clinton over Donald Trump (the one exception being that in Study 2, those who preferred Trump were better at discerning Republican-consistent items)…

The present results indicate that there is, in fact, a political asymmetry when it comes to the capacity to discern the truth in news media. Moreover, the association between conservatism and media truth discernment held independently of CRT performance. This may help explain why Republican-consistent fake news was apparently more common than Democrat-consistent fake news leading up to the 2016 Presidential election (Allcott & Gentzkow, 2017; Guess, Nyhan, & Reifler, 2018) and why the media ecosystem (including open web links, and both Twitter and Facebook sharing) is more polarized on the political right than on the left in the U.S. (Faris et al., 2017). Nonetheless, it remains unclear precisely why Republicans (at least in Mechanical Turk samples) are apparently worse at discerning between fake and real news.
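For readers curious how this line of research typically turns headline ratings into a single “discernment” number: a common summary is the average accuracy rating a participant gives to real headlines minus the average rating they give to fake ones. Here’s a minimal, hypothetical Python sketch of that calculation; the function name and data layout are illustrative assumptions, not Pennycook and Rand’s actual analysis code.

```python
# Hypothetical sketch of a truth-discernment score: mean perceived
# accuracy of real headlines minus mean perceived accuracy of fake ones.
# Higher scores mean better discrimination between real and fake news.
from statistics import mean

def discernment_score(ratings):
    """ratings: list of (is_real, perceived_accuracy) pairs, where
    perceived_accuracy is a rating on, say, a 1-4 scale."""
    real = [score for is_real, score in ratings if is_real]
    fake = [score for is_real, score in ratings if not is_real]
    return mean(real) - mean(fake)

# A participant who rates real headlines 3-4 and fake ones 1-2:
participant = [(True, 4), (True, 3), (False, 1), (False, 2)]
print(discernment_score(participant))  # 2.0 (out of a possible 3.0)
```

On a measure like this, the Trump-vs.-Clinton difference the authors report is a gap in group averages, not a claim that any individual partisan can’t tell real from fake.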

Planning your 2018 travel? Here’s a calendar of digital disinformation–related events!

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).