Sept. 8, 2017, 9:16 a.m.
Audience & Social

Factchecking works better when it’s between friends. (Then again, who wants to be the “snoper”?)

Plus: Facebook sold (at least?) $100,000 worth of political ads to Russia, and its factchecking partners are annoyed again.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Russia bought political ads on Facebook. Facebook accepted (at least?) $100,000 worth of political ads from a “Russian ‘troll farm’ with a history of pushing pro-Kremlin propaganda” during the 2016 presidential election campaign, The Washington Post reported this week, reinvigorating criticism of the social media company’s lack of transparency. (For one particularly scathing look, published before the news about the Russian ads but including plenty of information on how Facebook handles advertising, here’s John Lanchester in the London Review of Books: “For all the corporate uplift of its mission statement, Facebook is a company whose essential premise is misanthropic…even more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens.”)

Alex Stamos, Facebook’s chief security officer, posted:

— The vast majority of ads run by these accounts didn’t specifically reference the US presidential election, voting or a particular candidate.

— Rather, the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.

— About one-quarter of these ads were geographically targeted, and of those, more ran in 2015 than 2016.

— The behavior displayed by these accounts to amplify divisive messages was consistent with the techniques mentioned in the white paper we released in April about information operations.

The New York Times also published an investigation into the methods that Russian operatives used to “spread anti-Clinton messages and promote the hacked material they had leaked” on Facebook and Twitter during the election.

ProPublica announced Thursday that it’s “launching a crowdsourcing tool that will gather political ads from Facebook, the biggest online platform for political discourse.” The tool will initially be used in the run-up to the German parliamentary election on September 24, and will then be used in other elections, including the 2018 U.S. midterms.

We are working with three news outlets in Germany — Spiegel Online, Süddeutsche Zeitung and Tagesschau. They will ask their readers to install our tool, and will use it themselves to monitor ads during the election.

The tool is a small piece of software that users can add to their web browser (Chrome). When users log into Facebook, the tool will collect the ads displayed on the user’s news feed and guess which ones are political based on an algorithm built by ProPublica.

One benefit for interested users is that the tool will show them Facebook political ads that weren’t aimed at their demographic group, and that they wouldn’t ordinarily see.
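For a concrete sense of how such a tool could work, here’s a minimal TypeScript sketch of a browser-extension content script in the spirit of ProPublica’s description. The DOM selector, the keyword list, and the collection endpoint are all illustrative assumptions, not ProPublica’s actual code; the real project uses an algorithm ProPublica built, not a simple keyword match.

```typescript
// content-script.ts: a hypothetical sketch of an ad-collecting extension.
// It scrapes posts marked as sponsored from the Facebook feed, scores how
// "political" each one looks, and ships the likely-political ads to a server.

// Assumed keyword list; ProPublica's real classifier is a trained algorithm.
const POLITICAL_TERMS = ["election", "vote", "senator", "immigration", "bundestag"];

interface CollectedAd {
  text: string;
  politicalScore: number; // fraction of POLITICAL_TERMS found in the ad text
}

function scoreAd(text: string): number {
  const lower = text.toLowerCase();
  const hits = POLITICAL_TERMS.filter((term) => lower.includes(term)).length;
  return hits / POLITICAL_TERMS.length;
}

function collectAds(): CollectedAd[] {
  // Facebook labels ads "Sponsored"; this selector is a stand-in, since the
  // real markup changes often and is deliberately hard to target.
  const nodes = document.querySelectorAll<HTMLElement>('[data-testid="sponsored_post"]');
  return Array.from(nodes).map((node) => ({
    text: node.innerText,
    politicalScore: scoreAd(node.innerText),
  }));
}

// Send anything that looks political to a collection endpoint (URL is hypothetical).
const likelyPolitical = collectAds().filter((ad) => ad.politicalScore > 0);
fetch("https://example.org/api/political-ads", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(likelyPolitical),
});
```

One nice property of this design is that the classification step can run client-side, so the extension only uploads ads it believes are political rather than a user’s entire feed.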

Beyond a crowdsourcing investigation, though: Senator Mark Warner (D-Va.), vice chair of the Senate Intelligence Committee, said there needs to be more transparency about who’s buying political ads online, and that Congress should “pass legislation to put disclosure requirements on social media advertising similar to those for television commercials.”

Facebook’s factchecking partners are also annoyed by its opacity. Snopes, PolitiFact, and others who partnered with Facebook on its factchecking initiative are, understandably, annoyed that they don’t have more information on whether what they’re doing is actually working, Politico’s Jason Schwartz reports.

[B]ecause the company has declined to share any internal data from the project, the fact-checkers say they have no way of determining whether the “disputed” tags they’re affixing to “fake news” articles slow — or perhaps even accelerate — the stories’ spread. They also say they’re lacking information that would allow them to prioritize the most important stories out of the hundreds possible to fact-check at any given moment.

A Facebook product manager told Politico that they “have seen data that, when a story is flagged by a third-party fact-checker, it reduces the likelihood that somebody will share that story,” but (obv) didn’t share specifics.

What do people think of factcheckers and factchecking? Two papers on that question this week. First: “Political factchecking on Twitter: When do corrections have an effect?” Drew B. Margolin (assistant professor of communication at Cornell), Aniko Hannak, and Ingmar Weber looked at how social connections can make people more likely to accept factchecks on Twitter.

In this study, we collect a set of fact-checking interventions, which we refer to as ‘snopes,’ that respond to political rumors on Twitter. We then distill our observations to the subset of cases where (a) the snope is clearly a correction of a false idea; (b) the correction was unsolicited; and (c) the individual who is being corrected (the “snopee”) replies to the individual who corrected them (the “snoper”), allowing us to analyze their response to the correction. We then test hypotheses about how the relationship between snoper and snopee should influence these replies based on social network theory.
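To make the study’s distillation steps concrete, here’s a toy TypeScript sketch of conditions (a) through (c) and of the friend-versus-stranger comparison the hypothesis turns on. The record shape, the follow-graph structure, and the acceptance coding are all invented for illustration; the actual analysis works from coded Twitter data.

```typescript
// One observed correction: a "snoper" replying to a "snopee" with a fact-check.
interface CorrectionEvent {
  snoperId: string;          // account posting the snopes.com link
  snopeeId: string;          // account that shared the rumor
  clearCorrection: boolean;  // (a) the snope clearly corrects a false idea
  unsolicited: boolean;      // (b) the snopee did not ask to be fact-checked
  snopeeReplied: boolean;    // (c) the snopee replied to the snoper
  accepted: boolean;         // coded from the reply text (assumed field)
}

// follows.get(a) is the set of accounts that a follows (assumed structure).
type FollowGraph = Map<string, Set<string>>;

function mutualFollows(graph: FollowGraph, a: string, b: string): boolean {
  return (graph.get(a)?.has(b) ?? false) && (graph.get(b)?.has(a) ?? false);
}

// Distill to the analyzable subset: keep only events meeting (a), (b), and (c).
function analyzableSubset(events: CorrectionEvent[]): CorrectionEvent[] {
  return events.filter((e) => e.clearCorrection && e.unsolicited && e.snopeeReplied);
}

// Compare acceptance rates when snoper and snopee follow each other vs. not.
function acceptanceByRelationship(events: CorrectionEvent[], graph: FollowGraph) {
  const subset = analyzableSubset(events);
  const rate = (xs: CorrectionEvent[]) =>
    xs.length === 0 ? 0 : xs.filter((e) => e.accepted).length / xs.length;
  const friends = subset.filter((e) => mutualFollows(graph, e.snoperId, e.snopeeId));
  const strangers = subset.filter((e) => !mutualFollows(graph, e.snoperId, e.snopeeId));
  return { friendRate: rate(friends), strangerRate: rate(strangers) };
}
```

Under the paper’s hypothesis, friendRate should come out higher than strangerRate, and that’s what the authors report.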

The researchers note that “although the direct consequences of false political information are hard to discern, individuals can experience negative social consequences of spreading false political information.”

[I]n our data we observe numerous instances of people rebuking those who have shared false information. These rebukes are often followed by attempts to save face. One individual asserted that Obamacare required U.S. citizens to be implanted with microchips. Five minutes later, a snoper sent her a link to a snopes.com refutation of the claim and said “Do some homework.” The snopee immediately replied to show they had done so: “I just did, right before you sent that LOL It wasn’t on snopes when I first seen it. I checked.”

(Here’s that one on Snopes, btw.)

The researchers found, consistent with their hypothesis, that “individuals who follow and are followed by the people who correct them are significantly more likely to accept the correction than individuals confronted by strangers.” There are also a bunch of wrinkles, though, leaving some questions:

— Are corrections more effective than they appear? “The source of the ineffectiveness of corrections observed in public discourse…may be due to the social position of these corrections, rather than an innate tendency for people to resist facts. In other words, corrections may appear to be ineffective because corrections from strangers are both more common and less likely to be accepted. These tendencies will serve to paint a bleak picture in any data set that does not account for the snoper–snopee relationship.”

— Do face-to-face corrections function in the same way as corrections on social media (probably not)? “It may be that stranger–stranger corrections are dominant on Twitter because of the unique opportunity that social media provide for individuals to carry on conversations with complete strangers. In other conversational settings, people spend their time talking to their friends rather than to strangers. Thus, it may be that friend–friend corrections are more common than stranger corrections in face-to-face settings, and thus acceptance of facts is more common than social media and laboratory studies might lead us to conclude.” On the other hand, friends may be “reluctant to correct one another” face to face.

Friendships across ideological divides seem to be key to stopping the flow of misinformation. “If we want to get others to take certain facts more seriously, we have to form personal bonds with them, rather than simply trying to find a way for an algorithm to change what flows across their screen,” Margolin said in a release.

Researchers from the EU’s REVEAL Project looked at how both “young journalists” and social media users perceive factchecking and verification services. Most interesting to me were the individual interviews. Here’s one 23-year-old who seems to have learned in school never to trust Wikipedia:

Wikipedia is widely used, but as a journalist, I can’t use it. I never use Wikipedia as a source. It is too much work to verify the different statements on Wikipedia, where they come from. The same counts for Snopes. I would rather make a phone call to an expert and use her as a source.

The paper also includes a number of anti-Snopes comments from social media users.

Snopes IS? A husband and wife, without any scientific background, without any investigatory experience. They get their info from Google. They are a joke.

Conference calls about misinformation! The Factual Democracy Project mentioned here is now funded. The first call is on Tuesday, Sept. 12 at 1 p.m. ET, and will “explore the various forms of computational propaganda and the ways our social media feeds are manipulated to amplify extremism and sway public opinion.”

Want EVEN MORE academic papers? I made a Twitter list of people who study this stuff for a living. It’s in progress, and I’d welcome suggestions on who to add.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).
PART OF A SERIES     Real News About Fake News