Nieman Foundation at Harvard
Sept. 15, 2017, 8:57 a.m.
Audience & Social

You could change your mind. Or maybe (comforting thought!) you could just let Facebook do it for you

Plus: “The year’s most consequential storylines have collided,” the differences between “observational” and direct correction, and one more trip to Macedonia.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

All the cool academics are studying fake news. Rasmus Kleis Nielsen combed through the 150 papers being presented at this week’s Future of Journalism Conference at Cardiff University and found that the largest category (beyond “other”), at 17 percent, focused on “post-truth, truth, and fake news.”

He notes that “there is relatively little audience research in general, and on this in particular — though Irene Costera Meijer and Tim Groot Kormelink had interesting data and analysis.” (Their paper is “What clicks actually mean: Exploring digital news user practices.” I’m always on the lookout for more audience research on fake news to include in this column!)

Two studies on changing people’s minds. First up, online in Psychological Science: “Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation.” Man-pui Sally Chan, Christopher R. Jones, Kathleen Hall Jamieson, and Dolores Albarracín (from the University of Illinois at Urbana-Champaign and the Annenberg Public Policy Center; Jamieson is a cofounder of FactCheck.org) looked at 20 previous experiments and came away with three recommendations:

Reduce arguments that support misinformation: News accounts about misinformation should not inadvertently repeat or belabor “detailed thoughts in support of the misinformation.”

Engage audiences in scrutiny and counterarguing of information: Educational institutions should promote a state of healthy skepticism. When trying to correct misinformation, it is beneficial to have the audience involved in generating counterarguments.

Introduce new information as part of the debunking message: People are less likely to accept debunking when the initial message is just labeled as wrong rather than countered with new evidence.

Second is “Using expert sources to correct health misinformation in social media,” published online in Science Communication by Emily K. Vraga (George Mason) and Leticia Bode (Georgetown). They write:

When individuals see falsehoods touted as truth on social media, what should they do? If they correct them, will it matter? Or are organizations better suited to playing a corrective role in this space? And how is the credibility of either of these actors affected by the choice to intervene?

They exposed users to “a simulated Twitter feed that includes false information about the origins of the Zika virus” and tested how they reacted when exposed to corrections either from the CDC, from other unknown Twitter users, or both. (Let’s just hope that the CDC keeps, you know, putting information out there.) They found that seeing these corrections helped change beliefs — “Both users and organizations should speak up when they see misinformation on social media” — but note that this study is about observational correction:

In this study, we are not correcting the person who is posting the misinformation directly. Instead, our participants observe correction of misinformation that is posted on social media by unknown others. As such, it may avoid as strongly triggering the motivated reasoning processes that undermine efforts to correct misinformation (Lewandowsky et al., 2012; Nisbet et al., 2015; Nyhan & Reifler, 2010). Furthermore, it may be that the social fabric of platforms like Twitter or Facebook (Messing & Westwood, 2014) makes this type of observational correction easier than what occurs in other spaces, such as corrections within news articles. Future research should test whether such corrections remain effective when they are directed at known others or when they directly respond to an individual’s post. Yet, even if such corrective efforts are ineffective or alienate the person being targeted with the correction, they may have merit in mitigating misperceptions among the broader social media community.

It’s okay if people don’t believe Facebook’s fact-checking stuff. That’s not the only reason to do it, writes BuzzFeed’s Craig Silverman: “The public’s reaction to the disputed label is largely irrelevant to stopping the spread of misinformation.” First, news stories that are tagged as “disputed” have their reach reduced by Facebook’s algorithm. Second:

These fact checked links are being added to what is fast becoming the world’s biggest and most up to date database of false stories. As with everything about Facebook, it’s the data and the algorithms that matter most.

With each new debunked story, the company gathers more data it can use to train its algorithms to make better decisions about which content to surface in the News Feed. This means the fact checkers are in effect working as content raters for Facebook in order to help train machines. Not surprisingly, this isn’t what motivates the fact checkers to do their work.

“I don’t want to sound like a Neanderthal but I’m not really focusing on it,” Aaron Sharockman, the executive director of PolitiFact, told BuzzFeed News. “For us, our biggest priorities are to make the tools we use to spot and fact check fake news as efficient as possible so we can cover as much ground and have an impact.”

Hey! Content rater! Get back to work.
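The feedback loop Silverman describes — fact-checker verdicts becoming labeled training data for a model that flags similar content — can be illustrated with a minimal sketch. This is purely hypothetical: the headlines, labels, and model here are invented for illustration and bear no relation to Facebook’s actual systems.

```python
# Illustrative sketch only: fact-checker verdicts used as labels to train
# a simple "disputed content" text classifier. NOT Facebook's real system;
# all data and model choices here are assumptions for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: headlines paired with fact-checker verdicts
# (1 = rated false/disputed, 0 = not disputed).
headlines = [
    "Zika virus was created in a lab, insider claims",
    "CDC issues updated travel guidance for Zika-affected areas",
    "Miracle cure eliminates virus overnight, doctors stunned",
    "Health officials report decline in new infection cases",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Each newly debunked story would extend the training data --
# the "content rater" loop the passage describes.
score = model.predict_proba(["Secret lab origin of virus revealed"])[0][1]
print(round(score, 2))  # model's probability that the headline is disputed
```

On four toy examples the probabilities are meaningless, of course; the point is only the pipeline shape: human verdicts in, labeled examples accumulated, classifier retrained.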

“Sleepy riverside town in Macedonia.” This is becoming its own genre of fake news story (which is somewhat problematic), but CNN Money went to Veles, Macedonia, to see how it’s “gearing up” for the 2020 U.S. presidential election. “That is your way of producing really quick money. We don’t even try to stop them,” Veles’ mayor said of the fake news creators.

“The year’s most consequential storylines have collided.” Special counsel Robert Mueller is investigating “Russia’s effort to influence U.S. voters through Facebook and other social media,” reports Chris Strohm for Bloomberg. “Mueller’s team of prosecutors and FBI agents is zeroing in on how Russia spread fake and damaging information through social media and is seeking additional evidence from companies like Facebook and Twitter about what happened on their networks.”

“The year’s most consequential storylines have collided,” writes Mike Allen in Axios. “Mueller’s investigation, based on people he’s interviewing and questions he’s asking, could very well expose in vivid detail not only Russia’s influence in the election, and sketchy if not illegal behavior by Trump associates, but also how Facebook, Twitter and social media helped facilitate a lot of it.”

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

PART OF A SERIES     Real News About Fake News