Nieman Foundation at Harvard
Oct. 27, 2017, 8:39 a.m.
Audience & Social

When fake news is funny (or “funny”), is it harder to get people to stop sharing it?

Plus: Platforms scramble to do something about shady political ads before Congressional hearings start, and is fake news better thought of as “disinformation advertising”?

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Psychology says: Facebook fact-checks don’t work very well. New research from Yale’s Gordon Pennycook and David Rand (they of the memorable “bullshit receptivity” research) found that Facebook’s “disputed” labels are likely to backfire in many cases. Brendan Nyhan writes up some of their recent work for The New York Times and brings in his own Dartmouth research:

Mr. Pennycook and Mr. Rand find that the presence of “disputed” labels causes study participants to rate unlabeled false stories as slightly more accurate — an “implied truth” effect. If Facebook is seen as taking responsibility for the accuracy of information in its news feed through labeling, readers could start assuming that unlabeled stories have survived scrutiny from fact checkers (which is rarely correct — there are far too many for humans to check everything).

Encouragingly, my students at Dartmouth College and I find that the effects of Facebook-style “disputed” banners on the perceived accuracy of false headlines are larger than those Mr. Pennycook and Mr. Rand observed. The proportion of respondents rating a false headline as “somewhat” or “very accurate” in our study decreased to 19 percent with the standard Facebook “disputed” banner, from 29 percent in the unlabeled condition. It goes down even further, to 16 percent, when the warning instead states that the headline is “rated false.” (We find no evidence of a large “implied truth” effect, though we lack the statistical precision necessary to detect effects of the size Mr. Pennycook and Mr. Rand measure.)

Speaking at Harvard this week to a group of mostly psychology peeps, Pennycook said he thinks that most people simply don’t consider accuracy before they share news. In his and Rand’s research, they found that simply asking people a question about accuracy before they considered which fake news articles they would share — subjects were asked to rate the accuracy of one politically neutral story — led to a reduction in fake news sharing among both Clinton and Trump supporters, and even among people who said they didn’t consider accuracy to be an important prerequisite for sharing.

This should be a pretty easy social media intervention, Pennycook said: It doesn’t rely on fact checkers, it doesn’t require anyone to “centralize the definition of ‘true,’” and it’s essentially free. But Facebook would have to roll it out in some palatable way, possibly by randomly presenting the question as a pop-up or in the News Feed and saying it was to make Facebook’s rankings better (or something) rather than an intervention explicitly designed to stop the sharing of fake news.

Fiery Cushman, associate professor of psychology at Harvard and head of the Moral Psychology Research Lab, pointed out that people may be sharing fake news simply because they find it funny, whether or not it is accurate: The room chuckled, for instance, when Pennycook pulled up a screenshot of a fake news story claiming that Mike Pence credited gay conversion therapy with saving his marriage. Is there a way to get people not to share fake news that they think is funny, Cushman wondered, and would that require a different kind of intervention than getting them not to share fake news because it’s inaccurate?

A case for the U.S. government to regulate fake news. “Fake news is native advertising, or ‘disinformation advertising.’ Despite strong First Amendment protection of political speech, government can (and should) act.” That’s Abby Wood, associate professor of law, political science, and public policy at the University of Southern California. Wood, along with Ann Ravel and Irina Dykhne, has proposed regulations in a whitepaper. Some of these, such as a central ad archive, are similar to the guidelines in the bipartisan Honest Ads Act legislation introduced last week.

Platforms: “Wait, we’re doing something, we promise.” Twitter announced that it will start labeling all political ads, including who bought them and how they’re targeted, and will “launch an industry-leading transparency center that will offer everyone visibility into who is advertising on Twitter, details behind those ads, and tools to share your feedback with us.”

Facebook, in September, had said that it would make it easier for users to see who is behind political ads.

This all comes ahead of the November 1 congressional hearings that will bring Facebook, Twitter, and Google reps in front of the Senate and House intelligence committees to talk about the role they played in spreading misinformation during the 2016 election. Here are the companies’ general counsels who will appear, if you care. (It won’t be Mark Zuckerberg or Jack Dorsey getting grilled.)

Issie Lapowsky writes in Wired:

Providing more transparency around ads could help with some, but not all, of the ways that Russia is believed to have meddled in the 2016 election. Russian-linked actors also posted fake news articles on Facebook, deployed carefully crafted hashtag campaigns that mobilized bots with misinformation on Twitter, and used Google ads to help finance the whole operation. The campaign relied in large part on ordinary Americans, who believed in what they were reading and who chose to pass that information on to their social networks.

Cracking down on ads may make it harder for bad actors to instantly spread their messages far and wide. But it’s the viral spread of misleading or divisive information from anywhere in the world that these tech companies still need to address.

Twitter’s announcement is “no substitute for updating our laws and passing the Honest Ads Act…. If Twitter is an advocate for this type of transparency and accountability, I look forward to its support of my bipartisan legislation,” U.S. Senator Amy Klobuchar (D-Minn.), the lead author of the Honest Ads Act, said in a statement to Wired.

Speaking of platforms doing things, Reddit shut down a bunch of Nazi and white supremacist boards.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

PART OF A SERIES     Real News About Fake News