Feb. 5, 2021, 12:25 p.m.
Audience & Social

When’s the best time to correct fake news? After someone’s already read it, apparently

Plus: A thorough report on why social media is not biased against conservatives, and TikTok takes new steps to reduce the spread of unverified videos.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Debunking > prebunking. If you want someone to not believe that false or misleading headline they just read, when’s the best time to correct it? We hear a lot about inoculating people against fake news or “prebunking” it, but new research shows that the best time to fact-check a false headline — and have subjects remember the fact-check a week later — is after the subject has already read the headline.

Nadia Brashier, Gordon Pennycook, Adam J. Berinsky, and David G. Rand write in PNAS:

[The] key challenge is making corrections memorable. Debunking was more effective than labeling, emphasizing the power of feedback in boosting memory. […]

Ideally, people would not see misinformation in the first place, since even a single exposure to a fake headline makes it seem truer. Moreover, professional fact-checkers only flag a small fraction of false content, but tagging some stories as “false” might lead readers to assume that unlabeled stories are accurate (implied truth effect; ref. 21). These practical limitations notwithstanding, our results emphasize the surprising value of debunking fake news after exposure, with important implications for the fight against misinformation.

The next time someone claims social media is biased against conservatives… Show them this new report from NYU! Surely that will work. Anyway, the report systematically dismantles the claim that social media is biased against conservatives. It provides an overview of the claims and who’s made them, looks at available data “showing that conservatives enjoy a prominent place on major social media platforms — a situation unlikely to be true if conservatives were being systematically suppressed,” and offers a series of recommendations for the platforms and the Biden administration as they deal with such claims.

Here are a couple of the recommendations, which certainly won’t stop conservative media figures from making specious claims about censorship but might be good ideas anyway.

Provide greater disclosure for content moderation actions. The platforms should give an easily understood explanation every time they sanction a post or account, as well as a readily available means to appeal enforcement actions. Greater transparency — such as that which Twitter and Facebook offered when they took action against former President Trump in January — would help to defuse claims of political bias, while clarifying the boundaries of acceptable user conduct.

Typically, platforms don’t provide much justification for why a given post or account is sanctioned. What’s more, obscure rules sometimes produce perplexing results. Left in the dark, some users and onlookers assume the worst, including ideological censorship. In 2020, conservatives protested when Twitter flagged Trump for glorifying violence but let stand without comment tweets by Ayatollah Ali Khamenei, Iran’s supreme leader, threatening Israel with annihilation. Under outside pressure, Twitter eventually explained that Khamenei’s menacing declarations fell under an exception permitting world leaders to engage in “saber rattling.” This episode didn’t show anti-conservative animus, but it did point to a need for Twitter to rethink its rules for world leaders and how it publicly explains application of those and other rules.

Offer users a choice among content moderation algorithms. To enhance user agency, platforms should offer a menu of choices among algorithms. Under this system, each user would be given the option of retaining the existing moderation algorithm or choosing one that screens out harmful content more vigorously. The latter option also would provide enhanced engagement by human moderators operating under more restrictive policies. If users had the ability to select from among several systems, they would be empowered to choose an algorithm that more closely reflects their values and preferences. There would be another potential benefit, as well: By revealing at least some of the ways that currently secret algorithms work, this approach could give users a partial peek inside the “black box” of social media, alleviating concerns about hidden platform prejudices.
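To make the report’s “menu of algorithms” idea a little more concrete, here is a minimal sketch of what user-selectable moderation might look like. It is purely illustrative: the policy names, harm scores, and thresholds are invented for this example and aren’t drawn from the NYU report or any platform’s actual system.

    from dataclasses import dataclass
    from enum import Enum

    class ModerationPolicy(Enum):
        # Hypothetical user-selectable levels; the names are invented for this sketch.
        STANDARD = "standard"  # the platform's existing default algorithm
        STRICT = "strict"      # screens out harmful content more vigorously

    @dataclass
    class Post:
        text: str
        harm_score: float  # 0.0 (benign) to 1.0 (clearly harmful), from an assumed upstream classifier

    # Per-policy visibility thresholds: the stricter policy hides more borderline content.
    THRESHOLDS = {
        ModerationPolicy.STANDARD: 0.9,
        ModerationPolicy.STRICT: 0.6,
    }

    def visible_feed(feed: list[Post], policy: ModerationPolicy) -> list[Post]:
        """Return only the posts the user's chosen policy allows through."""
        return [post for post in feed if post.harm_score < THRESHOLDS[policy]]

The design point is the second argument: the filter applied to a user’s feed becomes a choice the user makes explicitly, rather than one the platform makes invisibly, which is what the report suggests could ease suspicion about the “black box.”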

[Related: I’m being censored, and you can read, hear, and see me talk about it in the news, on the radio, and on TV.]

TikTok addresses unverifiable videos. TikTok, which already works with fact-checkers, is adding new prompts to help prevent people from sharing misleading content. “A viewer will see a banner on a video if the content has been reviewed but cannot be conclusively validated,” the company announced, and viewers will be reminded that the content hasn’t been verified before they can share it. “When we tested this approach,” the company said in a blog post, “we saw viewers decrease the rate at which they shared videos by 24%, while likes on such unsubstantiated content also decreased by 7%.”
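The mechanics TikTok describes amount to a simple gate in front of the share action. The sketch below is an outside guess at that flow; the function and state names are invented here, not TikTok’s actual code or API.

    from enum import Enum
    from typing import Callable, Optional

    class ReviewStatus(Enum):
        NOT_REVIEWED = "not_reviewed"
        VERIFIED = "verified"
        UNSUBSTANTIATED = "unsubstantiated"  # reviewed, but couldn't be conclusively validated

    def banner_text(status: ReviewStatus) -> Optional[str]:
        # The banner appears only on reviewed-but-unvalidated videos.
        if status is ReviewStatus.UNSUBSTANTIATED:
            return "Caution: this video has been flagged for unverified content."
        return None

    def try_share(status: ReviewStatus, confirm: Callable[[str], bool]) -> bool:
        # Before a flagged video is shared, remind the viewer and ask them to confirm.
        if status is ReviewStatus.UNSUBSTANTIATED:
            return confirm("This video hasn't been verified. Share anyway?")
        return True

By TikTok’s own test numbers above, even this light friction cut sharing of flagged videos by roughly a quarter.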

Photo of a clock by Ocean Ng on Unsplash.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura@niemanlab.org) or Bluesky DM.
PART OF A SERIES     Real News About Fake News