The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
Presenting corrections after and during exposure to false headlines decreased belief one week later. While all three treatments increased belief in true headlines one week later, supplying corrections after exposure was most effective. 4/10 pic.twitter.com/pFqNQvbM9O
— Nadia Brashier (@nadiabrashier) January 26, 2021
Nadia Brashier, Gordon Pennycook, Adam J. Berinsky, and David G. Rand write in PNAS:
[The] key challenge is making corrections memorable. Debunking was more effective than labeling, emphasizing the power of feedback in boosting memory. […] Ideally, people would not see misinformation in the first place, since even a single exposure to a fake headline makes it seem truer. Moreover, professional fact-checkers only flag a small fraction of false content, but tagging some stories as “false” might lead readers to assume that unlabeled stories are accurate (implied truth effect; ref. 21). These practical limitations notwithstanding, our results emphasize the surprising value of debunking fake news after exposure, with important implications for the fight against misinformation.
The next time someone claims social media is biased against conservatives… Show them this new report from NYU! Surely that will work. Anyway, this new report from NYU systematically dismantles the claim that social media is biased against conservatives. It provides an overview of the claims and who’s made them, looks at available data “showing that conservatives enjoy a prominent place on major social media platforms — a situation unlikely to be true if conservatives were being systematically suppressed,” and offers a series of recommendations for the platforms and the Biden administration as they deal with such claims.
Here are a couple of the recommendations, which certainly won’t stop conservative media figures from making specious claims about censorship but might be good ideas anyway.
Provide greater disclosure for content moderation actions. The platforms should give an easily understood explanation every time they sanction a post or account, as well as a readily available means to appeal enforcement actions. Greater transparency — such as that which Twitter and Facebook offered when they took action against former President Trump in January — would help to defuse claims of political bias, while clarifying the boundaries of acceptable user conduct.

Typically, platforms don’t provide much justification for why a given post or account is sanctioned. What’s more, obscure rules sometimes produce perplexing results. Left in the dark, some users and onlookers assume the worst, including ideological censorship. In 2020, conservatives protested when Twitter flagged Trump for glorifying violence but let stand without comment tweets by Ayatollah Ali Khamenei, Iran’s supreme leader, threatening Israel with annihilation. Under outside pressure, Twitter eventually explained that Khamenei’s menacing declarations fell under an exception permitting world leaders to engage in “saber rattling.” This episode didn’t show anti-conservative animus, but it did point to a need for Twitter to rethink its rules for world leaders and how it publicly explains application of those and other rules.
Offer users a choice among content moderation algorithms. To enhance user agency, platforms should offer a menu of choices among algorithms. Under this system, each user would be given the option of retaining the existing moderation algorithm or choosing one that screens out harmful content more vigorously. The latter option also would provide enhanced engagement by human moderators operating under more restrictive policies. If users had the ability to select from among several systems, they would be empowered to choose an algorithm that more closely reflects their values and preferences. There would be another potential benefit, as well: By revealing at least some of the ways that currently secret algorithms work, this approach could give users a partial peek inside the “black box” of social media, alleviating concerns about hidden platform prejudices.
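To make the “menu of algorithms” idea a bit more concrete, here’s a minimal sketch (not from the report; the names, scores, and thresholds below are invented) of how a user-chosen policy could simply be a stricter cutoff applied to the same harm score a platform already computes:

```python
# Illustrative only, not from the NYU report: it imagines the "menu of
# moderation algorithms" as nothing more than different strictness thresholds
# applied to a harm score some upstream classifier already produces.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    harm_score: float  # hypothetical 0-1 score from an upstream classifier

# Hypothetical policy menu: each entry is the maximum harm score a user tolerates.
MODERATION_POLICIES = {
    "standard": 0.8,   # roughly today's default behavior
    "strict": 0.5,     # screens out harmful content more vigorously
    "strictest": 0.3,  # borderline content is held back (or routed to human review)
}

def filter_feed(posts: list[Post], policy: str) -> list[Post]:
    """Keep only posts whose harm score falls under the user's chosen threshold."""
    threshold = MODERATION_POLICIES[policy]
    return [p for p in posts if p.harm_score <= threshold]

feed = [Post("benign update", 0.1), Post("borderline insult", 0.6), Post("threat", 0.9)]
print([p.text for p in filter_feed(feed, "strict")])  # -> ['benign update']
```

In practice, the stricter tiers would also route more borderline content to human moderators, as the report suggests, but the core mechanism is just letting users pick the threshold that matches their preferences.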
[Related: I’m being censored, and you can read, hear, and see me talk about it in the news, on the radio, and on TV.]
TikTok addresses unverifiable videos. TikTok, which already works with fact-checkers, is adding new prompts to help prevent people from sharing misleading content. “A viewer will see a banner on a video if the content has been reviewed but cannot be conclusively validated,” the company announced, and viewers will be reminded that the content hasn’t been verified before they can share it. “When we tested this approach,” the company said in a blog post, “we saw viewers decrease the rate at which they shared videos by 24%, while likes on such unsubstantiated content also decreased by 7%.”
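For what it’s worth, here’s a rough sketch of the flow TikTok describes (this isn’t TikTok’s code; every field and function name below is invented for illustration): reviewed-but-unverifiable videos get a banner, and sharing one triggers an extra confirmation step.

```python
# Illustrative only: TikTok hasn't published its implementation, so the field
# and function names here are invented. The sketch just mirrors the described
# flow: banner reviewed-but-unverifiable videos, then ask before sharing them.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Video:
    title: str
    fact_checked: bool  # a fact-checking partner has reviewed the video
    verified: bool      # the review reached a conclusive result

def banner_for(video: Video) -> Optional[str]:
    """Banner shown on reviewed content that couldn't be conclusively validated."""
    if video.fact_checked and not video.verified:
        return "Caution: this video has been reviewed but can't be conclusively validated."
    return None

def share(video: Video, confirm: Callable[[str], bool]) -> bool:
    """Gate sharing of unverified content behind an extra confirmation prompt."""
    if banner_for(video) is not None:
        return confirm("This content hasn't been verified. Share anyway?")
    return True  # verified or unreviewed videos share as usual

clip = Video("miracle cure?", fact_checked=True, verified=False)
print(banner_for(clip))                        # banner text appears on the video
print(share(clip, confirm=lambda msg: False))  # viewer declines -> False, not shared
```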