May 19, 2021, 11:16 a.m.

Someone *wrong* on the internet? Correcting them publicly may make them act like a bigger jerk

After getting replies that debunked false political news they’d shared, users were more likely to share low-quality news.

You see a bit of fake news on Twitter. Should you debunk it? Why not, right?

Fact-checkers and researchers have looked at the impact of debunking on the belief in the false claim — and have found little evidence that issuing a correction could backfire, though debates continue. A new paper from Mohsen Mosleh, Cameron Martel, Dean Eckles, and David Rand, however, takes a look at the effect of debunking on subsequent behavior.

Would being publicly corrected reduce a user’s tendency to share fake news? Maybe prompt them to tweet with a little more civility? The results were not encouraging.

In the 24 hours after being corrected, Twitter users who received a reply debunking a claim made in one of their posts posted more content from disreputable sources. There was also a significant uptick in the partisan slant and toxicity of their subsequent posts.

Here’s how the field experiment worked. The researchers created a fleet of human-impersonating bots and waited until each account had amassed 1,000 followers and was at least three months old. Then the accounts began to issue corrections by dropping Snopes links in replies to tweets with false information. (The bots were literally reply guys; all were styled as white men “since a majority of our subjects were also white men.”)

All told, about 1,500 debunking replies were made.

Some of the fake news targeted for correction? “A photograph of U.S. President Donald Trump in his Trump Tower office in 2016 with several boxes of Sudafed in the background provides credible evidence of stimulant abuse” and “Virginia Gov. Ralph Northam said the National Guard would cut power and communications before killing anyone who didn’t comply with new gun legislation.” Both claims have been debunked by Snopes.

The debunking was public, but fairly gentle. (“I’m uncertain about this article — it might not be true. I found a link on Snopes that says this headline is false.”) The replies also came late. Corrections were delivered, on average, 81 days after the original post.

For 24 hours after the public correction, users shared more news from sources identified by professional fact-checkers as low-quality. The decrease in news quality was small — like 1 to 1.4% small — but statistically significant. Being corrected also increased the partisan slant in subsequent tweets and significantly increased “language toxicity.”

Researchers found that retweeted content, in particular, suffered. The negative effects of a public debunking were less pronounced in “primary tweets” (those composed by the users themselves) than in content merely shared or retweeted without comment.

The results were surprising. A previous experiment had found that nudging Twitter users to consider the accuracy of a headline improved the quality of the news they shared.

So what gives? The researchers have a few theories. Because the effects were stronger for retweeted material — as compared to primary tweets — the authors suggest that users just weren’t paying as close attention to content they merely shared.

The method of debunking — a public reply, rather than a private message sent via DM — may have played a role, too. Being called out on a specific tweet may have prompted a more emotional response than a subtle nudge about accuracy more generally. Here’s what the researchers suggest the difference is between the two field experiments on Twitter:

A private message asking users to consider the accuracy of a benign (politically neutral) third-party post, sent from an account that explicitly identified itself as a bot, increased the quality of subsequently retweeted news links; and further survey experiments support the interpretation that this is the result of attention being directed towards the concept of accuracy. This is in stark contrast to the results that we observe here. It seems likely that the key difference in our setup is that being publicly corrected by another user about one’s own past post is a much more emotional, confrontational, and social interaction than the subtle accuracy prime.

The public nature of this more recent experiment, the researchers argue, could have shifted the users’ attention to social dynamics like embarrassment, indignation over self-expression or partisanship, and their relationship with the “person” issuing the correction. In the battle for users’ attention, the social considerations won.

Twitter has experimented with prompting users to read articles before sharing and to reconsider replying with hostile language. There’s more research to be done, but this experiment suggests public corrections may not be as effective as other nudges toward accuracy and civility.

“Overall, our findings raise questions about potentially serious limits on the overall effectiveness of social corrections,” the researchers conclude. “Before social media companies encourage users to correct misinformation that they observe on-platform, detailed quantitative work and normative reflection is needed to determine whether such behavior is indeed overall beneficial.”

Photo by Claudio Schwarz used under a Creative Commons license.

Sarah Scire is deputy editor of Nieman Lab. You can reach her via email (sarah_scire@harvard.edu), Twitter DM (@SarahScire), or Signal (+1 617-299-1821).