Nieman Foundation at Harvard
Jan. 5, 2021, 8:39 a.m.
Audience & Social

How to reduce the spread of fake news — by doing nothing

By arguing with a message, you are spreading it further. This matters, because if more people see it, or see it more often, it will have an even greater effect.

When we come across false information on social media, it is only natural to feel the need to call it out or argue with it. But my research suggests this might do more harm than good. It might seem counterintuitive, but the best way to react to fake news — and reduce its impact — may be to do nothing at all.

False information on social media is a big problem. A UK parliament committee said online misinformation was a threat to “the very fabric of our democracy.” It can exploit and exacerbate divisions in society. There are many examples of it leading to social unrest and inciting violence, for example in Myanmar and the United States.

It has often been used to try to influence political processes. One recent report found evidence of organized social media manipulation campaigns in 48 different countries, including the United States and United Kingdom.

Social media users also regularly encounter harmful misinformation about vaccines and virus outbreaks. This is particularly important with the roll-out of Covid-19 vaccines because the spread of false information online may discourage people from getting vaccinated — making it a life or death matter.

With such serious consequences in mind, it is tempting to comment on false information when it's posted online, pointing out that it is untrue or that we disagree with it. Why would that be a bad thing?

Increasing visibility

The simple fact is that engaging with false information increases the likelihood that other people will see it. If people comment on it or quote-tweet it, even to disagree, the material is shared to their own networks of social media friends and followers.

Any kind of interaction at all — whether clicking on the link or reacting with an angry face emoji — will make it more likely that the social media platform will show the material to other people. In this way, false information can spread far and fast. So even by arguing with a message, you are spreading it further. This matters, because if more people see it, or see it more often, it will have an even greater effect.

I recently completed a series of experiments with a total of 2,634 participants looking at why people share false material online. In these, people were shown examples of false information under different conditions and asked if they would be likely to share it. They were also asked about whether they had shared false information online in the past.

Some of the findings weren’t particularly surprising. For example, people were more likely to share things they thought were true or were consistent with their beliefs.

But two things stood out. The first was that some people had deliberately shared political information online that they knew at the time was untrue. There may be different reasons for doing this (trying to debunk it, for instance). The second was that people rated themselves as more likely to share material if they thought they had seen it before. The implication is that prior exposure to a piece of false information makes you more likely to share it when you encounter it again.

Dangerous repetition

It has been well established by numerous studies that the more often people see pieces of information, the more likely they are to think they are true. A common maxim of propaganda is that if you repeat a lie often enough, it becomes the truth.

This extends to false information online. A 2018 study found that when people repeatedly saw false headlines on social media, they rated them as being more accurate. This was even the case when the headlines were flagged as being disputed by fact checkers. Other research has shown that repeatedly encountering false information makes people think it is less unethical to spread it (even if they know it is not true, and don’t believe it).

So to reduce the effects of false information, people should try to reduce its visibility. Everyone should try to avoid spreading false messages. That means that social media companies should consider removing false information completely, rather than just attaching a warning label. And it means that the best thing individual social media users can do is not to engage with false information at all.

Tom Buchanan is a professor of psychology at the University of Westminster. This article is republished from The Conversation under a Creative Commons license.

Depiction of a black hole by The European Southern Observatory used under a Creative Commons license.
