The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
Theoretically, our findings shed new light on the perspective that inattention plays an important role in the sharing of misinformation online. By demonstrating the role of inattention in the context of Covid-19 misinformation (rather than politics), our results suggest that partisanship is not, apparently, the key factor distracting people from considering accuracy on social media. Instead, the tendency to be distracted from accuracy on social media seems more general. Thus, it seems likely that people are being distracted from accuracy by more fundamental aspects of the social media context. For example, social media platforms provide immediate, quantified feedback on the level of approval from one’s social connections (e.g., “likes” on Facebook). Thus, attention may by default be focused on other factors, such as concerns about social validation and reinforcement (e.g., Brady, Crockett, & Van Bavel, 2020; Crockett, 2017) rather than accuracy. Another possibility is that because news content is intermixed with content in which accuracy is not relevant (e.g., baby photos, animal videos), people may habituate to a lower level of accuracy consideration when in the social media context. The finding that people are inattentive to accuracy even when making judgments about sharing content related to a global pandemic raises important questions about the nature of the social media ecosystem.
We think that the social media context may distract people from thinking about accuracy. If so, even subtle nudges (or "primes", choose your buzzword) that remind people about accuracy should improve the quality of news content that people share on social media.
— Gordon Pennycook (@GordPennycook) June 8, 2020
I.e., the extent to which people share relatively more true than false COVID-19 content increases if they are subtly prompted to consider accuracy. (Indeed, the difference between true and false more than doubled.)
— Gordon Pennycook (@GordPennycook) June 8, 2020
We need to change the way that people interact with social media. But, until that big change happens, we can at least remind people to consider whether something is true before they share it. We’re working now on trying to optimize this (e.g., to avoid “banner blindness”).
— Gordon Pennycook (@GordPennycook) June 8, 2020
A lot of wasted time. First Draft took a look at misinformation around the protests inspired by the killing of George Floyd. One thing they found: Wasted time by news outlets investigating crap.
After all of our social media monitoring during the protests, it is not possible to blame the “outside agitator” narrative on one bad actor. Our analysis is still ongoing, but as with any moment of shared online attention, bots and sock puppet accounts were very likely to have been pushing out content related to those narratives of protest infiltration. And journalistic mistakes were made: There are examples of outlets poorly framing or mis-contextualizing rumors, giving “outside actors” more legitimacy than the evidence indicated. But identifying insidious networks and media missteps is futile without a simultaneous examination of how our current information landscape is so easily influenced by these disturbances.
Social media platforms and their algorithms, editorial decision making, and determinations about what to post and share on an individual level all contribute to the visibility of certain narratives, and they work unintentionally in synchrony — often with undesirable results. For example, news outlets used valuable resources investigating a 75-year-old police brutality victim’s ties — or lack thereof — to “antifa,” thanks in large part to the promotion of this false rumor by President Donald Trump. This is just one example from the protests of news outlets spending many hours investigating and debunking claims from politicians, police authorities, and video evidence from the streets. When newsrooms, particularly local newsrooms where staff are being laid off and furloughed, are focused on this type of work, they are less able to focus on stories that reflect the experiences and needs of their communities. And yet it is difficult to argue that topics exploding on social media are not newsworthy. The feedback loop between social media and traditional media is broken, and the protests show how damaging that has become.
An analysis of the most-shared Covid-19 misinformation in Europe. Fact-checking organizations (AFP, Correctiv, Pagella Politica/Facta, Maldita.es, and Full Fact) across five European countries analyzed which false Covid-19 stories they’ve spent the most effort debunking. The result is an interesting report with some cool graphics.
These were a few of the most common misinformation themes:
1. Cures and remedies: “Perhaps the most consistent topic of misinformation was misleading medical advice around supposed cures or remedies for Covid-19.”
2. 5G misinformation:
The belief that Covid-19 is caused (or made worse, or helped to spread) by 5G cellphone technology was common across all five countries, although it was especially common in Italy and the UK. The claims varied quite a lot, both between countries and within them, from general claims that 5G was behind the disease (examples can be seen in Spain, France, Italy, Germany, and the UK) to specific claims around a video of a phone mast being destroyed (which was seen in Italy and Germany) and broader claims that fold 5G in with other conspiracy theories (as seen in France, Spain, and Italy).
3. Avoiding or preventing infection:
Related to, but distinct from, medical advice around cures and remedies for people who may have been infected, is medical advice on how to avoid infection in the first place. This often took the form of lengthy lists of advice that blended accurate or partially accurate information with unsound medical advice — very similar lists were seen in Spain and France, while a different list appeared in the UK. One common theme among many of these was that warm temperatures would kill the virus, as also seen in Germany and Italy. This advice was often falsely attributed to some authority, for example Johns Hopkins University, Unicef, or simply medical professionals.
Interestingly, there were also categories of misinformation that spread widely in some countries, but not others:
Some themes that were especially common in one country were notable for being either entirely absent or far less common in all other countries. The UK had a large number of unverified claims about pets and how the outbreak was affecting them. Germany saw several claims about migrants, including that they were secretly being allowed into the country under the cover of lockdown. Claims of chemically impregnated masks being used by robbers to incapacitate their victims were seen in Spain and Germany, but not widely shared elsewhere. And Spain saw a large number of scams and hoaxes related to technology.
It’s interesting to note that in Spain, false claims circulated that users’ WhatsApp activity would somehow be monitored or censored. That misinformation could easily have reached the other countries, since WhatsApp is commonly used there as well. However, it didn’t, and the other countries never had to deal with this topic.