Nieman Foundation at Harvard
Sept. 22, 2022, 9:22 a.m.
Audience & Social

U.S. politicians tweet much more misinformation than those in the U.K. and Germany

“We also found systematic differences between the parties in the U.S., where Republican politicians were found to share untrustworthy websites more than nine times as often as Democratic politicians.”

Building on earlier work that showed how former U.S. president Donald Trump could set the political agenda using Twitter, we conducted a systematic examination of the accuracy of the tweets of politicians in three countries: the U.S., the U.K., and Germany.

Along with colleagues David Garcia, Fabio Carrella, Almog Simchon, and Segun Aroyehun, we collected all available tweets from former and current members of the U.S. Congress, the German parliament, and the British parliament. Combined, we collected more than 3 million tweets posted from 2016 to 2022.

Politicians from mainstream parties in the U.K. and Germany post few links to untrustworthy websites on Twitter, and this has remained constant since 2016, according to our new research. By contrast, U.S. politicians post a much higher percentage of untrustworthy content in their tweets, and that share has been increasing steeply since 2020.

We also found systematic differences between the parties in the U.S., where Republican politicians were found to share untrustworthy websites more than nine times as often as Democratic politicians.

For Republican politicians, around 4% of links overall (one in 25) came from untrustworthy sites, compared with around 0.4% (one in 250) for Democratic politicians. That gap has widened in the last few years: since 2020, more than 5% of tweets from Republican members of Congress have contained links to untrustworthy information. Democratic politicians, we found, predominantly share information that is trustworthy.

Over the five-year period we studied, mainstream elected U.K. members of parliament shared only 74 links to misinformation (0.01% of all their tweets), compared with 4,789 (1.8%) from elected mainstream U.S. politicians and 812 (1.3%) from German politicians.

To determine the trustworthiness of information shared by the politicians, we extracted all links to external websites contained in the tweets and then used the NewsGuard database to assess the trustworthiness of the domain being linked to. NewsGuard curates a large number of sites in numerous different countries and languages and evaluates them along nine criteria that characterize responsible journalism — for example, whether a site publishes corrections and whether it differentiates between opinion and news.
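The pipeline described above — pull the links out of each tweet, reduce them to a domain, and look the domain up in a ratings database — can be sketched in a few lines of Python. The `TRUST_SCORES` table and the cutoff of 60 are illustrative stand-ins for the NewsGuard data, which is proprietary; the domain names here are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical trust ratings keyed by domain, standing in for the
# NewsGuard database used in the study. Scores are illustrative only.
TRUST_SCORES = {
    "example-news.com": 87.5,
    "dubious-site.net": 20.0,
}

# Assumed cutoff separating trustworthy from untrustworthy domains.
TRUST_THRESHOLD = 60.0

def domain_of(url):
    """Extract the host from a link, dropping any leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def classify_links(urls):
    """Label each linked URL as trustworthy, untrustworthy, or unrated."""
    results = {}
    for url in urls:
        score = TRUST_SCORES.get(domain_of(url))
        if score is None:
            results[url] = "unrated"
        elif score >= TRUST_THRESHOLD:
            results[url] = "trustworthy"
        else:
            results[url] = "untrustworthy"
    return results

links = [
    "https://www.example-news.com/story",
    "http://dubious-site.net/article",
]
print(classify_links(links))
```

From counts of links labeled this way, the per-party shares reported below follow by simple division (untrustworthy links over all rated links).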

Our team looked at members of parliament from the U.K.’s Conservative and Labour parties and from Germany (Greens, SPD, FDP, CDU/CSU), as well as U.S. members of Congress.

Members of the conservative parties in Germany (CDU/CSU) and the U.K. (Conservatives) shared links to untrustworthy websites more frequently than their counterparts in the center or center-left. Even so, conservative parliamentarians in Europe were more accurate than U.S. Democrats, with only around 0.2% of links (one in 500) from European conservatives being untrustworthy.

We repeated our analyses using a second database of news website trustworthiness instead of NewsGuard. This robustness check was important to minimize the risk of possible partisan bias in what is considered “untrustworthy.”

The second database was compiled from ratings by academics and fact-checking organizations such as Media Bias/Fact Check. Reassuringly, the results matched our primary analyses, showing the same trends.

The world has been awash with concern about the state of our political discourse for many years now. There is ample justification for this concern, given that 30% to 40% of Americans believe the baseless claim that the presidential election of 2020 was “stolen” by President Biden, and that around 10% of the British public believes in at least one conspiracy theory surrounding Covid-19.

Much of the discussion of the misinformation problem — and much of the blame — has focused on social media, and in particular the algorithms that curate our news feeds and that may nudge us toward more extreme and outrage-provoking content. There is now considerable evidence that social media has been harmful to democracy in at least some countries.

However, social media is not the only source of the misinformation problem. Donald Trump made more than 30,000 false or misleading claims during his presidency, and there are political leaders in Europe with a similarly poor track record on accuracy.

Yet compared with the plethora of research on the role of social media, and on the relationship between technology and democracy more generally, there have been few attempts to systematically characterize the role of political leaders in the dissemination of low-quality information.

Our results are interesting in light of several recent analyses of the American public’s news diet, which have repeatedly shown that conservatives are more likely to encounter and share untrustworthy information than liberals. To date, the origins of that difference have remained disputed.

Our results contribute to a potential explanation if we assume that what politicians say sets the agenda and resonates with members of the public. By sharing misinformation, Republican members of Congress not only directly provide misinformation to their followers, but also legitimize the sharing of untrustworthy information more generally.

Stephan Lewandowsky is chair of cognitive psychology at the University of Bristol in the U.K. Jana Lasser is a postdoctoral researcher at Graz University of Technology in Austria. This article is republished from The Conversation under a Creative Commons license.
