May 10, 2019, 8:46 a.m.
Audience & Social

Black female gun owners, moderate Republicans, and Jewish Americans are among the groups that may be particular targets of misinformation in 2020

Plus: “Passive misinformation” is a problem for The Hill and other mainstream media outlets, and a closer look at some of the research projects Facebook is opening its data up to.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“Passive misinformation” is a problem for The Hill and other mainstream media outlets. The liberal group Media Matters studied how news organizations handle misleading claims and lies from Trump. “Passive misinformation is a problem for outlets across the board,” the study found, after reviewing more than 54,000 tweets sent between 12 a.m. EST on January 26 and 12 a.m. EST on February 16 from the Twitter feeds of U.S. wire services; major broadcast, cable, and radio networks; national newspapers; and Capitol Hill newspapers and digital outlets that cover Congress and the White House. That sample was then narrowed down to about 2,000 tweets referencing comments Trump made:

Media outlets put a great deal of focus on Trump’s comments — roughly one out of every five tweets mentioning Trump was about a particular quote. We found that this content strategy leaves outlets vulnerable to passing on the president’s misinformation, as 30% of those Trump quotes contained a false or misleading claim.

News outlets can report on Trump’s falsehoods without misleading their audience if they take the time to fact-check his statements within the body of their tweets. But we found that this isn’t happening consistently — in nearly two-thirds of tweets referencing false or misleading Trump claims, the media outlets did not dispute Trump’s misinformation.

All told, the Twitter feeds we studied promoted false or misleading Trump claims without disputing them in 407 tweets over a three-week period — an average of 19 undisputed false claims published each day.

The worst offender? The Hill.

The Twitter feed of The Hill, which has 3.25 million followers, was by far the worst offender we reviewed, producing more than 40 percent of the tweets that pushed Trump’s misinformation without context over the entire study. It promoted Trump’s falsehoods without disputing them 175 times — an average of more than eight per day. These numbers are so high in part because the outlet tweets about Trump far more frequently than other outlets, generating about a quarter of the total data. That high volume led to the outlet tweeting about false or misleading Trump claims 200 times. The feed rarely disputes the Trump claims it tweets about, instead simply passing along the misinformation 88 percent of the time. The Hill also frequently resends the same tweet at regular intervals, not only amplifying his falsehoods, but also making it more likely that the misinformation will stick with its audience through the power of repetition.
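To make those tallies concrete, here is a minimal Python sketch of the kind of filter-and-count pipeline the study describes. The field names, sample records, and 2019 dates are assumptions for illustration; this is not Media Matters’ actual code or data.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Tweet:
    handle: str           # e.g. "thehill" (hypothetical field names)
    sent_at: datetime
    quotes_trump: bool    # tweet references a Trump quote
    claim_is_false: bool  # the quoted claim is false or misleading
    disputed: bool        # the tweet itself disputes the claim

# Tiny invented sample; the real study drew more than 54,000 tweets from
# wire services, networks, national newspapers, and Capitol Hill outlets.
tweets = [
    Tweet("thehill", datetime(2019, 2, 1, 9, 30), True, True, False),
    Tweet("thehill", datetime(2019, 2, 1, 12, 0), True, False, False),
    Tweet("AP", datetime(2019, 2, 2, 8, 15), True, True, True),
]

# The study window: 12 a.m. EST January 26 to 12 a.m. EST February 16
# (presumably 2019), a 21-day span.
start, end = datetime(2019, 1, 26), datetime(2019, 2, 16)
study_days = (end - start).days

in_window = [t for t in tweets if start <= t.sent_at < end]
trump_quotes = [t for t in in_window if t.quotes_trump]
false_claims = [t for t in trump_quotes if t.claim_is_false]
undisputed = [t for t in false_claims if not t.disputed]

print(f"false/misleading share of Trump quotes: "
      f"{len(false_claims) / len(trump_quotes):.0%}")
print(f"undisputed false claims per day: {len(undisputed) / study_days:.1f}")
```

Run against the reported figures rather than this toy sample, the arithmetic checks out: 407 undisputed claims over the 21-day window is roughly 19 per day, and The Hill leaving 175 of its 200 false-claim tweets undisputed is the 88 percent pass-along rate.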

“Social media and making extreme news have evolved in tandem over the last 10 or 20 years.” The Verge’s Jacob Kastrenakes took a closer look at the 12 research projects that are getting access to Facebook data. Here are a couple:

A project led by R. Kelly Garrett, an associate professor at Ohio State University, will look at whether there are predictable patterns that lead to sharing fake and dubious news stories. Facebook’s data, Garrett says, will provide things that traditional methods of data gathering can’t offer. “People can’t reliably tell you, ‘I usually share stuff I haven’t bothered reading in the middle of the night, in spring, on weekends,’” he says. “People don’t know or have incentives not to tell you the truth.” Garrett hopes to identify patterns that track across social media networks, which could help online platforms make changes to discourage the sharing of fake stories.
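For a sense of what that kind of pattern-mining might look like, here is a toy Python sketch that tallies unread shares by hour of day. The event schema is invented for illustration; Facebook’s actual dataset and Garrett’s methods are not public.

```python
from collections import Counter

# Invented share-event records; real fields would come from Facebook's
# research dataset, which is not public.
events = [
    {"hour": 1, "weekday": "Sat", "clicked_before_share": False},
    {"hour": 1, "weekday": "Sun", "clicked_before_share": False},
    {"hour": 14, "weekday": "Tue", "clicked_before_share": True},
]

# Tally shares where the user never opened the link, grouped by hour:
# the "share stuff I haven't bothered reading in the middle of the night"
# pattern that self-reports can't reliably surface.
unread_by_hour = Counter(
    e["hour"] for e in events if not e["clicked_before_share"]
)
for hour, count in sorted(unread_by_hour.items()):
    print(f"{hour:02d}:00  unread shares: {count}")
```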

Several research groups will also take advantage of the ability to study sharing behaviors on Facebook before and after an algorithm change designed to promote friends over media sources. “What’s tricky is that both social media and making extreme news have evolved in tandem over the last 10 or 20 years,” says Nicholas Beauchamp, an assistant professor at Northeastern University, who’s leading a research group that’s studying how peer sharing affects the polarization of news. His group will look across the algorithm change to see whether peer sharing changes the rates of fake news. “We have this nice little kind of natural experiment,” he says, “where suddenly there’s this unexpected shift towards much more peer sourced information.”
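The before/after comparison Beauchamp describes can be sketched in a few lines. The counts below are invented, and a real analysis would have to control for everything else that changed around the same time; this only shows the bare shape of the natural experiment.

```python
import math

# Invented share counts on either side of the algorithm change.
fake_before, total_before = 120, 10_000
fake_after, total_after = 95, 10_000

rate_before = fake_before / total_before
rate_after = fake_after / total_after

# Two-proportion z-test: is the shift in the fake-news share rate
# larger than chance alone would explain?
pooled = (fake_before + fake_after) / (total_before + total_after)
se = math.sqrt(pooled * (1 - pooled) * (1 / total_before + 1 / total_after))
z = (rate_before - rate_after) / se

print(f"rate before: {rate_before:.2%}, rate after: {rate_after:.2%}, z = {z:.2f}")
```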

“A spiral of silence.” The Institute for the Future (IFTF) released a report and eight case studies about how particular social and issue-focused groups in the U.S. — Muslim Americans, Latinos, moderate Republicans, black women gun owners, environmental activists, pro-choice and pro-life campaigners, immigration activists, and Jewish Americans — were the targets of misinformation on Twitter during the 2018 midterms and likely will be again in 2020. The research was first written up by BuzzFeed. Samuel Woolley, the director of IFTF’s Digital Intelligence Lab, told Craig Silverman and Jane Lytvynenko: “We think that the general goal of this [activity] is to create a spiral of silence to prevent people from participating in politics online, or to prevent them from using these platforms to organize or communicate.”

Spiral of silence theory, first proposed by the German political scientist Elisabeth Noelle-Neumann in 1974, holds that people who believe their positions are unpopular will choose not to voice them rather than risk social isolation. Her original conception focused primarily on unpopular opinions and their portrayal in mass media; with the rise of social media, the theory has also been applied to generally accepted beliefs whose expression can prompt harassment from a small but aggressive group.

Here are the researchers’ main findings:

(1) Human social media users, not bots, produced the majority of harassment — but bots continue to be used to seed and promote coordinated disinformation narratives;
(2) Adversarial groups are co-opting images, videos, hashtags, and information previously used or generated by social and issue-focused groups — and then repurposing this content to camouflage disinformation and harassment campaigns;
(3) Disinformation campaigns rely on age-old stereotypes and conspiracy theories — often attempting to foment both intra-group polarization and external arguments with other groups; and
(4) Social media companies’ responses to curb targeted harassment and disinformation campaigns have not effectively protected the groups studied here from such content.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura@niemanlab.org) or Bluesky DM.