Jan. 25, 2019, 9:37 a.m.
Audience & Social

Do people fall for fake news because they’re partisan or because they’re lazy? Researchers are divided

Plus: Real-life consequences after you get harassed online, watching your boyfriend become radicalized, and what is Fox News, exactly?

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Where the research splits. Here’s a helpful meta-analysis of the fake news analysis. For The New York Times, psychologists Gordon Pennycook and David Rand, who’ve done plenty of their own fake news research, write:

Much of the debate among researchers falls into two opposing camps. One group claims that our ability to reason is hijacked by our partisan convictions: that is, we’re prone to rationalization. The other group — to which the two of us belong — claims that the problem is that we often fail to exercise our critical faculties: that is, we’re mentally lazy.

However, recent research suggests a silver lining to the dispute: Both camps appear to be capturing an aspect of the problem. Once we understand how much of the problem is a result of rationalization and how much a result of laziness, and as we learn more about which factor plays a role in what types of situations, we’ll be better able to design policy solutions to help combat the problem.

“People who shared fake news were more likely to be older and more conservative.” Echoing other recent studies, researchers found that people who shared fake news on Twitter between August and December 2016 were more likely to be older and more conservative, and were concentrated in a “seedy little neighborhood” on Twitter, according to Northeastern’s David Lazer — “Only 1 percent of individuals accounted for 80 percent of fake news source exposures, and 0.1 percent accounted for nearly 80 percent of fake news sources shared.”

The authors suggest a few ideas for reducing the spread of fake news — for example, limiting the number of political URLs that any one user can share in a day:

Platforms could algorithmically demote content from frequent posters or prioritize users who have not posted that day. For illustrative purposes, a simulation of capping political URLs at 20 per day resulted in a reduction of 32 percent of content from fake news sources while affecting only 1 percent of content posted by nonsupersharers.
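To make that illustrative cap a little more concrete, here’s a rough sketch — not the researchers’ actual code, and run on invented toy data rather than their Twitter panel — of how a per-user daily cap on political URL shares might be simulated and its effect on fake-news-source content measured:

```python
from collections import defaultdict

CAP = 20  # hypothetical per-user daily cap on political URL shares

def simulate_cap(shares, cap=CAP):
    """shares: time-ordered list of (user_id, day, is_fake_source) tuples.
    Returns the fraction of fake-news-source shares the cap would remove."""
    per_user_day = defaultdict(int)
    fake_total = 0
    fake_kept = 0
    for user, day, is_fake in shares:
        per_user_day[(user, day)] += 1
        if is_fake:
            fake_total += 1
            if per_user_day[(user, day)] <= cap:
                fake_kept += 1
    return 0.0 if fake_total == 0 else 1 - fake_kept / fake_total

# Toy data: one "supersharer" posting 100 fake-source URLs in a day alongside
# many ordinary users each sharing a single non-fake link.
toy = [("supersharer", 1, True)] * 100 + [(f"user{i}", 1, False) for i in range(500)]
print(f"Fake-source content removed by the cap: {simulate_cap(toy):.0%}")
```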

By the way, the team also found that “incongruent sources,” i.e. sources that didn’t fit with a person’s political beliefs, “were shared at significantly lower rates than congruent sources (P < 0.01), with two exceptions. First, conservatives shared congruent and incongruent nonfake sources at similar rates. Second, we lacked the statistical power to assess sharing rates of conservatives exposed to liberal fake news, owing to the rarity of these events.”

Going back to the op-ed at the start of this column — do people fall for fake news because they’re partisan or because they’re lazy? — this is evidence on the “they’re partisan” side of the ledger. The authors write:

These findings highlight congruency as the dominant factor in sharing decisions for political news. This is consistent with an extensive body of work showing that individuals evaluate belief-incongruent information more critically than belief-congruent information.

“I was extremely naive. I believed that people were simply misinformed.” The Guardian has a very sad article on what real life is like for people who were the victims of online conspiracy theorists: parents of children who were murdered at Sandy Hook; a Massachusetts resident with autism who was falsely pegged as the Parkland shooter; Brianna Wu, who got caught up in Gamergate. (For three of the five people profiled here, Infowars was heavily involved in the harassment.) Here’s Lenny Pozner, whose six-year-old son Noah was killed by the Sandy Hook shooter, on how he initially tried to confront the people who claimed that Sandy Hook was a hoax:

“I was extremely naive. I believed that people were simply misinformed and that if I released proof that my child had existed, thrived, loved and was loved, and was ultimately murdered, they would understand our grief, stop harassing us, and more importantly, stop defacing photos of Noah and defaming him online.”

Instead, he watched his deceased son buried a second time, under hundreds of pages of hateful web content. “I don’t think there’s any one word that fits the horror of it,” Pozner says. “It’s a phenomenon of the age which we’re in, modern day witch-hunts. It’s a form of mass delusion.”

Wu faces harassment to this day:

A woman turned up at her alma mater, the University of Mississippi, impersonating her in an attempt to acquire her college records. Someone else surreptitiously took photos of her as she went about her daily business. Wu was unaware of it until she received anonymous texts with pictures of her in coffee shops, restaurants, at the movies.

An accurate floor plan of her house was assembled and published online, along with her address and pictures of her car and license plate. And then there were the death threats — up to 300 by her estimate. One message on Twitter threatened to cut off her husband’s “tiny Asian penis.”

Pozner and his wife have had to move eight times in five years because people keep tracking down and publishing his address; he “has deliveries sent to a separate address and has rented multiple postal boxes as decoys.” Wu and her husband had to evacuate their house and stay with friends and in hotels. Instead of hunkering down, though, Pozner and Wu have fought back. Wu ran for a House seat in Massachusetts and lost in 2018 but has vowed to run again in 2020. Pozner, along with other Sandy Hook families, is fighting Alex Jones in court, with the families achieving recent victories in two separate defamation lawsuits, though there’s still a long way to go.

Semi-related: This article from MEL Magazine (which is the men’s digital magazine run by internet razor company Dollar Shave Club) is about what it’s like for women who watch their boyfriends become radicalized online. One commonality is that these men really suck to be around!

One story:

“Our relationship started normally: We went for walks, saw films, went out for dinner. Most of the ‘arguments’ we’d have would be where to go out on a date. When I moved in with him after graduation, the arguments were about who would do the washing up or the cooking that night,” she says. By the end of their relationship in September, though, she found herself having to not only try to get Craig to do his share of the laundry, but to justify why people should be allowed to speak languages other than English in public, why removing taxes for tampons isn’t unfair, and more bizarrely, why being a feminist isn’t the same as being a Nazi.

“Nearly all the arguments came from YouTube videos he was watching,” Sarah tells me. “Because he’d work at night, he’d spend the day on the internet. He’d be watching them, and send them to me throughout the day on WhatsApp, over email, anywhere really.” During one work meeting in 2016, she received videos from him about a “migrant invasion into Britain, orchestrated by Angela Merkel and Barack Obama,” which showed Libyan refugees getting off a boat carrying large bags and shouting, “Thank you, Merkel!” played over dark orchestral music. Other videos supported Donald Trump’s proposed ban on Muslim immigrants, diatribes on feminism “threatening traditional families” and “scientific evidence” suggesting that white people have higher IQs than black and South Asian people.

With each, he’d ask what her view on it was. Sometimes, she’d say she didn’t know, and he’d “send me more videos, or explain why they were correct.” Other times, when she’d disagree — for example, when it came to whether abortion should be legal — he’d get angry. “He would start off by saying I was wrong, demanding I explain my view — during a work day! When I wouldn’t respond to him immediately, he’d tell me that my view was stupid and idiotic and that I was just another ‘dumb leftie’ who didn’t know what they were talking about.”

Another:

With few friends around him and Ellen at university, he spent the majority of his time online, learning how to trade foreign currency via obscure blogs and YouTube tutorials before wading into more political waters. “It started off fairly mild,” Ellen says, with a slight laugh. “He would WhatsApp me Jordan Peterson lectures about ‘social justice warriors’ on university campuses. Sometimes I’d just ignore them, or say that I didn’t agree with what they were saying. Eventually, he moved on to more extreme material. He would send me videos by Stefan Molyneux about the links between race and IQ, or how it was scientifically proven that Conservative women were more attractive and left-wing women like me were fat and ugly.”

Also: YouTube, ugh.

Speaking of which, there’s also this piece at BuzzFeed from former Nieman Lab staffer Caroline O’Donovan and future New York Times opinion writer Charlie Warzel:

How many clicks through YouTube’s “Up Next” recommendations does it take to go from an anodyne PBS clip about the 116th United States Congress to an anti-immigrant video from a designated hate organization? Thanks to the site’s recommendation algorithm, just nine…

The Center for Immigration Studies, a think tank the Southern Poverty Law Center classified as an anti-immigrant hate group in 2016, posted the video to YouTube in 2011. But that designation didn’t stop YouTube’s Up Next from recommending it earlier this month after a search for “us house of representatives” conducted in a fresh search session with no viewing history, personal data, or browser cookies. YouTube’s top result for this query was a PBS NewsHour clip, but after clicking through eight of the platform’s top Up Next recommendations, it offered the Arizona rancher video alongside content from the Atlantic, the Wall Street Journal, and PragerU, a right-wing online “university.”

How exactly should we describe (and research) Fox News? Is Fox News propaganda or a reliable news source? How should the researchers who are studying it, and the people who are writing about it, label it? Jacob Nelson asked “a number of academics who have researched partisan news generally and Fox News specifically” how they characterize the most-watched basic cable network in America. Not surprisingly, they say: It’s complicated.

Though scholars like [Louisiana State University’s Kathleen] Searles assert that the categorization of Fox as a partisan news outlet akin to MSNBC continues to be accurate, others think that kind of comparison no longer applies. As [Rutgers associate professor Lauren] Feldman explains, “While MSNBC is certainly partisan and traffics in outrage and opinion, its reporting — even on its prime-time talk shows — has a much clearer relationship with facts than does coverage on Fox.” Princeton University assistant professor Andy Guess echoes this point: “There’s no doubt that primetime hosts on Fox News are increasingly comfortable trafficking in conspiracy theories and open appeals to nativism, which is a major difference from its liberal counterparts.”

But maybe, NYU visiting assistant professor and Tow Center fellow A.J. Bauer argues, Fox News really should be studied as a news outlet.

Taking conservative news seriously — granting that it is, indeed, a form of journalism — destabilizes our traditional normative ways of thinking about news and journalism. [But] those categories are already thoroughly destabilized among the general public, and it’s long since time that journalists and scholars reckoned with this problem directly.

(Bauer wrote a prediction for us last month, asking: “What happens to the conservative mediasphere when it loses its current center of gravity?”)

The red flags. Data & Society has a nice chart capturing “a step by step process for reading metadata from social media content. The goal for each step is to evaluate different types of ‘red flags’ — characteristics which can, when taken together indicate likely manipulation and coordinated inauthentic behavior.” Among those red flags: “Pervasive use of linkshorteners for automated messaging and mass content posting,” “total absence of content or geotags,” and “automated responses from other accounts (e.g., ‘Thanks for the follow! Check out my webpage!’).”
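Those flags are meant to be weighed together by a human analyst rather than applied mechanically, but as a rough illustration — with invented field names, not any platform’s real API — here’s how a couple of them could be encoded as a first-pass screen:

```python
import re

SHORTENERS = ("bit.ly", "ow.ly", "tinyurl.com", "goo.gl")
AUTOREPLY = re.compile(r"thanks for the follow", re.IGNORECASE)

def red_flags(posts):
    """posts: list of dicts for one account, each with a 'text' string and an
    optional 'geotag'. Returns a list of red-flag labels; several together
    suggest (but do not prove) coordinated inauthentic behavior."""
    flags = []
    if not posts:
        return flags
    shortener_share = sum(
        any(s in p["text"] for s in SHORTENERS) for p in posts
    ) / len(posts)
    if shortener_share > 0.8:
        flags.append("pervasive use of link shorteners")
    if not any(p.get("geotag") for p in posts):
        flags.append("total absence of geotags")
    if any(AUTOREPLY.search(p["text"]) for p in posts):
        flags.append("automated-looking replies")
    return flags

# Toy example: an account that mostly posts shortened links and auto-replies.
account = [{"text": "Check this out bit.ly/xyz"} for _ in range(9)]
account.append({"text": "Thanks for the follow! Check out my webpage!"})
print(red_flags(account))
```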

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).