Nieman Foundation at Harvard
Aug. 30, 2018, 10:08 a.m.
Audience & Social

Republicans who follow liberal Twitter bots actually become more conservative

“Instead of reducing political polarization, our intervention increased it.”

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

I’ll keep my echo chamber, thanks. Social media companies have been big on injecting “alternative views” into users’ feeds — the idea, seemingly, being that exposing people to values and beliefs that conflict with their own will expand their worldviews or make them more tolerant. (See also: a zillion different “burst your bubble” efforts.) In some ways, this makes all the sense in the world. On the other hand, changing people’s minds is hard.

New research published in PNAS by Duke’s Chris Bail and others suggests that “disrupting selective exposure to partisan information among Twitter users” can actually backfire — and that conservatives who are exposed to liberal views actually become more entrenched in their previous beliefs, while liberals exposed to conservative viewpoints don’t double down nearly as much.

The researchers recruited a group of liberal and conservative U.S. Twitter users, then paid them to follow a bot that, for a month, retweeted messages from elected officials and opinion leaders of the opposing political ideology. (The researchers also surveyed the users regularly to make sure that they were actually seeing the bots’ messages.) Here’s what happened, from the paper:

Although treated Democrats exhibited slightly more liberal attitudes posttreatment that increase in size with level of compliance, none of these effects were statistically significant. Treated Republicans, by contrast, exhibited substantially more conservative views posttreatment. These effects also increase with level of compliance, but they are highly significant. Our most cautious estimate is that treated Republicans increased 0.12 points on a seven-point scale, although our model that estimates the effect of treatment upon fully compliant respondents indicates this effect is substantially larger (0.60 points). These estimates correspond to an increase in conservatism between 0.11 and 0.59 standard deviations.
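The quoted passage reports the same shifts two ways: in raw points on the seven-point ideology scale and as fractions of a standard deviation. A minimal sketch of that conversion, assuming a scale SD of about 1.05 — a value back-solved from the reported pairing of 0.12 points with 0.11 SD, not a number given in the excerpt (the 0.59-SD figure implies a slightly different SD for that model):

```python
# Convert the reported attitude shifts (points on a seven-point ideology
# scale) into standard-deviation units. The pretreatment SD is not given
# in the excerpt; ~1.05 is back-solved from the reported pairing of
# 0.12 points with 0.11 SD, so treat it as an illustrative assumption.
def effect_in_sd(shift_points: float, scale_sd: float = 1.05) -> float:
    """Express a shift on the 7-point scale as a multiple of its SD."""
    return shift_points / scale_sd

cautious = effect_in_sd(0.12)   # most cautious estimate
compliant = effect_in_sd(0.60)  # estimate for fully compliant respondents
print(f"cautious: {cautious:.2f} SD, fully compliant: {compliant:.2f} SD")
```

The point of the conversion is comparability: a 0.12-point shift sounds tiny, but expressing it relative to how spread out attitudes already are is what lets the authors call the fully-compliant effect "substantially larger."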

There are caveats — most people in the U.S. aren’t on Twitter; this was a bot, not a person; people who identify as independents weren’t surveyed; it seems highly possible that the financial incentives skewed the results in some way; and (as always!) this is Twitter, not…all of real life. Still, the study reveals “significant partisan differences in backfire effects,” and “we found no evidence that exposing Twitter users to opposing views reduces political polarization.”

This doesn’t mean that filter bubbles aren’t a problem.

The findings do suggest, however, that (once again) changing people’s minds is really hard — and that conservatives’ minds may be particularly difficult to change. Keep an eye out for more research from this new Duke Polarization Group. And, by the way, it looks as if Twitter is now suggesting accounts to unfollow.

Moving through a “space of hate.” What do we do with the “active haters” on Twitter — the really bad racists and misogynists, the ones who use the most awful words? This week at Northeastern’s Preconference on Politics and Computational Social Science, Northeastern professor Nick Beauchamp shared some recent research on “the light and dark side of online bubbles” — and how some of the most racist, misogynistic Twitter users “move through the space of hate throughout their careers.” (“There’s a bunch of Dutch people in our dataset,” he noted, “reflecting a recent surge in engagement in U.S. politics and hate speech by Dutch speakers.”) The research is from a forthcoming paper, “Trajectories of hate: mapping individual racism and misogyny on Twitter.”

Beauchamp and fellow authors Sarah Shugars and Ioana Panaitiu came up with a set of 1,000 “active haters”: Twitter users who both follow many members of the right-wing elite and use a lot of hateful language (based on the list from Hatebase). What they wanted to know, in Beauchamp’s words: Does the “consumption benefit of racism shift…after the football season kicks in, or something like that?” It turns out, it kind of does: They saw a “clockwise aggregate flow” for tweets containing racist and misogynistic language.

I asked Beauchamp to explain what that means, and here’s what he told me:

The most virulent haters (both racists and misogynists) do seem to have an overall flow where the worst hate does eventually diminish, but what they’re doing instead — other topics, or just the same topics with less hate — is just speculative at this point. Our original theory was more about how they get there than about how they come back, and insofar as we had a theory of return it was more about general interests eventually shifting or regression to the mean than anything more specific — though it will definitely be worth looking more closely at how they get better, for those who do. The last figure in the paper does show a general flow from racism to misogyny, which may suggest where some of those racists went — though not a very optimistic outcome!

The researchers also find that racist and misogynistic speech are deeply connected. From the paper:

Most notably, hate speech of various forms are densely interconnected, with misogyny in particular intertwined with almost all other forms of hate. While racist speech is largely about or directed at black individuals or black people in general, misogynist speech appears both frequently in conjunction with specific (often Democratic in this corpus) women, as well as via terms of opprobrium for other men, including amidst Islamophobic and white-on-white attacks.

While this finding would be no surprise to scholars who emphasize intersectionality or historians of white supremacist movements, it is difficult to situate within the public opinion literature in American politics, which has typically treated attitudes about race and attitudes about gender as two separate entities.
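The two-criterion selection described earlier (following many right-wing elite accounts and frequently using terms from the Hatebase list) can be sketched as a pair of filters that must both pass. Everything below — the account names, the lexicon entries, and the thresholds — is a hypothetical stand-in, not the paper's actual data or cutoffs:

```python
# Sketch of a two-criterion "active hater" selection: a user qualifies
# only if they (a) follow enough accounts from an elite list AND
# (b) tweet terms from a hate lexicon often enough. Names, lexicon
# entries, and thresholds are hypothetical stand-ins.
from dataclasses import dataclass, field

ELITE_ACCOUNTS = {"elite_pundit_1", "elite_pundit_2", "elite_pundit_3"}
HATE_LEXICON = {"slur_a", "slur_b"}  # stand-in for the Hatebase list


@dataclass
class User:
    handle: str
    follows: set = field(default_factory=set)
    tweets: list = field(default_factory=list)


def hateful_tweet_count(user: User) -> int:
    """Number of the user's tweets containing at least one lexicon term."""
    return sum(
        any(term in tweet.lower() for term in HATE_LEXICON)
        for tweet in user.tweets
    )


def is_active_hater(user: User, min_elite_follows: int = 2,
                    min_hateful_tweets: int = 2) -> bool:
    """Both filters must pass: the follow graph AND the language signal."""
    return (
        len(user.follows & ELITE_ACCOUNTS) >= min_elite_follows
        and hateful_tweet_count(user) >= min_hateful_tweets
    )
```

Requiring both signals is what keeps such a set conservative: following elite accounts alone, or using a flagged word once (say, to quote and condemn it), would not be enough to qualify.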

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.
