The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
Maps gone viral. Here’s a cool paper on viral maps — “maps that reach rapid popularity via social media dissemination” — and how they may be used to spread misinformation. Pennsylvania State University’s Anthony Robinson looked at Nate Silver’s “What if only women voted” 2016 election map and the maps inspired by that tweet. (A Twitter search for “map if only voted” turned up more than 500 unique maps — that’s them illustrating this article.)
One map in particular — a kind of uber map, “2016 US Presidential Electoral Map If Only [X] Voted” — was created in response to Silver’s tweet by a graduate student named Ste Kinney-Fields, using data from FiveThirtyEight and 270towin.com, and was widely disseminated leading up to the election. “If only [X] Voted” wasn’t created with the intention of misinforming anyone, but it spread widely: “We found multiple conservative blogs hosted on WordPress which appeared to be using this map image to drive discussions about its political implications,” Robinson wrote.
That’s putting it somewhat politely — sifting through the supplemental materials for the paper, I found Kinney-Fields’ map being used as an illustration on several white supremacist sites in order to argue why only white men should be able to vote.
Kinney-Fields ended up posting an explanation on Medium of how and why the map was created in the first place, noting:
Data is beautiful but visualizations aren’t value neutral. If your data is about race and gender in America, then it is important to take the opportunity to frame the data in explicitly anti-sexist and anti-racist terms. My visualization was used by white supremacists in part because I didn’t do that. I don’t know that there’s anything that would stop them from using this kind of data for their racist gain but it does suck to have my work used that way. I wish I had made the post explicitly anti-racist to begin with. I didn’t and I will continue to deal with that.
“Maps are graphics we trust all the time,” Robinson told Fast Company’s Katharine Schwab. “The attributes of a map that can convince somebody that something’s real haven’t really changed, and that’s part of the problem. My hypothesis is that when people see a map — even when it’s shared on a social media platform, and maybe they should be skeptical — they may not understand it’s being amplified in a certain direction to influence them.”
Back to school. It’s back-to-school time, which means a spate of publications on media literacy. A roundup of a few:
The theme of the most recent issue of Social Education is “the experience, methodologies, resources and research of leading media literacy organizations.” One article looks at techniques for helping students (or, you know, anyone) confront their hidden biases, “letting…complexity” into classrooms without “letting it undermine our students’ learning experience.” Here are a couple of the suggestions from Elizaveta Friesem:
— Focus on power imbalances: Everybody’s biased, but the biases of media conglomerates and big tech companies’ algorithms have the power to affect the most people, so students should be taught to pay special attention to them — particularly since algorithms can “reinforce power imbalances between different social groups and individuals in stealthy hidden ways.”
— “Follow the money.” Students should be taught to look for the funding behind different sides of an argument.
This is particularly critical for topics such as climate change, where the scientists who comprise the academic consensus stating that global warming is real typically have no conflict of interest, while the people who make it their job to publicly argue that climate change is not humanmade are often funded by corporate interests vested in maintaining the energy status quo.
Meanwhile, in the journal Communication Education, Nicole M. Lee issues a reminder that it isn’t just students who need digital media literacy training — it’s adults and, especially, the elderly. People ages 60 and older are the primary victims of cybercrimes in the U.S., and this group lost a combined $339 million from cyber scams in 2016, according to the FBI. Yet “there is a paucity of research on effective strategies for educating adults in general and nondigital natives in particular about safe social media use,” Lee writes, “including protecting one’s privacy, recognizing false information, and avoiding scams.”
Solutions may not be sexy or particularly tech-y. “Studies can explore variables such as political ideology, innate skepticism, age, and gender. For example, are older adults more likely to prefer print materials? Or, will certain sources of information (e.g., academics or government agencies) be less effective depending on participants’ political ideology?”
Anti-vaxxing bots and #VaccinateUS Russian trolls. A George Washington University analysis of nearly 1.8 million tweets sent between 2014 and 2017, published recently in the American Journal of Public Health, found that both bots and Russian trolls tweeted about vaccination at higher rates than average Twitter users. While the bots were more likely to post anti-vaxxing content, the Russian trolls “amplified both sides” in an (ultimately probably unsuccessful) attempt to create discord, often using the hashtag #VaccinateUS. And “a full 93 percent of tweets about vaccines are generated by accounts whose provenance can be verified as neither bots nor human users yet who exhibit malicious behaviors. These unidentified accounts preferentially tweet antivaccine misinformation.” (The researchers categorized the tweets as bot-or-not using Botometer, a tool that isn’t always accurate — hence the wiggle room.) The researchers suggest that public health officials need to keep these ambiguous accounts in mind when they’re trying to figure out how to combat anti-vaccine messaging:
The highest proportion of antivaccine content is generated by accounts with unknown or intermediate bot scores. Although we speculate that this set of accounts contains more sophisticated bots, trolls, and cyborgs, their provenance is ultimately unknown. Therefore, beyond attempting to prevent bots from spreading messages over social media, public health practitioners should focus on combating the messages themselves while not feeding the trolls. This is a ripe area for future research.
“Misinfodemics.” The vaccine-tweets research should be read in partnership with this Atlantic piece on “misinfodemics” by Nat Gyenes and An Xiao Mina:
We know that memes — whether about cute animals or health-related misinformation — spread like viruses: mutating, shifting, and adapting rapidly until one idea finds an optimal form and spreads quickly. What we have yet to develop are effective ways to identify, test, and vaccinate against these misinfo-memes. One of the great challenges ahead is identifying a memetic theory of disease that takes into account how digital virality and its surprising, unexpected spread can in turn have real-world public-health effects. Until that happens, we should expect more misinfodemics that engender outbreaks of measles, Ebola, and tooth decay, where public-health practitioners must simultaneously battle the spread of disease and the spread of misinformation.