Oct. 6, 2017, 8:30 a.m.
Audience & Social

The Russian ads Facebook turned over to Congress are the tip of the iceberg 😬

Plus: How news organizations could work together to stop the spread of misinformation during breaking news events; fighting fake news on WhatsApp.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“They were working to lead people along and develop a sense of trust.” Jonathan Albright, research director at the Tow Center for Digital Journalism, on Thursday published (via Tableau) his research into how six now-closed, Russian-controlled, election-related Facebook accounts spread content in the U.S. The Washington Post’s Craig Timberg wrote up Albright’s findings:

For six of the sites that have been made public — Blacktivists, United Muslims of America, Being Patriotic, Heart of Texas, Secured Borders and LGBT United — Albright found that the content had been “shared” 340 million times. That’s from a tiny sliver of the 470 accounts that have been made public. Even if those sites were unusually effective compared to the 464 others, Albright’s findings still suggest a total reach well into the billions of “shares” on Facebook.

The terminology is important here. For the purposes of these metrics, a “share” is essentially how often a post may have made its way into somebody’s Facebook “news feed” — without determining whether any of these users actually read the post. Another metric, called “interactions,” counts something narrower but more important — the number of times individual users acted on what they had read by sharing a post with their Facebook “friends,” hitting the “like” button, making a comment or posting an emoji symbol.

That measurement for those six accounts, Albright’s research showed, was 19.1 million. That means that more people had direct “interactions” with regular posts from just six accounts than saw the ads from all 470 pages and accounts that Facebook has identified as controlled by the Russian troll farm in St. Petersburg, called the Internet Research Agency.
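To make the scale of that extrapolation concrete, here is a rough back-of-the-envelope sketch in Python. The 340 million shares, 19.1 million interactions, and 6-of-470 account counts come from Albright’s data as reported above; the effectiveness ratios applied to the remaining 464 accounts are illustrative assumptions, not part of his analysis.

```python
# Back-of-envelope sketch of the extrapolation Timberg describes.
# Input figures come from Albright's data as reported above; the
# effectiveness ratios are illustrative assumptions, not his.
PUBLIC_ACCOUNTS = 6            # accounts whose data Albright analyzed
TOTAL_ACCOUNTS = 470           # accounts Facebook identified overall
SHARES_SIX = 340_000_000       # "shares" (news-feed appearances) for the six
INTERACTIONS_SIX = 19_100_000  # likes, comments, user shares for the six

shares_per_account = SHARES_SIX / PUBLIC_ACCOUNTS  # ~56.7 million each

# How the two metrics relate for the six public accounts: only a small
# fraction of feed appearances became direct interactions.
print(f"interaction rate for the six: {INTERACTIONS_SIX / SHARES_SIX:.1%}")  # ~5.6%

# Even if the remaining 464 accounts were only a tenth as effective
# as the six public ones, total reach stays "well into the billions."
for effectiveness in (1.0, 0.25, 0.1):
    others = (TOTAL_ACCOUNTS - PUBLIC_ACCOUNTS) * shares_per_account * effectiveness
    total = SHARES_SIX + others
    print(f"others at {effectiveness:4.0%} effectiveness: ~{total / 1e9:.1f}B shares")

# others at 100% effectiveness: ~26.6B shares
# others at  25% effectiveness: ~6.9B shares
# others at  10% effectiveness: ~3.0B shares
```

Even under the most conservative of these assumptions, the total lands near three billion news-feed appearances, which is the “well into the billions” figure Timberg cites.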

Here’s what some of the Russian-controlled Facebook accounts looked like. [Embedded tweet thread and account screenshots appear in the original post.]

“A prototype for stemming the flood of misinformation during breaking news events.” The shooting in Las Vegas this week quickly led to dozens of hoaxes on social media (see also this). Gabriel Stein, writing for MisinfoCon, offers a good summary of the situation and lays out a plan for how folks in the media could (if they “minimally cooperate”) counter this misinformation. Briefly:

I propose that news organizations counter this misinformation by using the combined power of their algorithmically authoritative websites and reporters on social media as one of these cooperative propaganda networks. With any luck, this coordinated effort will have the effect of getting high-quality news to the top of algorithmically compiled trending sections during breaking news events.

How to fight fake news on WhatsApp. “With WhatsApp, you have no idea how many people are reading what you’re putting in there. It’s like a black box,” Juan Esteban Lewin, a journalist at the Colombian fact-checking organization La Silla Vacía, tells Poynter’s Daniel Funke. Fake content varies by region.

In Argentina and Colombia, messages are often political, containing misinformation about local and national elections. Last month, [Argentinian site] Chequeado debunked a meme found on WhatsApp claiming voters could write in votes against animal abuse on the primary election ballot, when in fact that would nullify their vote. In Colombia, Lewin said La Silla Vacía is doing at least one WhatsApp-based fact check per week and has found that the two biggest topics are the FARC and next year’s congressional and presidential elections.

Meanwhile, in sub-Saharan Africa, [Kate Wilkinson of Africa Check] said most fake news she’s seen isn’t political at all.

“The viral misinformation that we see on WhatsApp is largely messages about some impending danger,” she said. “It’s mainly people passing around messages about crime, violence and severe weather.”

Since WhatsApp groups are limited to 256 members and messages are encrypted, fact-checking organizations are trying to reach out directly to individual users, and also relying on users to send fake news they find in groups to their institutional WhatsApp accounts.
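The group cap is what makes WhatsApp such a different distribution channel from Facebook or Twitter: there is no public feed to broadcast into, only many small, sealed rooms, so the number of groups a fact-checker would need grows linearly with the audience. A minimal sketch of that arithmetic (256 was the group limit at the time; the audience sizes are hypothetical):

```python
import math

# Illustrative arithmetic only: 256 was WhatsApp's group-size limit
# at the time; the audience sizes below are hypothetical.
GROUP_CAP = 256

for audience in (10_000, 1_000_000, 10_000_000):
    groups_needed = math.ceil(audience / GROUP_CAP)
    print(f"reaching {audience:>10,} users takes at least {groups_needed:>6,} groups")

# reaching     10,000 users takes at least     40 groups
# reaching  1,000,000 users takes at least  3,907 groups
# reaching 10,000,000 users takes at least 39,063 groups
```

That linear blow-up, combined with encryption that hides what is actually circulating, is why the fact-checkers above fall back on one-to-one outreach and user-forwarded tips rather than broadcasting or monitoring.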

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).