Nieman Foundation at Harvard
Oct. 6, 2017, 8:30 a.m.
Audience & Social

The Russian ads Facebook turned over to Congress are the tip of the iceberg 😬

Plus: How news organizations could work together to stop the spread of misinformation during breaking news events; fighting fake news on WhatsApp.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“They were working to lead people along and develop a sense of trust.” Jonathan Albright, the research director at the Tow Center for Digital Journalism, on Thursday published to Tableau his research into how six now-closed, Russian-controlled Facebook accounts spread election-related content in the U.S. The Washington Post’s Craig Timberg wrote up Albright’s findings:

For six of the sites that have been made public — Blacktivists, United Muslims of America, Being Patriotic, Heart of Texas, Secured Borders and LGBT United — Albright found that the content had been “shared” 340 million times. That’s from a tiny sliver of the 470 accounts that have been made public. Even if those sites were unusually effective compared to the 464 others, Albright’s findings still suggest a total reach well into the billions of “shares” on Facebook.

The terminology is important here. For the purposes of these metrics, a “share” is essentially how often a post may have made its way into somebody’s Facebook “news feed” — without determining whether any of these users actually read the post. Another metric, called “interactions,” counts something narrower but more important — the number of times individual users acted on what they had read by sharing a post with their Facebook “friends,” hitting the “like” button, making a comment or posting an emoji symbol.

That measurement for those six accounts, Albright’s research showed, was 19.1 million. That means that more people had direct “interactions” with regular posts from just six accounts than saw the ads from all 470 pages and accounts that Facebook has identified as controlled by the Russian troll farm in St. Petersburg, called the Internet Research Agency.

Here’s what some of the Russian-controlled Facebook accounts looked like.

“A prototype for stemming the flood of misinformation during breaking news events.” The shooting in Las Vegas this week quickly led to dozens of hoaxes on social media (see also this). Gabriel Stein, writing for Misinfocon, offers a good summary of the situation and lays out a plan for how folks in the media could (if they “minimally cooperate”) counter this misinformation. Briefly:

I propose that news organizations counter this misinformation by using the combined power of their algorithmically authoritative websites and reporters on social media as one of these cooperative propaganda networks. With any luck, this coordinated effort will have the effect of getting high-quality news to the top of algorithmically compiled trending sections during breaking news events.

How to fight fake news on WhatsApp. “With WhatsApp, you have no idea how many people are reading what you’re putting in there. It’s like a black box,” Juan Esteban Lewin, a journalist at the Colombian fact-checking organization La Silla Vacía, tells Poynter’s Daniel Funke. The fake content circulating on WhatsApp varies by region:

In Argentina and Colombia, messages are often political, containing misinformation about local and national elections. Last month, [Argentinian site] Chequeado debunked a meme found on WhatsApp claiming voters could write in votes against animal abuse on the primary election ballot, when in fact that would nullify their vote. In Colombia, Lewin said La Silla Vacía is doing at least one WhatsApp-based fact check per week and has found that the two biggest topics are the FARC and next year’s congressional and presidential elections.

Meanwhile, in sub-Saharan Africa, [Kate Wilkinson of Africa Check] said most fake news she’s seen isn’t political at all.

“The viral misinformation that we see on WhatsApp is largely messages about some impending danger,” she said. “It’s mainly people passing around messages about crime, violence and severe weather.”

Since WhatsApp groups are limited to 256 members and messages are encrypted, fact-checking organizations are trying to reach out directly to individual users, and also relying on users to send fake news they find in groups to their institutional WhatsApp accounts.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).
PART OF A SERIES     Real News About Fake News