Nieman Foundation at Harvard
Nov. 16, 2018, 9:30 a.m.
Audience & Social

Facebook probably didn’t want to be denying it paid people to create fake news this week, but here we are

Plus: WhatsApp pays for misinformation research and a look at fake midterm-related accounts (“heavy on memes, light on language”).

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Notifications every 2 minutes. The most interesting real-news-about-fake-news this week was the BBC’s in-depth research into how information — and misinformation — spreads via WhatsApp in India. Read all about that here.

(Also elsewhere on Nieman Lab, be sure to check out this piece by Francesco Marconi and Till Daldrup on how The Wall Street Journal is training its journalists to look out for deepfakes — the AI-generated videos that can make people appear to say and do things they really didn’t.)

WhatsApp provides $1 million for misinformation research. Speaking of the globe-spanning chat app, which announced in July that it would fund misinformation-related research: WhatsApp said this week that it’s giving $50,000 each to 20 projects from 11 countries. Among the topics getting funding (Poynter’s Daniel Funke has the full list):

“Is correction fluid? How to make fact-checks on WhatsApp more effective”

“Seeing is believing: Is video modality more powerful in spreading fake news?”

“Misinformation vulnerabilities among elderly during disease outbreaks”

“Values and arguments in the assimilation and propagation of disinformation”

“WhatsApp group and digital literacy among Indonesian women”

“We failed to look and try to imagine what was hiding behind corners.” The New York Times published an alarming look at how Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg “ignored warning signs” of the extent of misinformation on their platform — beginning with its role in the 2016 U.S. presidential election — “and then sought to conceal them from public view.” One of the most egregious parts of the piece reveals how, beginning in October 2017, Facebook ramped up its work with Definers Public Affairs, a Washington-based consultancy “founded by veterans of Republican presidential politics” like former Jeb Bush spokesman (and Crooked Media contributor) Tim Miller that “specialized in applying political campaign tactics to corporate public relations — an approach long employed in Washington by big telecommunications firms and activist hedge fund managers, but less common in tech.” The company worked with Definers to, among other things, “discredit activist protesters, in part by linking them to the liberal financier George Soros.”

The Times’ own TL;DR of the story is here.

Facebook responded to the Times’ allegations in a brief post, saying it cut ties with Definers on Wednesday night and claiming that “Mark and Sheryl have been deeply involved in the fight against false news and information operations on Facebook.” George Soros called for an “independent, internal investigation of [Facebook’s] lobbying and public relations work.” This morning, Sandberg told CBS News:

We absolutely did not pay anyone to create fake news — that they have assured me was not happening. And again, we’re doing a thorough look into what happened but they have assured me that we were not paying anyone to either write or promote anything that was false. And that’s very important.

“Heavy on memes, light on language.” Facebook also posted an update about its takedown of “36 Facebook accounts, 6 pages, and 99 Instagram accounts for coordinated inauthentic behavior” on November 5, following an FBI tip. The source of the accounts hasn’t been confirmed, but the content is similar to that created by Russia’s Internet Research Agency. The Atlantic Council’s Digital Forensic Research Lab (DFRLab) has been analyzing the content and published some of its findings in a Medium post:

Just like the original troll factory, these accounts posted highly divisive content which targeted both sides of America’s partisan gulf. Some, such as @black.voices__ and @ur.melanated.mind, focused on African American communities. Others, such as @maga.people and @4merican.m4de, posed as conservatives. @feminism_4ever and @lgbt_poc focused on gender and race issues; @american.atheist_ and @proud_muslims focused on religion…

The suspect accounts paid particular attention to race and gender issues, including the hyper-sensitive questions of violence against African Americans, and transgender rights.

Some examples from DFRLab’s Ben Nimmo:

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email or Twitter DM (@laurahazardowen).
PART OF A SERIES     Real News About Fake News