Nieman Foundation at Harvard
April 6, 2018, 9:20 a.m.
Audience & Social

“A place of clashing priorities, mistrust, and suspicion”: Guess what this refers to

Hint: Facebook is involved. Plus: Sketchy government efforts against fake news (or “fake news”) in India and Malaysia.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

It all begins with the dashboard. That’s a takeaway from a new report by USC Annenberg assistant professor Mike Ananny, written for the Tow Center for Digital Journalism with funding from Craig Newmark and Knight, about Facebook’s collaboration with publishers and fact-checking organizations to combat misinformation. Ananny specifies that this report isn’t about fake news, but rather about “the values and tensions underlying partnerships between news organizations and technology companies.” Regardless, it provides a bunch of juicy-ish details:

— Everything starts with a fact-checking dashboard. We can’t see it, we only hear about it.

Here’s how it works: through a proprietary process that mixes algorithmic and human intervention, Facebook identifies candidate stories; these stories are then served to the five news and fact-checking partners through a partners-only dashboard that ranks stories according to popularity. Partners independently choose stories from the dashboard, do their usual fact-checking work, and append their fact-checks to the stories’ entries in the dashboards. Facebook uses these fact-checks to adjust whether and how it shows potentially false stories to its users…

The news and fact-checking partners I spoke with said they had virtually no input into the creation or design of this dashboard and were skeptical that Facebook alone had the capability to independently design a solution to such a complex, multifaceted problem. After the initial partnership was announced, Facebook product designers and engineers created the dashboard and then gave partners access to it.

Although no partner would provide me access to the dashboard or show me screenshots of it, all described it and the associated workflow similarly.
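The workflow the report describes — Facebook surfaces candidate stories ranked by popularity, partners independently append fact-checks, and Facebook demotes stories marked false — can be sketched in a few lines. This is purely illustrative: the real dashboard is proprietary, no one outside the partnership has seen it, and every name and number below (including the 80 percent figure, which comes from a leaked email the partners themselves could not verify) is an assumption.

```python
# Illustrative model of the fact-checking workflow described in Ananny's
# report. All class and function names are invented; the actual dashboard
# is proprietary and has never been shown publicly.

from dataclasses import dataclass, field

@dataclass
class Story:
    url: str
    popularity: int  # the engagement signal partners said ranks the queue
    fact_checks: list = field(default_factory=list)  # (partner, verdict) pairs

def dashboard_queue(candidates):
    """Serve candidate stories to partners, most popular first."""
    return sorted(candidates, key=lambda s: s.popularity, reverse=True)

def append_fact_check(story, partner, verdict):
    """A partner independently attaches its verdict to the story's entry."""
    story.fact_checks.append((partner, verdict))

def adjust_distribution(story, base_impressions):
    """Demote stories any partner rated false.

    The leaked email claimed an ~80 percent drop in future impressions;
    that figure is unverified, so treat the 0.2 multiplier as a placeholder.
    """
    if any(verdict == "false" for _, verdict in story.fact_checks):
        return int(base_impressions * 0.2)
    return base_impressions
```

Note that in this model, as in the partners' account, the ranking signal and the demotion multiplier are both opaque inputs controlled entirely by Facebook — which is exactly the asymmetry the interviewees complained about.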

— The publishers and fact-checking organizations appear skeptical of Facebook’s motives, with one partner saying their participation in Facebook’s program came out of a sense of “ethical duty”:

Even though this person doubted the partnership’s value, news and fact-checking organizations need to engage with Facebook because, they said, it was “privileging dark ads” and creating a platform that has “contributed to genocides, slave auctioning, and migrant violence.” This interviewee sees Facebook as a media company and a public utility, and viewed participation in the partnership as one way to hold the company accountable.

— They are also highly skeptical about the dashboard where they find the stories to fact-check:

One participant described deep skepticism about the dashboard’s “popularity” metric, saying they suspected (but could not prove) that stories with high advertising revenue potential would never appear on that list because “sometimes fake news can make money.” Facebook, they suspected, would not want fact-checkers to debunk high-earning stories. Regardless of whether this is true, this skepticism paints the dashboard as a place where clashing priorities, mistrust, or misaligned values can surface through seemingly innocuous design and engineering choices.

Another said “we aren’t seeing [on the dashboard] major conspiracy theories or conservative media — no InfoWars on the list, that’s a surprise.”

— “Several mentioned that The Washington Post was originally supposed to be a partnership member, but withdrew for reasons they thought might be significant but would not comment on. (I was unable to speak with staff at The Post.)” (Hey Posties, bend my ear.)

— Poynter has reported that the teams are each receiving $100,000 a year. Some “refused to accept any payment from the platform. They said that accepting such funding would, to them, breach their independence.” Others are taking the money: “Our model is, if we do the work, you need to buy it. Facebook is using it, and benefiting from it, so we should be compensated for it.”

— More skepticism over the dashboard: One partner said, “Most of the stories [on the dashboard] are SO wrong, they’re kind of easy.” That partner “later wondered if they were missing out on fighting more complex and powerful forms of misinformation that did not meet Facebook’s threshold for popularity.”

— Facebook really does not share data with its partners.

In response to my question about what impact they thought the partnership was having, one partner said only, “I just can’t answer that question without data that only Facebook has.” Citing a leaked email in which Facebook claimed that a “news story that’s been labeled false by Facebook’s third-party fact-checking partners sees its future impressions on the platform drop by 80 percent,” several partners expressed skepticism about this number, saying, “I don’t know how that number is calculated” and “we have no public proof of that” and “I can’t fact-check that claim, and that’s a problem.”

— One site that was defined as particularly problematic by “all” partners: America’s Last Line of Defense, which describes itself as “a collection of the satirical whimsies of liberal trolls masquerading as conservatives” with the mission “to provide an entertaining safe haven from the real news of the day.” Visiting this page is just kind of confusing. “Some partners thought Facebook should ban the site from its platform, others thought it should simply be moved into a secondary area, or that it was sufficient to attach fact-checks to stories that circulated on the website.”

In a follow-up interview, Ananny told CJR:

In an ideal world, honestly, if we could have the editorial and public ethics represented by news organizations, and really good public regulation and public accountability, if we could marry that with a lot of the engineering power and product design power, that would be the holy grail: to do these things with real ethics and principles, and to debate what those ethics are.

I want to see the damn dashboard. (Again, ears are meant to be bent.)

India backtracks on a law against fake news. The Indian government announced on Monday that it would suspend journalists’ accreditations if they created or spread fake news, then abruptly withdrew the guidelines the following day. Sadanand Dhume wrote for The Wall Street Journal:

The abortive clampdown comes amid a heightened media focus on politically well-connected fake-news sites. Last week police in the southern state of Karnataka arrested Mahesh Vikram Hegde, a co-founder of Postcard News. Mr. Hegde’s arrest was sparked by a tweet that showed a picture of a visibly bruised Jain monk with a caption claiming that the holy man was “attacked by Muslim youth.” The monk had in fact been involved in a minor traffic accident…

In many ways, Postcard News highlights the difficulty of cracking down on fake news in India. While no ideology or party has a monopoly on the phenomenon, a clutch of fake-news sites in both English and Hindi promote an agenda in sync with both hard-line Hindu nationalists and the BJP government.

Malaysia actually did pass a law against fake news this week. The New York Times’ Hannah Beech:

The proposal, which allows for up to six years in prison for publishing or circulating misleading information, is expected to pass the Senate this week and to come into effect soon after.

The legislation would punish not only those who are behind fake news but also anyone who maliciously spreads such material. Online service providers would be responsible for third-party content, and anyone could lodge a complaint. As long as Malaysia or Malaysians are affected, fake news generated outside the country is also subject to prosecution.

What qualifies as fake news, however, is ill defined. Ultimately, the government would be given broad latitude to decide what constitutes fact in Malaysia.

The BBC’s Jonathan Head reported on the Malaysian law:

It is not clear anyway that Malaysia has a serious fake news problem.

In a response to the concerns expressed about the new law, the communications and multimedia minister Salleh Said Keruak highlighted the foreign media’s failure to get the sometimes complicated string of official titles for high-ranking Malaysians right — irritating, yes, but hardly a threat to national security.

The article goes on to excoriate mainstream media which have published negative pieces about [Prime Minister Najib Razak], calling them fake news, and thus rather confirming suspicions that the law is aimed at them, rather than the manipulation of social media opinion through fraudulent Facebook accounts and automated Twitter bots.

The Economist calls these and related efforts in the Philippines and Singapore “phoney assaults on fake news”:

A furore in India this week shows what can go wrong. The ministry of information issued rules that would have revoked the credentials of journalists found to be peddling falsehoods. Supportive ministers shared links from The True Picture, an online outfit supposedly dedicated to identifying fake news. But the site, it turns out, was actually run by the media team of Narendra Modi, the prime minister. He abruptly ordered the ministry to rescind its new rules, which had been in force for less than a day. Governments, it seems, are no better than anyone else at discerning genuine news from the fake sort or — worse — no more inclined to truthfulness than those whom they so eagerly denounce.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email or Twitter DM (@laurahazardowen).
PART OF A SERIES     Real News About Fake News