Nov. 22, 2017, 8:30 a.m.
Audience & Social

“Checking Twitter…while being rushed into a bunker”: Considering fake news and nuclear war

Plus: The EU is surveying its citizens on fake news; what CrossCheck learned in France; the upcoming Disinformation Action Lab.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Coming soon: The Disinformation Action Lab. Part of a group of Knight grants announced last week: The Data & Society Research Institute is getting $250,000 to launch the Disinformation Action Lab, which will “use research to explore issues such as: how fake news narratives propagate; how to detect coordinated social media campaigns; and how to limit adversaries who are deliberately spreading misinformation. To understand where online manipulation is headed, it will analyze the technology and tactics being used by players at the international and domestic level.” It continues the work of Data & Society’s Media Manipulation initiative (one of whose reports I covered here).

The details of the Disinformation Action Lab — including who will be hired to lead it — are still being worked out, said Sam Hinds García, Data & Society’s director of communications. The publication of the May report “opened the door for us, institutionally, to have a lot of interesting conversations across different sectors with people who were noticing the same themes and phenomena and hadn’t really aligned around them. The concept of the Disinformation Lab is to engage those actors (platforms, investigative journalists, activists, and less obvious people trying to puzzle through these themes) to try to analyze threats of propaganda and disinformation with a larger set of research methods.”

The focus will be on “testable interventions, grounded in empirical research,” she added. You can sign up to get updates on the project here.

Here’s how CrossCheck France fared. First Draft released a report about its CrossCheck France project, which brought together “37 newsrooms, universities, nonprofits, and tech companies to fight rumors and fabrications around” the 2017 French presidential election. My colleague Shan Wang covered CrossCheck France here and here; from Shan’s May piece, here are some of the things that the team had wondered about:

How did the way debunking headlines were worded change readers’ responses? What about the use of visuals? Importantly, did having multiple newsrooms — and multiple logos appended to each fact check — actually increase readers’ trust in the information? Or did it feed into the narrative of an out-of-touch media elite, huddling together? (“We did get people who asked, ‘Who’s behind you?’ ‘Who’s funding you?’” [First Draft’s Claire] Wardle said.)

The new report — which was done by independent researchers commissioned by CrossCheck — found overall that “CrossCheck appears to have gained the trust of a large and politically diverse audience…The fact that the project included local outlets appears to have been one of the reasons why the project reached people across the political spectrum. The perceived impartiality of the project was also one of the reasons that it appealed to a wide audience.” The report includes interviews with audience members and with the journalists themselves, and also notes three “future considerations”:

1. Undertaking additional research on effective debunks using images and videos. As the project evolved, changes took place to the original processes. For example, it became clear that including screenshots as the ‘hero’ image on the posts (which then got automatically dragged into social media posts) meant that CrossCheck was perpetuating the original piece of fabricated content. [Agence France-Presse] therefore designed a graphic template which allowed editors to use these alongside any image that referenced the fabricated content…The impact of this needs to be researched in greater detail. In addition, towards the end of the project, CrossCheck editors started making short explainer videos for Facebook. The metrics immediately showed that they were being shared widely, but more research needs to be undertaken about the most effective ways of creating video-based debunks and fact-checks.

2. Understanding the “tipping point.” Reporting on disinformation requires different considerations, and the threat of giving oxygen to rumors means that newsrooms will need to give additional thought to when and how to report on these types of stories.

During CrossCheck, decisions were taken collectively. More analysis needs to be undertaken about where this tipping point sits, and what metrics journalists should be looking at before they decide whether and how to publish a story on a particular rumor or piece of fabricated content.

3. Understanding the importance of cultural and time-bound contexts for collaborative projects. It is very likely that CrossCheck would never have got off the ground if First Draft had had a longer lead time (which would have given senior editors more time to say “no”) or if there hadn’t just been the active conversations about disinformation and its impact on the US presidential election.

While the results of this research have been very positive, attempts to run similar projects around the UK and German elections have been less successful at getting newsrooms to collaborate. It’s important we understand why CrossCheck worked in the French context.

“An EU-level strategy on how to tackle the spreading of fake news.” The European Commission launched “a public consultation on fake news and online disinformation and set up a High-Level Expert Group representing academics, online platforms, news media and civil society organizations.” The public can weigh in here, through February 23, 2018; there’s one questionnaire for citizens and one for “legal entities and journalists reflecting their professional experience of fake news and online disinformation.”

“A leader, considering or warned of a nuclear attack, is unlikely to be checking Twitter notifications while being rushed into a bunker. There is no time for that.” A terrifyingly hilarious (hilariously terrifying?) memo (entitled “Three Tweets to Midnight: Nuclear Crisis Stability and the Information Ecosystem”) from think tank The Stanley Foundation looks at “facets of the modern information ecosystem and how they might affect decision-making involving the use of nuclear weapons, based on insights from a multidisciplinary roundtable.”

The memo, which somehow manages to avoid ever mentioning Trump by name, acknowledges that “because the impact of social media on international crisis stability is recent, there are few cases from which to draw conclusions.” Instead, you’ll be comforted to know that there are “more questions than answers” — among the questions:

— To what degree does the information ecosystem make it easier for a leader to use bad information, disinformation, or questionable alternative information sources to shape or buttress his or her preferred decision?
— How do leaders factor messages on social media into perceptions of adversary signals? What messages on social media, and in which contexts, might be effective at signaling? How does the proliferation of message channels affect signal consistency?
— How might online belittling and humiliation affect the emotional state of a decision-maker in a crisis?
— How might the information ecosystem change the likelihood that a leader gets caught in a commitment trap or is able to escape one?
— How and to what extent, if any, could an online public opinion firestorm calling for war from a leader’s political base predispose him or her to escalate a crisis or use nuclear weapons first?
— How might a leader instigate such an online firestorm? How could an adversary, or third party, spark such a firestorm through disinformation?

“They are basically buying good PR by paying us.” The third-party factcheckers working with Facebook are frustrated, reports Sam Levin for The Guardian.

“We’re sort of in the dark. We don’t know what is actually happening,” said Alexios Mantzarlis, director of the International Fact-Checking Network at Poynter, which verifies Facebook’s third-party factcheckers.

He said he appreciated that there “are a lot of people at Facebook who really care about this” but, he added, “the level of information that is being handed out is entirely insufficient…This is potentially the largest real-life experiment in countering misinformation in history. We could have been having an enormous amount of information and data.”

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).