Two years ago, for a research project I was working on, I talked to journalists about the ways user-generated content was handled in their newsrooms. During one interview, an editor actually recreated the groan that went around the news meetings whenever she brought up the “v-word.” She was talking about verification. As we stumble towards the end of 2016, with everyone obsessing over misinformation, I don’t think you’d find any newsroom referring to verification in this way. I would hope not, anyway.
I predict newsrooms will make social discovery and verification skills something they specifically seek in new hires in 2017. This is not the time to be fooled by a photoshopped image, to be taken in when a video emerges with fake BBC branding, or to credit the wrong person on screen because an image surfaced on Twitter when it actually originated in a WhatsApp group and was captured by someone else entirely.
The misinformation ecosystem is much more nuanced than simply fake news. Over the weekend, I saw the inevitable backlash against the term “fake news.” But as Alexios Mantzarlis of Poynter’s International Fact-Checking Network pointed out on Twitter: “the fact that so many are misusing the term ‘fake news’ doesn’t mean it’s not a thing at all.” We need to agree on a taxonomy that explains the complexities of the misinformation ecosystem. Until we do, our attempts at finding solutions will keep going in circles.
As I explained in a recent piece for CJR, during this U.S. election I counted six distinct types of misinformation, from “real” content used in the wrong context, to photoshopped images, to video content carrying branding from mainstream media, to parody content. The full spectrum of misinformation is much broader, and I predict we’ll see work to create a definitive typology so we have a shared understanding of what we mean when we use different terms.
Eliot Higgins of Bellingcat has already shown what is possible when social discovery and verification techniques are applied to long-form investigative questions. The work he and his team did to investigate the downing of flight MH17, using clues that emerged on social media, is a case in point.
ProPublica’s Electionland project, an initiative in which First Draft was directly involved, applied the same skills to a fast-moving story. With an army of 660 students at 14 journalism schools, all trained in social verification techniques, we were able to find and corroborate over 1,000 reports that emerged on social media on election day.
We hope to use similar methodologies to monitor the French election in the run-up to the April polling day, and we are in the planning stages of a project to map hate crimes in the U.S. and to monitor the German election as well.
We often take the blue verification tick for granted, but at some point it became the way we judge the quality of an account across different social networks. It’s an example of an accepted visual grammar. As the arguments swirl about fake news and the misinformation ecosystem, and as social networks push back on the idea that they will ever be the arbiters of truth, the inevitable next step, I hope, is visual cues attached to content that let users navigate what they discover via the social web themselves. These cues would allow a new level of transparency around content: letting users judge how long a piece of content has been circulating, see whether it was shared by someone who has previously had content flagged on a social network, or, at the most basic level, see when the content was first created (when the domain was registered, for example).
My hope is that these new visual cues are created collaboratively, with designers from all the social networks taking advice from social psychologists on the most effective standardised visual flags. As Chris Blow from Meedan argues, we need to think about positive visual cues, rather than simply negative red “debunked” or “fake” stamps on images.
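To make the domain-registration cue concrete: the underlying signal is trivial to compute today. Below is a minimal Python sketch, assuming the third-party python-whois package (pip install python-whois); nothing here reflects how any platform actually works, and WHOIS responses vary between registrars, so the date handling is deliberately defensive.

```python
# Minimal sketch of a "how old is this domain?" signal, assuming the
# third-party python-whois package (pip install python-whois).
# WHOIS output differs between registrars, so treat the parsing here
# as illustrative rather than robust.
from datetime import datetime, timezone

import whois  # import name used by the python-whois package


def domain_age_days(domain: str) -> int:
    """Return the approximate number of days since `domain` was registered."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return several dates
        created = min(created)
    if created.tzinfo is None:  # normalize naive timestamps to UTC
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days


if __name__ == "__main__":
    # A site registered last week arguably deserves more skepticism
    # than one registered twenty years ago.
    print(domain_age_days("example.com"))
```

The hard problem, as Chris Blow’s point suggests, is not computing a number like this but designing a visual flag that users actually understand.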
Just after the Paris attacks of November 2015, I wrote a post expressing dismay that while news organizations often published roundup listicles after breaking news events (e.g. “5 hoaxes you shouldn’t have fallen for during the Paris attacks”), this type of work was not being undertaken in realtime on the social platforms themselves. Comments in response to the post pointed me to some of the amazing work happening in France around realtime debunking. If you haven’t seen it (and can read French!), I would really recommend taking a look at Les Décodeurs, the Vérifié Twitter account from BuzzFeed France, and the Instant Détox TV segment on France Info.
BuzzFeed in the U.S. followed this model during the U.S. election, often sharing realtime debunks. During the first debate, when Donald Trump denied ever saying that global warming was created by the Chinese, a photoshopped tweet started doing the rounds, claiming the Trump team had deleted a 2012 tweet that said exactly that. The claim was false, and BuzzFeed debunked it on Twitter before the debate had even ended.
A question remains over whether newsrooms should be debunking information on the social web at all: if they can’t do it for everything, audiences may conclude that anything without a visible debunk must be true. However, I would argue that during a breaking news story, newsrooms should collaborate around the hoaxes that always do the rounds, whether it’s pictures of comedian Sam Hyde popping up after active-shooter situations, old imagery re-circulating after natural disasters, or people falsely claiming to be victims of an attack in search of media attention. My hope for 2017 is that newsrooms will start to collaborate in realtime on debunking efforts, which will save precious time and resources and free up journalists to guide audiences through the torrent of misinformation that emerges when a news event breaks.
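One technique that would make that kind of realtime collaboration practical is perceptual hashing, which recognizes an image even after it has been resized or re-compressed. The sketch below is hypothetical, assuming the third-party Pillow and ImageHash packages (pip install Pillow ImageHash); the shared hoax catalogue, file paths, and distance threshold are invented for illustration.

```python
# Hypothetical sketch of flagging re-circulated hoax imagery with
# perceptual hashing, using the third-party Pillow and ImageHash
# packages (pip install Pillow ImageHash). The catalogue, paths, and
# threshold are illustrative assumptions, not an existing newsroom tool.
from PIL import Image
import imagehash

# Perceptual hashes of previously debunked images, which collaborating
# newsrooms could precompute and share (file paths are placeholders).
KNOWN_HOAXES = {
    "sam_hyde_shooter_photo": imagehash.phash(Image.open("sam_hyde.jpg")),
    "recycled_disaster_image": imagehash.phash(Image.open("old_disaster.jpg")),
}


def match_known_hoax(path: str, max_distance: int = 8):
    """Return the label of a known hoax this image resembles, or None.

    Perceptual hashes change little under resizing or re-compression,
    so a small Hamming distance suggests the incoming image is a copy
    of one that has already been debunked.
    """
    candidate = imagehash.phash(Image.open(path))
    for label, known in KNOWN_HOAXES.items():
        if candidate - known <= max_distance:  # Hamming distance
            return label
    return None
```

A monitoring desk could run incoming social images through a check like this and route likely matches straight to an existing debunk, saving exactly the time and resources this kind of collaboration is meant to free up.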
Claire Wardle is the former research director of the Tow Center for Digital Journalism and now works at First Draft.