Nieman Foundation at Harvard
April 18, 2018, 10:22 a.m.
Aggregation & Discovery

Truth Goggles are back! And ready for the next era of fact-checking

“Why can’t we use the Cambridge Analytica [method] for good, to help people actually know good things?”

The Truth Goggles are back — though now they’re more like prescription contact lenses.

It’s not the name of a funky band of journalists, at least not one with musical instruments. Dan Schultz, Ted Han, and Carolyn Rupar are the Bad Idea Factory crew reprising Schultz’s 2011 MIT Media Lab graduate thesis project: Truth Goggles, which aimed to help readers isolate suspicious claims in news articles and determine their truthfulness or truthiness. But now they’re switching it up.

“We want to figure out how to get partisan readers to engage with content: How do we package credible content in a pill that partisan readers are going to be willing to swallow? And that’s partisan of all types — that’s the key,” Schultz said. “How do we use technology to help people think about their audience, in the same way that political advertisers really have weaponized? Journalists have not — but why not? Why can’t we use the Cambridge Analytica [method] for good, to help people actually know good things? It’s easier to trigger people’s defenses than to navigate their defenses…Algorithmically there’s a lot of information about people; can we use that to make credible experiences instead of manipulative experiences?”

Truth Goggles 2.0 is part of the Tech & Check Initiative led by the Duke Reporters’ Lab and supported with funding from the Knight Foundation, the Facebook Journalism Project, and the Craig Newmark Foundation. Back in 2012, Andrew Phelps, then a Nieman Lab staffer and now product lead for Apple News, described a test Schultz developed to measure the impact of Truth Goggles:

Here’s how it works. First you’re asked to evaluate the veracity of several statements. “Over 40 percent of children born in America are born out of wedlock.” “Some billionaires have a tax rate as low as 1 percent.” No Googling; you have 20 seconds for each claim.

Next you’re asked to read actual news articles that contain claims in PolitiFact’s database. In “highlight mode,” the claim in question is unobtrusively highlighted; click the text for PolitiFact’s evaluation. In “goggles mode,” all of the text that follows the first highlighted claim is blurred out, making it impossible to read on without engaging the claim first. In “safe mode,” all highlighted phrases are blocked out, forcing the user to reveal each one by one.

Finally, you’re shown many of the same claims again — this time without the goggles — and asked to rate their trueness. Schultz wants to see whether his software influences the way people process information and make conclusions.

The idea found support in the journalism world, but it was, alas, hard to monetize and Schultz was re-entering the workforce after grad school. Now he’s employed at the Internet Archive as a senior software engineer, where he’s also working on the Glorious ContextuBot with folks from Trint and Hyperaudio.

But recently, the Reporters’ Lab’s Bill Adair approached Schultz with the opportunity to bring Truth Goggles back to life. “When Dan tried the first version of Truth Goggles, there was a relatively small database of fact-checks. But the number has grown a lot because of the ClaimReview schema and our Share the Facts widget. A larger corpus of fact-checks means Truth Goggles is more likely to find a match,” said Adair, who described the Tech & Check setup for a previous Nieman Lab piece.
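ClaimReview, the schema Adair mentions, is a schema.org vocabulary that fact-checkers use to publish claims and their ratings in machine-readable form. A minimal sketch of how a tool in this vein might match article text against such a corpus, assuming illustrative sample records and simple substring matching (the article does not describe the real system’s matching logic, and the URL below is hypothetical):

```python
# Illustrative sketch: match article text against ClaimReview-style records.
# The sample data and the naive substring match are assumptions; real
# ClaimReview markup uses schema.org fields such as claimReviewed and
# reviewRating.

FACT_CHECKS = [
    {
        "claimReviewed": "Some billionaires have a tax rate as low as 1 percent",
        "reviewRating": {"alternateName": "Mostly True"},
        "url": "https://example.org/fact-check/billionaire-tax-rate",  # hypothetical
    },
]

def find_claims(article_text: str) -> list[dict]:
    """Return fact-check records whose reviewed claim appears in the text."""
    lowered = article_text.lower()
    return [
        check for check in FACT_CHECKS
        if check["claimReviewed"].lower() in lowered
    ]

article = ("The senator repeated that some billionaires have a tax rate "
           "as low as 1 percent, citing a recent report.")
for match in find_claims(article):
    print(match["reviewRating"]["alternateName"], "-", match["url"])
```

A larger corpus helps exactly as Adair describes: the more claims on file, the more likely a given sentence finds a match, which is why production systems replace the substring check above with fuzzier sentence-similarity matching.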

“I think of Dan’s project as the R&D lab of Tech & Check. Dan is going to experiment with different ways to present fact-checks so they can have broader appeal,” he added. “I like his metaphor that the fact-checker should be like your drinking buddy — a friend who can gently suggest you consider the facts even if the facts run counter to your political beliefs.”

Schultz turned to Han, of DocumentCloud engineering fame, and Rupar, who has a background in public health. Journalism faces a different environment today than it did (for many obvious reasons) in 2011, when the first iteration of Truth Goggles entered the world, and Schultz assembled a team of brains to tackle it. (They’re all part of the Bad Idea Factory, a collective of spunky thinkers in technology and problem-solving.)

“I come at this from a cognitive, linguistics-degree sort of perspective,” Han explained. “This thing I actually care about is what kind of conversations are people having around the news, and can they trust the news. The news-gathering process is supposed to be, like, ‘journalist goes out into the world, journalist investigates a bunch of things, finds a bunch of facts, then they write them down and present them to people.’ And that’s supposed to have an impact on people’s lives, change policy, and all that sort of stuff. So, [I’m] thinking about what role do projects like DocumentCloud or technologies generally speaking play in that process, and how is that stuff actually received and perceived.”

Over the past few months, Schultz, Han, and Rupar have been poring over relevant research in a literature review before revamping the project. But one thing is for sure: There’s no one-size-fits-all prescriptive lens for this.

As outward-facing fact-checking has become more and more prevalent as a journalistic duty, the question has shifted to how to convince people of those fact-checks. PolitiFact, founded by Adair ten years ago, has not been able to perfect a system for readers to accept the checks presented to them. (There’s another inherent flaw in the current fact-checking environment — that people still typically have to seek fact-checks out. The Reporters’ Lab has been testing live fact-checking of events like the State of the Union address, but the clock is ticking. I digress.) That’s where the contact lenses come in — taking into account the reader’s context and worldview — rather than bulky, ill-fitting goggles.

Han and Schultz talked about the example of fact-checking claims about guns — specifically, they said, how “guns” is the word more commonly used by liberals, while “firearms” is the term of reference for conservatives.

“Are there things like that where we could…help use the way that things are presented to build an establishment of value of a person’s identity before presenting the thing that challenges their identity?” Schultz said.

The three are still gathering thoughts for what this specifically means and don’t yet know what form the end result will take. (The Knight grant goes until July 2019, though none are working on this full-time.)

How could others in the journalism world co-brainstorm or help? “I want stories of when people have gotten their readers to thinking-face emoji,” Schultz said. “What has worked for getting people to think hard about a hard problem, versus the emotional gut reaction that we often see on the Internet? That is the challenge: If you’re reading a fact check and it triggers an emotional gut reaction, a visceral ‘I hate this’ or a visceral ‘of course,’ in either case that person is not being thoughtful anymore.”

Photo of optometry equipment by the U.S. Air Force.
