True or false? No googling.
“The total unemployment rate for Hispanic or Latino workers has increased from 10% to 10.3%” between January 2009 and March 2012.
Now, what if I told you President Obama uttered those words? Do you trust the statistic more or less? What if Mitt Romney said it?
The claim is neither true nor false, really; truth is three-dimensional. For the answer, click here to activate Truth Goggles.
Click the text Truth Goggles highlights and you’ll see that PolitiFact rated the claim (it was Romney’s) as “mostly false.” It is true that the unemployment rate for Hispanics and Latinos rose during that period, but the numbers actually fell if February 2009 — Obama’s first full month in office — is used as the baseline.
Imagine if every factual claim were highlighted in news articles — true, false, or otherwise. The gap between consumption and correction of bad information effectively would be reduced to zero. That’s the goal of Truth Goggles, a tool created by MIT master’s graduate Dan Schultz. (Go ahead and drag this Truth Goggles link to your bookmarks bar and try it around the web.) Truth Goggles draws on PolitiFact’s database of about 5,500 fact-checked claims and flags any matches in the article you’re reading.
Schultz is now working as a Knight-Mozilla OpenNews fellow at The Boston Globe, where he will try to continue developing the project part-time. Bill Adair, the editor of PolitiFact, said his operation is considering adopting the source code and building a PolitiFact-branded version of Truth Goggles.
Schultz created Truth Goggles as his thesis project at the MIT Media Lab. He identified three major technology problems that need to be solved or improved for Truth Goggles to become a fully functional, user-friendly product, and he recently shared them with me.
You’re unlikely to see Truth Goggles work on the vast majority of news articles. Truth Goggles matches only exact instances of fact-checked phrases. Taking the example from the top, a reporter could have written: “Romney said the unemployment rate for Hispanics has increased from 10 percent to 10.3 percent since President Obama took office.” That sentence would be invisible to Truth Goggles.
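The limitation above comes down to literal string matching. A minimal sketch of the idea, with a hypothetical claims database (the real Truth Goggles draws on PolitiFact’s roughly 5,500 checked claims; the code below is an illustration, not its implementation):

```python
# A hypothetical database of fact-checked claims, keyed by their exact
# checked wording (lowercased) and mapped to a rating.
CHECKED_CLAIMS = {
    "the total unemployment rate for hispanic or latino workers "
    "has increased from 10% to 10.3%": "mostly false",
}

def find_claims(article_text):
    """Return (claim, rating) pairs whose exact wording appears verbatim."""
    text = article_text.lower()
    return [(claim, rating) for claim, rating in CHECKED_CLAIMS.items()
            if claim in text]

# A verbatim quote is matched...
verbatim = ('Romney said "the total unemployment rate for Hispanic or '
            'Latino workers has increased from 10% to 10.3%" since 2009.')
print(len(find_claims(verbatim)))    # 1 match

# ...but a paraphrase is invisible to exact matching.
paraphrase = ("Romney said the unemployment rate for Hispanics has "
              "increased from 10 percent to 10.3 percent.")
print(len(find_claims(paraphrase)))  # 0 matches
```

Recognizing that the paraphrase and the quote make the same claim is exactly the natural-language-processing problem Schultz describes next.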
Figuring this out is the Holy Grail of automated fact-checkers, Schultz said. Natural language processing is advancing toward software that understands language the way we do, but truly reliable NLP is a long way off. And if the software gets close but still messes up, highlighting the wrong claim would just confuse the user.
Truth Goggles is limited to the claims PolitiFact has checked — an impressive corpus of journalism, sure, but a wimpy number compared to everything politicians have ever claimed. You could add FactCheck.org’s database to the mix. And Snopes, if it ever released an API. Say that gets the number up to 15,000. “That’s not nearly enough to create a system that will be actually relevant on a regular basis,” Schultz said. “Let’s say everything was perfect…you’d still rarely see a highlight.”
This is a problem with fact-checking, not fact-checking software. It can take days to verify a claim that leaves a politician’s lips in seconds. By the time PolitiFact publishes a judgment, that particular claim may be old news. Or it might not have made the news at all. Or maybe it didn’t make the transition from words in a video to words in text. I googled several dozen claims in search of news articles that included them — I wanted to blockquote a real article for the lead of this story, instead of a hypothetical. It was all but impossible. Virtually every result was a fact-check of the claim, or people linking to a fact-check of the claim, or a transcript of whatever the claim appeared in — rather than the false claim itself. So Truth Goggles will not work on most articles, because journalists aren’t writing stories about every claim. (And that’s a good thing, right?)
Setting aside the back-end wizardry, the front-end design of Truth Goggles proved to be a massive project of its own. For Truth Goggles to work, the software has to interrupt a user’s reading without driving him or her crazy.
Schultz conducted a user study in which he presented three interfaces: “Goggles Mode,” which blurs all of the text following the first highlighted claim; “Safe Mode,” which blocks out claims until a user clicks each one to reveal it; and “Highlight Mode,” which highlights claims in yellow while leaving the other text alone. Seventy percent of participants selected “Highlight Mode” when given the choice. (Schultz stresses his user study was not very scientific, since people probably wanted to play with all of the options.)
Then there is the matter of color. Truth Goggles always highlights text in neutral yellow. Red and green are automatic cues — False! True! — which can defeat the purpose of the software. Red and green feel final, sitting opposite each other on the color wheel. That reflects a false polarity of truth, not the continuum. (In fact, PolitiFact uses six flavors of “true” and “false.”)
If I’m an Obama supporter and I see that Romney claim highlighted in red, I only become more deeply entrenched. I might be less inclined to click on the claim to learn more.
“I didn’t want it to be possible for people to become less thoughtful,” Schultz told me. “You’re in a spot where you don’t have to take any more action as to why it’s false.” Plus, PolitiFact can make mistakes; it does update its judgments from time to time. “If you highlight something red as false, and you made a mistake, that is much more damaging than highlighting something as yellow and saying, ‘This has been fact-checked,’” Schultz said.
Indeed, the people problems might prove more daunting than the technology problems. “This is the great challenge in political journalism that, to use a different eyewear metaphor, people see things through their own partisan prisms,” Adair said.
“Even if you are a nonpartisan fact-checker, you’re going to anger one or both sides, and that’s the nature of this disruptive form of journalism. And at a time when people are going into echo chambers for their information, it can be a challenge. The one thing I would say to that is I don’t think what we’re doing is telling people what to think. We’re just trying to tell them information to consider.”
That was the biggest lesson Schultz said he learned: “Trying to tell people what to think is a losing battle,” he wrote in a blog post. The winning battle is telling people when to think.
Photo by photobunny/Earl used under a Creative Commons license.