
Hacking for truth, whatever that is: Ideas to fight misinformation

Give a man a fact, or teach him how to fact-check? Truthiness can be more pervasive, and more powerful, than truth.

After a day of deliberating on Big Ideas — what is truth? how do we defeat its adversaries? what if they’re robots? — the academics and technologists at the Truthiness in Digital Media conference gathered Wednesday at MIT to drum up real-world solutions to tractable problems. (The conference, co-hosted by Harvard’s Berkman Center and the Center for Civic Media, generated a lot of interesting blog posts. I live-blogged the event here.)

Facebook ads that target people likely to believe in political myths and hit them with facts. Common APIs for fact-checking websites. A data dashboard for journalists that guides the writing process with facts about subjects that appear in the text. “Pop-up” fact-checking that interrupts political ads on television.
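
Of those, the common fact-checking API is the easiest to sketch in code. Below is a minimal sketch of what a shared record format and lookup might look like; the class, field names, and per-site search client are all hypothetical, not any existing site’s schema.

```python
from dataclasses import dataclass

# Hypothetical shared record for fact-checking sites; every name here
# is illustrative, not a real API's schema.
@dataclass
class FactCheck:
    claim: str   # the statement that was checked
    rating: str  # e.g. "true", "mostly false", "pants on fire"
    outlet: str  # which site produced the check
    url: str     # link to the full write-up

def lookup(claim_text, outlets):
    """Query each participating site for checks of a claim and merge the
    results. A stub: a real version would hit each outlet's endpoint and
    normalize its rating scale into a shared vocabulary."""
    results = []
    for outlet in outlets:
        results.extend(outlet.search(claim_text))  # assumed per-site client
    return results
```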

While en route to the Truthiness hack day, I ran into Matthew Battles, managing editor at Harvard’s metaLAB and a Nieman Lab contributor. He had an idea for some kind of game, call it “Lies With Friends,” that would play on the joy in lying and, maybe, teach critical thinking. It would be like an online version of the icebreaker Two Truths and a Lie. Or maybe Werewolf.

We gathered a group to sketch ideas for two impossibly brief hours. First, the parameters: Would the game be played against the computer, a single human opponent, or multiple opponents? We settled on multiple opponents, which fits a socially networked world. Maybe it would be a Facebook app.

Would the game evaluate the veracity of players’ claims, or merely their persuasiveness? We picked the latter, because persuasiveness — not truth — is what wins debates. To quote Harry Frankfurt, author of On Bullshit (PDF):

Bullshitters aim primarily to impress and persuade their audiences, and in general are unconcerned with the truth or falsehood of their statements.

Indeed, that’s what Stephen Colbert’s “truthiness” means: “the quality by which one purports to know something emotionally or instinctively, without regard to evidence or intellectual examination,” he said in his show’s first episode. Truthiness is what we want to be true, not necessarily what is true, and it is a more effective political tool than truth.

That theme ran throughout Tuesday’s conference. It’s why fact-checking websites have not extinguished misinformation and have themselves become political weapons. Even Kathleen Hall Jamieson, founder of FactCheck.org, has argued that fact-checking may perpetuate lies by restating them.

Truth is irrelevant. “We’re trying to ensure fidelity to the knowable.”

On Tuesday morning, Jamieson helped frame the conversation: “What is truth? That is an irrelevant question!” she said. “We’re trying to ensure fidelity to the knowable. That is different from the larger world of normative inferences about what is true and what is false. What is desirable and what is good is not the purview of FactCheck.org.”

Chris Mooney, author of “The Republican Brain,” said he used to be wedded to the Enlightenment view: put forth good information and argue rationally, and people will come to accept what is true. The problem, he said, is that people are wired to believe facts that support their worldview, a form of cognitive bias known as motivated reasoning.

Mooney described a phenomenon he calls the “smart idiots effect”: “the fact that politically sophisticated or knowledgeable people are often more biased, and less persuadable, than the ignorant.” He cited 2008 Pew data showing that Republicans with college degrees were more likely to deny human involvement in climate change than Republicans without them, while the effect ran the other way for Democrats.

Mooney turns to Master Yoda for wisdom: “You must unlearn what you have learned.”

Brendan Nyhan, a Dartmouth professor and media critic, conducted his own research: He gave a group of students bad information (that President Bush had banned stem-cell research), then provided a correction (he hadn’t; the policy only restricted federal funding). Liberal students’ minds were unchanged, while conservatives were more likely to accept the correction as true. He calls this disconfirmation bias: being told we’re wrong threatens our worldview and makes us defensive.

“People are much more likely to retweet what they want to be true.”

“In our effort to combat misinformation, if we’re not careful, we can actually make the problem worse,” he said.

Gilad Lotan, the chief scientist at SocialFlow, has access to a wealth of data from Twitter’s “firehose.” He demonstrated that false information often, but not always, spreads wider and faster than the eventual correction.

Take the case of @NBCNewYork erroneously tweeting that the NYPD had ordered news choppers out of the air during Occupy Wall Street, less than a week after protesters were evicted from Zuccotti Park. It was just the ammunition OWS supporters needed: seeming proof of the NYPD’s evil, anti-speech tactics.

The NYPD replied on Twitter that it had no such authority, but it was too late. The data tells all:

Graph showing spread of erroneous tweet versus corrective tweet

The erroneous tweet appears to have spread faster and farther and had a longer tail than the correction. “People are much more likely to retweet what they want to be true,” Lotan wrote.
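
It’s easy to see how such a comparison would be computed from timestamped retweet logs. A minimal sketch, assuming you already have each tweet’s retweet timestamps (the firehose data itself isn’t public):

```python
from datetime import timedelta

def spread_curve(retweet_times, posted_at, step=timedelta(minutes=10)):
    """Cumulative retweet count at fixed intervals after a tweet is posted.
    Plotting the error's curve against the correction's shows the faster
    climb and longer tail Lotan described."""
    times = sorted(retweet_times)
    curve, seen, t = [], 0, posted_at
    end = times[-1] if times else posted_at
    while t <= end:
        while seen < len(times) and times[seen] <= t:
            seen += 1
        curve.append((t, seen))
        t += step
    return curve
```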

So that gets us back to the game. We wanted to create a game that challenges our cognitive biases and stimulates skepticism. (And hopefully a game that would be fun.)

We didn’t get very far on execution. Perhaps a player would select a category, the computer would present two pieces of trivia, and the player would write in a third. Those items would be passed on to the next player, who would have to pick which claim to believe. Maybe rounds would be timed. We would build an experience that rewards players both for advancing claims (true, false, or otherwise) and for calling out bullshit. A leaderboard would show which claims traveled the longest and farthest and which friends were the sharpest critical thinkers.
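
One possible reading of those mechanics fits in a few lines. A minimal sketch, with the Claim fields, point values, and the stand-in for a real player’s guess all invented for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    author: str
    is_lie: bool = False
    hops: int = 0  # rounds the claim has survived, for the leaderboard

def play_round(truths, lie, guesser, scores):
    """One round: the next player tries to spot the lie among two truths
    and one invented claim. Point values are arbitrary placeholders."""
    claims = truths + [lie]
    random.shuffle(claims)
    guess = random.choice(claims)  # stand-in for the guesser's actual pick
    if guess.is_lie:
        scores[guesser] = scores.get(guesser, 0) + 2        # called the bullshit
    else:
        scores[lie.author] = scores.get(lie.author, 0) + 1  # the lie survived
        lie.hops += 1  # how far the claim has traveled
    return scores
```

The leaderboard would then just rank claims by hops and players by score.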

The hack day was more like a think day, but one that I hope moves us to actually build something. I’d like to try when I find the time, though I’d be delighted to see someone else beat me to it.

                                   
  • Anonymous

    Relevant factors to add to the puzzle could be “relative” vs. “absolute” truth and an examination of the motivation behind the decision to spread (or not spread) the info. (Both concepts stem from Buddhism.)
    Also, a threshold for how sure information must be before it’s “sure” enough to act on, versus still being in the realm of hypothesis for discussion and debate, needing further “verification.” Terms should be defined to distinguish information by the propagator’s level of sureness. (Criteria for determining that level: the perceived reputation of the sender, verifiability, personal experience…)

  • http://generoseberry.com Gene

    There is also RbutR going into beta. People should check it out.

  • http://ducknetweb.blogspot.com/ Medicalquack

    Very good, and it goes along with my series called “The Attack of the Killer Algorithms.” I started it by accident, but being a former coder and healthcare blogger, I’m seeing it all the time. I’ll give you the link to public examples here and you can go from there on reading the chapters. I also included a great video from a Twitter bud, NYU professor Charles Seife, given at Google NY; it’s well worth listening to. He also wrote the book “Proofiness: The Dark Arts of Mathematical Deception,” which is worth a read as well.

    http://ducknetweb.blogspot.com/2012/02/attack-of-killer-algorithmsdigest-links.html

    As a former code head, I can completely visualize how folks would do this for profit and pull the wool over the eyes of many.

  • Anonymous

    “People are much more likely to retweet what they want to be true.” Great insight. I was contemplating BS detection, and it is much more than simple BS. From my observations since the birth of the internet, it has become much more useful to categorize content according to George Carlin’s criteria: stupid, full of shit, or nucking futz. http://www.youtube.com/watch?v=w85t5wxxamk