Feb. 2, 2018, 10:11 a.m.
Audience & Social

In Italy, at least, Facebook will let fact-checkers “go hunting” for fake news

Plus: Skepticism about this Biz Stone–backed fake-news-fighting startup may be warranted.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“The great thing is that this will work on anything.” The London-based startup Factmata, which uses AI to “protect people, advertisers, publishers and other businesses from deceptive or misleading content online,” has raised $1 million in seed funding from backers including Craig Newmark, Mark Cuban, and Twitter cofounder Biz Stone, reports TechCrunch’s Ingrid Lunden. The offering, which has not yet launched, seems to include an ad-tech product that would screen out deceptive content; a tool that would work within existing social media platforms to “detect when something that is biased or incorrect is being shared and read”; and, according to Factmata’s website, a “new type of news website,” launching this year:

Users will be able to use our platform to do four things: Rapidly annotate and share news, curate and share collections of news content, and discover new content and track breaking news stories from high quality sources. At every step they’ll be aided by our monitoring platform, using cutting edge natural language processing and artificial intelligence to help spot rogue content.

Here’s a press release in which Biz Stone says, “[Factmata founder Dhruv Gulati] and Factmata are approaching the issue with exactly the right combination of big-thinking, focus, and cutting-edge science.”

Factmata has been around for a little while. It received €50,000 (US$62,262) from the Google Digital News Initiative in November 2016. At that time, Gulati told The Guardian that the company’s mission was to “make fact checking fun, engaging, and empowering” for everyday news consumers, which seems naive at this point, since most people are never going to be interested in active fact-checking. Wired UK’s Rowland Manthorpe also spoke with Gulati last June. Here’s an excerpt from that piece suggesting that some skepticism about Factmata might be warranted:

Factmata is launching its first product on June 8, the day of the UK general election. This will be an extension for Google’s Chrome browser, designed to correct claims related to economic statistics. When Factmata’s text-reading software detects a statement about immigration or employment, the extension will bring up a link to the official government statistic in a little window next to the text, like a real-time footnote.

Gulati shows me a static example, based on a real-life exchange on Twitter. Someone called Chris Conyers is debating, in the futile way of social media, a man using a Republican elephant as his avi. ‘Ha!,’ spits Conyers, when the man, who goes by the name of Albert Parsons, claims Donald Trump will re-energise the US economy. ‘We’re in the longest stretch of economic growth in US history. 82 months.’

Factmata has highlighted those sentences in yellow and linked them to a World Bank chart of GDP growth. ‘The great thing,’ says Gulati, ‘is that this will work on anything. It will work on you and your mate talking about these issues in Facebook comments. We will pull up the chart from the World Bank and our sources of statistics and give you figures.’

I look at it, thinking how annoying it would be to have some automated know-it-all tap me on the shoulder to correct my Twitter exaggerations. I wonder what I’d do, if I was Chris Conyers or Albert Parsons. ‘Of course,’ I say jokingly to Gulati, ‘the World Bank is fake news.’

‘Now you’re asking me a trick question,’ Gulati replies. He appears unaware of the highly politicized nature of both the World Bank and the term ‘fake news.’ Later, we talk about Donald Trump’s so-called Muslim Ban — he doesn’t know that’s a contested term either. Nor does he know that election day is devoid of news, as UK law restricts coverage to uncontroversial titbits, such as the weather, or politicians’ appearances at polling stations.

If you’re launching then, you’ll be too late to make a difference, I tell him.

‘Just put, we’ll release it for the election,’ he says — meaning, just put that in your article. ‘Anyway,’ he adds, ‘I don’t care. We’re releasing it on June 8.’
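The “real-time footnote” extension described in the excerpt above boils down to spotting a claim about a tracked topic and attaching a link to an official statistic. As a rough illustration only (this is not Factmata’s actual method, and the keyword lists and URLs below are placeholders), a naive version might look like this:

```python
# Crude sketch of the "real-time footnote" idea from the excerpt above: scan a post for
# statements about tracked topics and attach a link to an official statistics source.
# This is not Factmata's implementation; the keywords and URLs are placeholders.

TOPIC_KEYWORDS = {
    "employment": ["unemployment", "jobs", "employment"],
    "immigration": ["immigration", "migrant", "net migration"],
    "economic growth": ["gdp", "economic growth", "recession"],
}

OFFICIAL_SOURCES = {
    "employment": "https://example.org/official-employment-figures",   # placeholder
    "immigration": "https://example.org/official-migration-figures",   # placeholder
    "economic growth": "https://example.org/official-gdp-figures",     # placeholder
}

def annotate(text):
    """Return (topic, source_url) pairs to display as footnotes next to the text."""
    lowered = text.lower()
    return [(topic, OFFICIAL_SOURCES[topic])
            for topic, keywords in TOPIC_KEYWORDS.items()
            if any(keyword in lowered for keyword in keywords)]

print(annotate("We're in the longest stretch of economic growth in US history. 82 months."))
# [('economic growth', 'https://example.org/official-gdp-figures')]
```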

Red Herring, meanwhile, asked Stone if his interest in Factmata means that Twitter’s internal efforts to combat the spread of fake news have failed. “Leaders in social media are working hard to put the system back on track, but they realize that this cannot be done by themselves alone,” he answered. “I definitely think it’s a very positive sign that the founders of social media platforms are looking outwards to search for solutions and ideas to these very complex challenges.”

“Familiar and vaguely credible.” Here’s another reason Facebook’s “trust survey” may not work: People tend to treat sources with professional-sounding names as trustworthy even when they’re not. Bernhard Clemm, a researcher at the European University Institute in Florence, wrote for The Washington Post’s Monkey Cage blog this week about research of his suggesting why this kind of crowdsourcing may not work, and he offers a possible solution:

If Facebook’s goal is to take account of the trustworthiness of news media sources, then research suggests its trusted sources metric may not actually be able to do so. In the current form, it is likely to lead to overrating of anyone who manages to find a credible name and let those with partisan interests bias scores.

How could Facebook correct for this? By adjusting for the absolute level of familiarity. Looking back at my study, the trust averages of ‘Deutschlandfunk’ (a professional outlet) and ’24-aktuelles.com’ (a scam site) are very close to each other. But almost twice as many participants indicated their familiarity with the former. This factor could help Facebook to achieve its goal with its ‘trusted sources’ effort.
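Clemm’s suggested fix, adjusting trust scores for how widely a source is known, can be illustrated with a minimal sketch. This is not Facebook’s metric or Clemm’s model; the function and the survey numbers below are hypothetical, chosen only to show how a familiarity weight would separate a well-known outlet from a plausibly named scam site:

```python
# Minimal, hypothetical sketch of "adjusting for the absolute level of familiarity":
# rather than averaging trust only among respondents who say they recognize a source,
# weight that average by the share of all respondents who recognize it.
# Outlet names and survey numbers are made up for illustration.

def familiarity_adjusted_trust(ratings, n_respondents):
    """ratings: trust scores (0-1) from respondents who said they know the source."""
    if not ratings:
        return 0.0
    raw_trust = sum(ratings) / len(ratings)        # average trust among the familiar
    familiarity = len(ratings) / n_respondents     # share of all respondents who know it
    return raw_trust * familiarity                 # little-known sources get discounted

survey_size = 1000
professional_outlet = [0.80] * 600   # 600 of 1,000 respondents know it and rate it 0.80
plausible_scam_site = [0.78] * 320   # roughly half as many know it, but rate it almost as highly

print(familiarity_adjusted_trust(professional_outlet, survey_size))  # ~0.48
print(familiarity_adjusted_trust(plausible_scam_site, survey_size))  # ~0.25
```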

Facebook improves its fact checks ahead of the Italian election. Facebook has a fact-checking team working in Italy ahead of the March 4 parliamentary elections. The checking will be done by the Italian fact-checking outlet Pagella Politica, but the effort differs from Facebook’s previous pre-election initiatives in countries like France and Germany, notes The Washington Post’s Anna Momigliano:

For the first time, the fact-checkers will actively search for fake news instead of relying on alerts from users. ‘We will go hunting,’ Pagella Politica’s chief editor, Giovanni Zagni, said in a telephone conversation. And now, fake content will not be flagged by a red button. ‘From our previous experiences, we’ve learned that flagging content as fake news actually draws more attention on it,’ [Laura Bononcini, Facebook Italy’s head of public policy] said. Instead, an article by Pagella Politica that debunks the hoax will automatically appear as a related story next to the fake news item, said Zagni.

Facebook is holding a meeting with some of its fact-checking partners this month. And if you’re interested in more info about fake news in Italy, check the Reuters Institute’s research released this week, which I wrote up here.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).