Jan. 12, 2021, 10:36 a.m.
Audience & Social

Google is giving $3 million to news orgs to fact-check vaccine misinformation

Projects that demonstrate “clear ways to measure success” and aim to reach groups “disproportionately affected by misinformation” will be prioritized.

The Google News Initiative announced it will give $3 million to news and fact-checking organizations in an effort to combat misinformation about the Covid-19 vaccine.

The tech giant previously awarded fact-checking projects $6.5 million in April and $1.5 million in December, so Tuesday’s announcement brings the grand total flowing from its well-stocked coffers to organizations debunking lies about the pandemic to $11 million. (This is separate from the $1 billion that Google has promised publishers for a new feature called Google News Showcase.)

It’s a lot of cash. So who is getting the money — and will fact-checking be enough?

First Draft, Comprova, Full Fact, Maldita.es, Correctiv, Chequeado, PolitiFact, Kaiser Health News, SciLine, Stanford University, The International Fact-Checking Network, Science Feedback, and Data Leads received funds this spring. In early December, The Australian Science Media Centre and the technology non-profit Meedan got $1 million to create a COVID-19 Vaccine Media Hub that will serve as “a resource for journalists, providing around-the-clock access to scientific expertise and research updates.” (The landing page says “Coming Soon.”)

Google’s new fund is open to news organizations of any size, as long as they can demonstrate experience with debunking false information or form a partnership with a recognized fact-checking organization. Projects that demonstrate “clear ways to measure success” and aim to reach groups “disproportionately affected by misinformation” will be prioritized, Google’s news and information credibility lead, Alexios Mantzarlis, wrote in Tuesday’s announcement. “Eligible applications might include a partnership between an established fact-checking project and a media outlet with deep roots in a specific community, or a collaborative platform for journalists and doctors to jointly source misinformation and publish fact checks.”

Mantzarlis — who served as director for Poynter’s International Fact-Checking Network and managing director of the Italian fact-checking organization Pagella Politica before arriving at Google — notes that immunization misinformation is “a perennial problem,” one that predates (and will very likely outlast) Covid-19. I asked him about the application process, whether fact checks are sufficient in the face of our infodemic, and handling partisan objections to fact-checking efforts.

Sarah Scire: You write that applicants with “clear ways to measure success” will be prioritized. Can you give an example?

Alexios Mantzarlis: I’m wary of giving a specific example to avoid suggesting there’s a “right way.” What we’re really asking is that applicants be very explicit about what they’re hoping to achieve and why that matters. At the very minimum, I hope applicants will find ways to go beyond raw volume of people reached through their project and into more detail about how fact checks were received, whether the audience overlapped at all with those who’d seen related misinformation, and how else this affected the evidence base in public discourse.

Scire: What groups have you identified as “disproportionately affected by misinformation”?

Mantzarlis: Here, too, we’re going to avoid being prescriptive, given the global nature of this fund. We have a great jury of experts from all over the world and I’m hopeful that they’ll help us navigate this important question carefully on a case-by-case basis.

That said, the Open Fund’s invitation to consider the unequal effect of misinformation has at least two vectors: the first is that certain populations, namely the elderly, are more vulnerable to Covid-19 and therefore more vulnerable to related misinformation should it lead them to abjure the appropriate precautions. The second is that subsets of populations globally have been specifically singled out by Covid-19 misinformation — I note that in India fact-checkers have debunked several Covid-19 related hoaxes that targeted the Muslim population specifically.

Scire: I’ve been thinking a lot about a prediction Whitney Phillips wrote for us here at Nieman Lab. She argues facts “aren’t reliably corrective in and of themselves” – especially when believers occupy “a totally different ideological paradigm” than the fact-checkers or “debunkers.” I’m curious how you think about that challenge? What would you say the limitations of fact-checking are?

Mantzarlis: That something is insufficient does not mean that it is (a) unnecessary or (b) ineffective.

Here’s what we know from a meta-analysis of fact-checking: “Simply put … the beliefs of the average individual become more accurate and factually consistent, even after a single exposure to a fact-checking message.” At the same time, “the effects of fact-checking on beliefs are quite weak and gradually become negligible the more the study design resembles a real-world scenario of exposure to fact-checking.”

I take this to mean that we should support fact-checking as a standalone form of journalism because it matters and can have an incremental impact, while at the same time — and this ties back to your first question — finding new ways to assess its real-world impact.

Facts alone won’t do it. But no facts at all is worse.

Scire: The process of fact-checking has been subjected to partisan criticism. Do you feel pressure — externally or internally — to avoid taking stances that will be interpreted as anti-conservative? Is that something you’ve seen publications or individual fact-checkers struggle with?

Mantzarlis: As a former fact-checker, I do have strong feelings here.

On the one hand, I think fact-checkers need to be extraordinarily transparent about their motivations, their methods and their mistakes. That’s even more important when their adjudications have consequences beyond the confines of their own audiences and bleed into how platforms moderate content.

On the other hand, I believe that most efforts to portray fact-checking as a partisan endeavor are a blatant attempt at working the referee that should be resisted. Especially when it comes to topics of life and death.

You can read Google’s full announcement here. Organizations interested in applying can find eligibility requirements and more information at the open fund’s website.

Vaccine photo by Self Magazine used under a Creative Commons license.

Sarah Scire is deputy editor of Nieman Lab. You can reach her via email (sarah_scire@harvard.edu), Twitter DM (@SarahScire), or Signal (+1 617-299-1821).