May 15, 2020, 8:31 a.m.
Audience & Social

Unvetted scientific research about COVID-19 is becoming a partisan weapon

Plus: Conspiracy theories on TikTok, and “over one-quarter of the most viewed YouTube videos on COVID-19 contained misleading information.”

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“The dangers of open-access science in a pandemic.” Preprint servers make it easy for scientists to share academic research papers before they are peer-reviewed or published, and COVID-19 is leading to a flood of research being uploaded. That can be a good thing, getting new and cutting-edge research into decision-makers’ hands quickly, writes Gautama Mehta in Coda Story. But it can also spread misinformation.

As of Thursday evening, medRxiv (pronounced “med archive”), a preprint server run in partnership by BMJ, Yale University, and Cold Spring Harbor Laboratory, had 2,740 COVID-19–related papers.

It’s not as if just anyone can upload anything to a preprint server. There are screening processes in place, Diana Kwon reported in Nature last week, and those have been enhanced in light of COVID-19:

BioRxiv and medRxiv have a two-tiered vetting process. In the first stage, papers are examined by in-house staff who check for issues such as plagiarism and incompleteness. Then manuscripts are examined by volunteer academics or subject specialists who scan for non-scientific content and health or biosecurity risks. BioRxiv mainly uses principal investigators; medRxiv uses health professionals. Occasionally, screeners flag papers for further examination by [bioRxiv and medRxiv cofounder Richard] Sever and other members of the leadership team. On bioRxiv, this is usually completed within 48 hours. On medRxiv, papers are scrutinized more closely because they may be more directly relevant to human health, so the turnaround time is typically four to five days.

Sever emphasizes that the vetting process is mainly used to identify articles that might cause harm — for example, those claiming that vaccines cause autism or that smoking does not cause cancer — rather than to evaluate quality. For medical research, this also includes flagging papers that might contradict widely accepted public-health advice or inappropriately use causal language in reporting on a medical treatment.

But during the pandemic, screeners are watching for other types of content that need extra scrutiny — including papers that might fuel conspiracy theories. This additional screening was put in place at bioRxiv and medRxiv after a backlash against a now-withdrawn bioRxiv preprint that reported similarities between HIV and the new coronavirus, which scientists immediately criticized as poorly conducted science that would prop up a false narrative about the origin of SARS-CoV-2. “Normally, you don’t think of conspiracy theories as something that you should worry about,” Sever says.

These heightened checks and the sheer volume of submissions have meant that the servers have had to draft in more people. But even with the extra help, most bioRxiv and medRxiv staff have been working seven-day weeks, according to Sever. “The reality is that everybody’s working all the time.”

MedRxiv has a disclaimer at the top of its search page: “medRxiv is receiving many new papers on coronavirus SARS-CoV-2. A reminder: these are preliminary reports that have not been peer-reviewed. They should not be regarded as conclusive, guide clinical practice/health-related behavior, or be reported in news media as established information.” But publications don’t always heed that guidance, as in the case of a much-tweeted (and then much-criticized) LA Times article claiming there was a new mutant strain of the virus.

One of the issues that factors into media coverage of preprints is that the journalists covering the coronavirus are not always science reporters. [Fiona Fox, head of the UK’s Science Media Center] told me that many of the people now reporting about preprint studies have been taken off their usual beats and “have no idea what peer review is and have no idea what a preprint is, and are having to cover this because there’s no other story in town.”

This plays into another problem posed by preprint servers: they are essentially dumps of information which require scientific expertise to adjudicate or contextualize. “Everything comes out as it’s received,” [Derek Lowe, who covers the pharmaceutical industry], told me. “There is no way to know what might be more interesting or important, and no way to find it other than by using keyword searches. It really puts people back on using their own judgment on everything at all times, and while that should always be a part of reading the literature, not everyone is able to do it well.”

Jonathan Gitlin wrote for Ars Technica earlier this month:

If a paper posted to arXiv regarding a particular flavor of subatomic particle turns out to be erroneous or flawed, no one’s going to die. But if a flawed research paper about a more contagious mutation of a virus in the middle of a global pandemic is reported on uncritically, then there really is the potential for harm.

Indeed, this is not an abstract fear. We are in the middle of a global pandemic, and a recent study in The Lancet found that much of the discussion (and even policymaking) about COVID-19’s transmissibility (its basic reproduction number, R0) during January 2020 was driven by preprints rather than peer-reviewed literature.

Nobody claims that the conventional peer-review process is perfect. And “a kind of de facto real-time peer review has emerged in the comment sections of preprint studies, as well as in discussions on Twitter,” Mehta notes. “These are precisely the places where large numbers of scientists gathered to discuss the flaws in the Indian study on similarities between the coronavirus and HIV before it was retracted.”

In the worst cases, though, scientific research may also be becoming a partisan weapon, Northeastern’s Aleszu Bajak and Jeff Howe wrote in The New York Times this week.

Conspiracy theories and election disinformation on TikTok. Rolling Stone’s EJ Dickson found COVID-19–related conspiracy theories in abundance on TikTok:

Some of the most popular videos exist at the nexus of anti-vaccine and anti-government conspiracy theorist content, thanks in part to the heightened presence of QAnon accounts on the platform. One video with more than 457,000 views and 16,000 likes posits that Microsoft’s founding partnership in the digital ID program ID2020 Alliance is targeted at the ultimate goal of “combining mandatory vaccines with implantable microchips,” with the hashtags #fvaccines and #billgates. Another popular conspiracy theory, among evangelicals in particular, involves the government attempting to place a chip inside unwitting subjects in the form of a vaccine. Some Christians view this as the “Mark of the Beast,” a reference to a passage in the Book of Revelation alluding to the mark of Satan. The #markofthebeast hashtag has more than 2.3 million combined views on TikTok, and some videos with the hashtag have likes in the tens of thousands.

On The Verge’s podcast this week, Alex Stamos, director of the Stanford Internet Observatory and former chief security officer for Facebook, talked about bad actors on TikTok (“If I was the Russians right now, I would put all of my money, all of my effort behind TikTok and Instagram”).

“Over one-quarter of the most viewed YouTube videos on COVID-19 contained misleading information.” Researchers in Ottawa screened the top COVID-19–related, English-language YouTube videos on March 21 and found that more than a quarter of them contained inaccurate information. The sample size here is small: The researchers started with an original set of 150 videos, but after screening out duplicates, non-English videos, videos without audio, and so on, they had 69 videos to work with; those videos had been viewed over 62 million times. More than 25 percent of the videos contained “non-factual” information, including inaccurate statements (“A stronger strain of the virus is in Iran and Italy”), racism (“Chinese virus”), and conspiracy theories. “Government and professional videos” contained factual information but “only accounted for 11 percent of videos and 10 percent of views.”

Illustration by Andrey Osokin used under a Creative Commons license.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).