Jan. 8, 2024, 2:58 p.m.

Asking people to “do the research” on fake news stories makes them seem more believable, not less

A new study asked thousands to evaluate the accuracy of news articles — both real and fake — by doing some research online. But for many, heading to Google led them farther from the truth, not closer.

When it comes to digital spreaders of misinformation, social media platforms typically get the brunt of the blame. After all, they’re the places with the black-box algorithms, the propagandist bots, and the partisan screamers. On social media, we can watch the bad information spread, in real time. Social media is, at least metaphorically, a passive experience — a place where the news finds you.

But what’s the most enraging thing your uncle can say at Thanksgiving, right after he tells you about how the Rothschilds are behind an army of Colombians set to invade Idaho next week? “I’ve done my own research on this.” In other words:

I’m not just some guy who believes everything in his News Feed. No, I put in the work, double-checking everything Ezr4P0und2024 says about the Protocols. I’m telling you, he’s onto something — it all fits.

There’s something about this appeal to self-authority — that these ideas didn’t just happen to him, that he worked in some sense to rediscover them himself — that’s incredibly powerful.

Media literacy proponents often advocate doing your own research — emphasis on the search, since Google is typically Tool No. 1 here — as a weapon against misinformation. Search horizontally, they say — opening new tabs to seek confirmation or debunking — rather than just scrolling on vertically.

But what if, rather than yanking you toward the light, doing your own research leads you deeper into the information dark?

That’s the question raised by an interesting new paper that was published over the winter break in Nature. Its title is “Online searches to evaluate misinformation can increase its perceived veracity,” and its authors are Kevin Aslett, Zeve Sanderson, William Godel, Nathaniel Persily, Jonathan Nagler, and Joshua A. Tucker. (Lead author Aslett is at the University of Central Florida and Nate Persily is at Stanford. The other four authors are all at NYU.) Here’s the abstract; emphases, as usual, are mine:

Considerable scholarly attention has been paid to understanding belief in online misinformation, with a particular focus on social networks. However, the dominant role of search engines in the information environment remains underexplored, even though the use of online search to evaluate the veracity of information is a central component of media literacy interventions.

Although conventional wisdom suggests that searching online when evaluating misinformation would reduce belief in it, there is little empirical evidence to evaluate this claim. Here, across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them. To shed light on this relationship, we combine survey data with digital trace data collected using a custom browser extension. We find that the search effect is concentrated among individuals for whom search engines return lower-quality information.

Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources. We also find consistent evidence that searching online to evaluate news increases belief in true news from low-quality sources, but inconsistent evidence that it increases belief in true news from mainstream sources. Our findings highlight the need for media literacy programmes to ground their recommendations in empirically tested strategies and for search engines to invest in solutions to the challenges identified here.

Read that again: “Consistent evidence” that, when you ask people to use search to sniff out fake news, they become more likely to believe it, not less. That’s dark stuff, man. And it’s a chance to reengage with the topic of data voids — a key element of how search engines can amplify misinformation rather than debunk it.

Because the phrasing can get a bit unwieldy, the authors use SOTEN as a shorthand for “searching online to evaluate news,” and I’ll do the same. The basics of each study were similar: A group of internet users were given a variety of news articles, many of which had been rated “false or misleading” by a group of six professional fact-checkers. They were asked to read the articles, then to SOTEN — to search online to evaluate the news story — and then to say whether they considered each story true or not. (Others were not asked to SOTEN and acted as controls.) The individual studies varied the methodology to tease out the strength of the effects they found:

  • Study 1 (3,006 participants) involved fresh articles — each published within the previous 48 hours. Their newness made it less likely that fact-checkers and other outlets would have had time to debunk anything that needed debunking.
  • Study 2 (4,252 participants) also involved just-published articles, but participants were asked to evaluate each one twice — once just after reading it and once after SOTENing their hearts out. “If we assume that the respondents have a bias towards consistency,” the authors write, “this offers an even stronger test than in study 1 because, to find a search effect, the respondents would have to change their previous evaluation.”
  • Study 3 (4,042 participants) was like Study 2, but this time the articles to be evaluated were older — published three to six months earlier. Over that timespan, one would hope, the internet’s immune system would’ve had some time to kick in and seed those search results with accurate information.
  • Study 4 (1,130 participants) was also like Study 2, with freshly published articles — but this time those stories were about a highly salient topic, COVID-19. (Study 4 was done in June 2020.) Would people’s online research produce better outcomes if they’re searching about something that’s important to their lives, like an ongoing pandemic?
  • Finally, Study 5 (1,677 participants) was like Study 1, but with a key technological difference: Participants were also asked to download a browser extension that tracked their web-surfing behavior before they rendered a verdict on an article’s accuracy — including the URLs their Google searches produced during their research.

That’s a lot. And the results? They weren’t great for Team Search Engine:

Taken together, the five studies provide consistent evidence that SOTEN increased belief in misinformation during the point in time investigated. In our fifth study, which tested explanations for the mechanism underlying this effect, we found evidence suggesting that exposure to lower-quality information in search results is associated with a higher probability of believing misinformation, but exposure to high-quality information is not.

How much of a difference did SOTEN make? Study 1 found that asking people to research a false claim led to “a 19% increase in the probability that a respondent rated a false or misleading article as true,” compared to people who did no research at all.

How about Study 2, where people were asked to evaluate an article’s accuracy, then go SOTEN, then evaluate it again?

We found that, among those who first rated the false/misleading article correctly as false/misleading, 17.6% changed their evaluation to true after being prompted to search online (for comparison, among those who first incorrectly rated the article as true, only 5.8% changed their evaluation to false/misleading after being required to search online).

Among those who could not initially determine the veracity of false articles, more individuals incorrectly changed their evaluation to true than to false/misleading after being required to search online. This suggests that searching online to evaluate false/misleading news may falsely raise confidence in its veracity.

It sure does suggest that, yes! Study 3, where the articles being reviewed were older? Story age made virtually no difference (“18% more respondents rated the same false/misleading story as true after they were asked to re-evaluate the article after treatment”). Study 4, the one about COVID mid-pandemic? SOTEN brought “an increase in the likelihood of believing a false/misleading article to be true of 20%.”

These results are frustratingly consistent — 19%, 17.6%, 18%, 20%. There sure seems to be something about opening up a new tab to research a false story that makes it seem more believable, at least to a meaningful share of people.
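If you want to see how flip rates like these fall out of raw data, here’s a minimal pandas sketch of the study-2-style computation. (The column names and toy ratings are hypothetical stand-ins, not the paper’s replication data.)

```python
import pandas as pd

# Toy data: one row per evaluation of a false/misleading article,
# rated once before and once after being prompted to search online.
df = pd.DataFrame({
    "pre":  ["false_misleading", "false_misleading", "could_not_determine",
             "true", "false_misleading", "could_not_determine"],
    "post": ["true", "false_misleading", "true",
             "true", "false_misleading", "false_misleading"],
})

# Row-normalized cross-tab: each cell is P(post rating | pre rating).
flips = pd.crosstab(df["pre"], df["post"], normalize="index")
print(flips)

# Study 2's headline number is one cell of this table: the share who
# rated a false article correctly at first, then flipped to "true"
# after searching (17.6% in the actual data).
print(flips.loc["false_misleading", "true"])
```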

Why would that be the case? One potential culprit is those aforementioned data voids. That term was coined by Microsoft’s Michael Golebiewski in 2018 “to describe search engine queries that turn up little to no results, especially when the query is rather obscure, or not searched often.”

Fake news stories, especially their wilder variants, are often spectacularly new. Part of their shock comes from presenting an idea so at odds with reality that their key words and phrases just haven’t appeared together online much. Like, before Pizzagate, would you have expected a Google search for “comet ping pong child rape pizza basement hillary” to turn up much? No, because that’s just a nonsense string of words without a bonkers conspiracy theory to tie it all together.

But imagine that, in 2016, someone heard that conspiracy theory in some remote tendril of the web and — seeking to SOTEN! — went to Google and typed those words in. What results would it have returned? Almost certainly it would have shown links to pro-Pizzagate webpages — because those were the only webpages with those keywords at the time. The bunkers always have a time advantage over the debunkers, and data voids are the result.

Low-quality publishers have been found to use search engine optimization techniques and encourage readers to use specific search queries when searching online by consistently using distinct phrases in their stories and in other media. These terms can guide users to data voids on search engines, where only one point of view — an unreliable one — is represented. Low-quality news sources also often re-use stories from each other, polluting search engine results with other similar non-credible stories…

[Francesca Bolla] Tripodi shows how Google’s search algorithms interact with conservative elite messaging strategies to push audiences towards extreme and, at times, false views. This ‘propaganda feedback loop’ creates a network of outlets reporting the same misinformation and therefore can flood search engine results with false but seemingly corroborating information.

The topics and framing of false/misleading news stories are also often distinct from those covered by mainstream outlets, which could limit the amount of reliable news sources being returned by search engines when searching for information about these stories.

Finally, direct fact-checks may be difficult to find given that most false narratives are never fact-checked at all and, for stories that are evaluated by organizations such as Snopes or PolitiFact, these fact-checks may not be posted in the immediate aftermath of a false article’s publication. As a result, it would not be surprising that exposure to unreliable news is particularly prevalent when searching online about recently published misinformation.

That brings us to Study 5 — the one that combined the participants’ evaluations with their search histories. (In this study, SOTEN’s effect on making false stories seem true was even larger than in studies 1 through 4 — but the authors chalk that up to greater motivation by the participants who were being paid to install those browser extensions.)

Using digital trace data collected through the custom browser plug-in, we are able to measure the effect of SOTEN on belief in misinformation among those exposed to unreliable and reliable information by search engines. To this end, we measured the effect of being encouraged to search online on the belief in misinformation for our control group and two subsets of the treatment group: those who were exposed to Google search engine results that returned unreliable results (defined as at least 10% of links coming from news sources with a NewsGuard score below 60) or very reliable results (defined as the first ten links coming only from sources with a NewsGuard score above 85).

Roughly 42% of all evaluations in the treatment group fit in either of these two subsets; although subsetting the data in this way ignores 58% of the treatment group, we are interested in the effect of search among groups exposed to very different levels of information quality.
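Those two cutoffs are concrete enough to write down as code. Here’s a minimal sketch of the bucketing — the scores are made up, and how unscored links should be handled is my assumption, since the quoted definitions don’t say:

```python
from typing import Optional

def classify_results(scores: list[Optional[float]]) -> Optional[str]:
    """Bucket one set of search results by the paper's two cutoffs.

    `scores` holds a NewsGuard score for each returned link, in rank
    order; None marks links that aren't scored news sources.
    """
    scored = [s for s in scores if s is not None]
    # "Unreliable" subset: at least 10% of news links score below 60.
    if scored and sum(s < 60 for s in scored) / len(scored) >= 0.10:
        return "unreliable"
    # "Very reliable" subset: the first ten links all come from
    # sources scoring above 85.
    first_ten = scores[:10]
    if len(first_ten) == 10 and all(s is not None and s > 85 for s in first_ten):
        return "very_reliable"
    return None  # the remaining ~58% of evaluations land in neither bucket

print(classify_results([92, 95, 88, 90, 91, 99, 87, 93, 96, 89]))  # very_reliable
print(classify_results([92, 45, 88, 90]))                          # unreliable
```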

Broken down into those two groups, the negative effects of SOTEN disappear for those for whom Google returns high-quality results. If Google allocates those 10 blue links to legitimate news sites, people don’t finish their research more likely to believe a bogus story. But if Google serves up a bunch of Gateway Pundit and Western Journal, some number of people will be led astray.

And those low-quality results are more common when people are researching a false story than a true one. Among participants who were asked to research a true story, only 15% got at least one low-quality site among their top Google results. Among those asked to research a false story, though, that percentage jumped to 38%.

But the presence of bogus results isn’t just about the accuracy of the story being researched. Even among people researching the same stories, some get high-grade links and some get junk. Why?

Confession time: When I first read that the negative effect of SOTEN “is concentrated among individuals for whom search engines return lower-quality information,” I was a little confused. (Are there some people who are only allowed to use AltaVista? Does Google hold grudges against certain users and only give them janky results?) It turns out it’s a matter of the quality of people’s search terms. Their Google-fu, you might say.

Another possible explanation is that individuals with low levels of digital literacy are more likely to fall into these data voids. Previous research has found that individuals with higher levels of digital literacy use better online information-searching strategies, suggesting that those with lower levels of digital literacy may be more likely to use search terms that lead to exposure to low-quality search results.

To assess the empirical support for these two potential explanations, we begin by investigating which individual-level characteristics are associated with exposure to unreliable news by fitting an OLS regression model with article-level fixed effects and standard errors clustered at the respondent and article level to predict exposure to unreliable news sites in the search results. We include basic demographic characteristics (income, education, gender and age) in the model.

Evidence from these results suggests that lower levels of digital literacy correlate with exposure to unreliable news in search results after conditioning on demographic characteristics.
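For the methods-curious, that specification translates fairly directly into code. Here’s a sketch in Python with statsmodels — every column name is a hypothetical stand-in, and the authors’ actual replication code may well differ:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per (respondent, article) evaluation,
# with a 0/1 flag for whether that search returned an unreliable link.
df = pd.read_csv("search_exposures.csv")

# OLS with article-level fixed effects (the C(article_id) dummies)
# plus the demographic controls named in the paper.
model = smf.ols(
    "exposed_unreliable ~ digital_literacy + income + education"
    " + gender + age + C(article_id)",
    data=df,
)

# Standard errors clustered two ways, at the respondent and the
# article level; statsmodels takes an (n, 2) array of group codes.
groups = np.stack(
    [pd.factorize(df["respondent_id"])[0],
     pd.factorize(df["article_id"])[0]],
    axis=1,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": groups})
print(result.summary())
```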

So what do the low-google-fu people do to generate bad results? Often, they just literally plug the questionable article headline or URL into Google. If your search terms are “SHOCK REVELATION: SHILLARY CAUGHT DRINKING BABIES’ BLOOD AT MARTHA’S VINEYARD ESTATE,” Google’s probably going to give you results that go along with their assumptions.

To determine whether this affects the quality of search engine results, we coded all search terms for whether they contained the headline or URL of the false article. We found that this is indeed the case. Approximately 9% of all search queries that individuals entered were the exact headline or URL of the original article, and Fig. 3b shows that those who use the headline/lede or the unique URL of misinformation as a search query are much more likely to be exposed to unreliable information in the Google search results.

A total of 77% of search queries that used the headline or URL of a false/misleading article as a search query return at least one unreliable news link among the top ten results, whereas only 21% of search queries that do not use the article’s headline or URL return at least one unreliable news link among the top ten results…

Using the headline/lede as a search query probably produces unreliable results because they contain distinct phrases that only producers of unreliable information use.

An example: One article used in the study was headlined “U.S. faces engineered famine as COVID lockdowns and vax mandates could lead to widespread hunger, unrest this winter.”

The term ‘engineered famine’ in the article is a unique term that is unlikely to be used by reliable sources. An analysis of respondents’ search results found that adding the word ‘engineered’ in front of ‘famine’ changes the search results returned. 0% of search terms that contained the word ‘famine’ without ‘engineered’ in front of it returned unreliable results, whereas 63% of search queries that added ‘engineered’ in front of the word ‘famine’ were exposed to at least one unreliable result. In fact, 83% of all search terms that returned an unreliable result contained the term ‘engineered famine.’
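The coding step itself — flagging a query as a copy-pasted headline or URL — is easy to picture in code. Here’s a rough sketch; the matching rule is my guess at one reasonable implementation, not the paper’s exact procedure, and the URL is a made-up example:

```python
import re

def norm(text: str) -> str:
    # Lowercase, strip punctuation, collapse whitespace so small
    # copy-paste differences don't break an exact match.
    return " ".join(re.sub(r"[^a-z0-9]+", " ", text.lower()).split())

def is_lazy_query(query: str, headline: str, url: str) -> bool:
    """Flag queries that are just the article's headline/lede or URL."""
    return norm(query) == norm(headline) or url.lower() in query.lower()

headline = ("U.S. faces engineered famine as COVID lockdowns and vax "
            "mandates could lead to widespread hunger, unrest this winter")
url = "https://example.com/engineered-famine"  # hypothetical URL

print(is_lazy_query(headline, headline, url))                            # True
print(is_lazy_query("is there famine risk this winter", headline, url))  # False
```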

One final ironic twist: The boost in believability that false articles get from SOTEN doesn’t seem to apply much to true articles — at least not true stories published in mainstream news outlets. The researchers did find that, in some cases, true stories published on low-quality, misinfo-heavy sites seemed more believable after SOTEN. (Even a blind squirrel, etc.) But traditional news outlets get no such boost in believability. (The authors acknowledge a potential ceiling effect here.)

What does this all mean? Asking people to “do their own research” when confronted with questionable information isn’t a cure-all. It isn’t even a cure-most — it’s more likely to make someone believe a false story than it is the opposite. It’s not just about searching — it’s about searching well.

After all, one of the main mantras of the QAnon delusion was “do the research.” The education system has taught us to think of “doing the research” as an uncomplicated good. But just as “reading news” today means something wildly different than “reading a print newspaper” did 30 years ago, “doing the research” today often means dipping into a hyperpolluted sea of misinformation. It’s worlds away from flipping through the local library’s card catalog or cracking open a World Almanac. It takes skill to navigate that sea, and a lot of Americans aren’t great at it.

…our findings suggest that the strategy of pushing people to verify low-quality information online might paradoxically be even more effective at misinforming them.

For those who wish to learn more, they risk falling into data voids — or informational spaces in which there is plenty of corroborating evidence from low-quality sources — when using online search engines, especially if they are doing ‘lazy searching’ by cutting and pasting a headline or URL.

Our findings highlight the need for media literacy efforts combatting the effects of misinformation to ground their recommendations in empirically tested interventions, as well as for search engines to invest in solutions to the challenges identified here.

Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email (joshua_benton@harvard.edu) or Twitter DM (@jbenton).