Oct. 6, 2021, 11:11 a.m.
Audience & Social

“Interesting if true”: A factor that helps explain why people share misinformation

Plus: The struggle of structuring investigative journalism, crowdsourced journalism that works, and how news audiences are addressed in J-schools.

Editor’s note: Longtime Nieman Lab readers know the bylines of Mark Coddington and Seth Lewis. Mark wrote the weekly This Week in Review column for us from 2010 to 2014; Seth’s written for us off and on since 2010. Together they’ve launched a new monthly newsletter on recent academic research around journalism. It’s called RQ1 and we’re happy to bring each issue to you here at Nieman Lab.

A note before we start: We want to write about books, too, and we need your help! We’re interested in including short summaries of books based on academic research into news and journalism, and we’d love to have our readers contribute. So if you’ve read an academic book on news or journalism that’s come out in the past year or two — not your own — that you’d like to tell others about by writing a short summary, please let us know!

News, fake news, and interesting-if-true news

Is there a journalist among us who has not been tempted by a hot story tip that sounds slightly implausible but, hey, would be deliciously fascinating if true?

Imagining that, you can get an idea of why social media users might be inclined to share a news story with their friends that may not be clearly true news or false news but which, either way, would be really interesting if true.

A new study in Digital Journalism explores that hypothetical by introducing the concept of interestingness-if-true — the quality of how interesting a piece of news would be if it were true — and testing how it might be connected to other factors (such as the perceived accuracy of a news item) that help explain why people share news online, true or otherwise.

The authors — the Paris-based team of Sacha Altay, Emma de Araujo, and Hugo Mercier — conducted three experiments. In each, participants in the U.S. were shown 10 news stories (five true, five fake) in random order and asked to rate their accuracy and interestingness-if-true. They were also asked to signal how willing they would be to share those stories.

First, this may seem like a minor point, but it’s an important element of the study: The authors were able to validate that interestingness-if-true is, in fact, a distinct factor of its own — different from the more generic “interestingness.” As the authors explain: “the interestingness of a piece of news takes into account its accuracy, which is maximal if the news is deemed true, and can only decrease from there.”

So, if a story is believed to be true, its interestingness and interestingness-if-true converge: both are deemed relatively strong because of the confidence in the accuracy of the news item. If a story is seen as implausible, by contrast, its interestingness suffers (it’s probably false, which makes it less relevant overall), even as its interestingness-if-true can remain high.

Now, for some good news: In all three experiments, study participants saw the fake stories as being less accurate than the real ones. This confirms previous studies suggesting that laypeople on average can distinguish fake news from the real stuff. Additionally, in all three experiments, participants were more willing to share the true news items as well as news they perceived to be more accurate.

At the same time, however, participants across the studies also found fake news to be more interesting-if-true than true news. Perhaps this is not that surprising; after all, it would be pretty interesting if it were true that “Bill Gates will use microchip implants to fight coronavirus” (as one of the fake news stories used in the experiment suggested).

In the end, the study sought to capture what motivates individuals to share news, all things being equal. And although people were more willing to share information they believed to be accurate, they were also clearly willing to share stories that were interesting-if-true. So, even though fake news was recognized by participants as being less accurate than true news (and therefore to some degree less relevant and shareworthy), the interesting-if-true factor complicated the calculation around sharing. It explained why “people did not intend to share fake news much less than true news.”

The upshot here: People may not always share news of dubious quality simply because they mistake it for real news; instead, maybe they decide that a story’s level of interestingness-if-true outweighs whatever concerns they have about accuracy before they hit the “share” button. In effect, certain fake stories may have “qualities that compensate for (their) potential inaccuracy, such as being interesting-if-true.”

Now, we should conclude, as the authors do, by putting all of this into its larger context. Fake news stories, as they note, represent at most 1% of people’s news diets, largely because most news consumers still rely fairly heavily on mainstream media. And yet, as the authors point out, how do we explain that in experiments such as the ones described here, participants often “declare a willingness to share fake news that is barely inferior to their willingness to share true news”?

There is, it seems clear, much still to learn about how people’s perceptions of relevance — in this case, not only what’s interesting, but also what’s interesting-if-true — may drive their decision-making around what to read, what to believe, and what to share. And, more broadly, we should want to know how perceptions of relevance are influenced by the degree of quality, rigor, and overall “reality” (rather than fakery) in the legacy press on which many people still rely.

Research roundup

“Between structures and identities: Newsroom policies, division of labor and journalists’ commitment to investigative reporting.” By Pauline Cancela, in Journalism Practice.

As much of the news industry continues to retrench and the amount of resources for reporting shrinks, investigative journalism has retained — or perhaps even enhanced — its venerated place in journalism’s professional imagination. Investigative journalism is, as the common thinking goes, rarer than it’s been in decades, which only increases its value as journalists try to justify their work to an increasingly skeptical public.

As Cancela’s study shows, it’s not only those dwindling resources that make investigative journalism difficult to sustain within modern news organizations; investigative journalism’s venerated position within the profession plays a part, too. Cancela observed and conducted interviews at three Swiss news organizations with different models for incorporating investigative journalism, and, well, none of the models worked very well.

The reasons were different in each case, but all of them wrestled with the tension between structures and policies meant to encourage investigative work on the one hand, and the constraints and friction those arrangements produced on the other. When investigative reporters were put on their own team, resentment toward their privileged status festered throughout the newsroom.

When individual reporters were given designated status to undertake investigative work in addition to their day-to-day reporting, the arrangement produced similar animosity, and they never got time to do the investigative work anyway. More individual, ad hoc efforts at investigative work tended to fizzle without managerial support. Managers, Cancela concluded, need to ensure they’re fostering professional legitimacy and individual agency throughout the newsroom in order to make investigative journalism logistically and culturally sustainable.

“‘Crisis coverage gap’: The divide between public interest and local news’ Facebook posts about Covid-19 in the United States.” By Gina M. Masullo, Jay Jennings, and Natalie Jomini Stroud, in Digital Journalism.

Coverage of Covid-19 in the early days of the pandemic was ubiquitous (to the point that it led many consumers to pull back from news because they felt overwhelmed), but that doesn’t mean audiences were getting all the news they wanted. There’s a term for this mismatch in news coverage between journalists and their audiences: the “news gap,” a concept developed by Argentine scholars Pablo Boczkowski and Eugenia Mitchelstein.

Drawing on the notion of the news gap, Masullo and her co-authors wanted to find out how well the distribution of topics in Covid-19 news coverage matched audiences’ interests. Using a three-wave survey of Americans and an analysis of Facebook content posted by news organizations, they found that journalists outpaced public interest in economic and business news in the early days of the pandemic, and underplayed more practical community news, such as what people could expect at their local grocery stores, as well as fact-checks of claims about the pandemic.

Masullo and her colleagues used the data to develop the concept of the “crisis coverage gap,” inspired by the news gap. These discrepancies illustrate a core feature of that gap: “It reinforces existing power structures by covering topics that interest elites.” Still, there were positive signs as well; the gap narrowed over the first few months of the pandemic, and news organizations matched the public’s high demand for news about death rates and particular affected groups.

“The monitorial role of crowdsourced journalism: Audience engagement in corruption reporting in nonprofit newsrooms.” By Lindita Camaj, in Journalism Practice.

Against the backdrop of today’s media environment, the citizen journalism wave of the late 2000s can seem like a naive, idealistic relic of a simpler time when the audience’s input into journalism was believed to be an uncomplicatedly desirable thing. And researchers have certainly wrestled with whether we need to rethink the concept from the ground up.

But Camaj’s study offers a refreshing example of citizen-powered journalism that makes a real democratic difference. Camaj examines Kallxo.com, a nonprofit news organization in Kosovo that relies on the thousands of citizen reports it gets each year. The site is oriented around combating corruption, and it uses these tips as the “dough starter” for all of its reporting, in the words of its editor. All tips are verified by the site’s staff and vetted by its legal department, but the organization also engages in explicit advocacy on behalf of its stories post-publication. Its staff meets regularly with anti-corruption bodies to push for accountability on the issues it’s raised.

The result is an organization that is seen as an ally by citizens, a rarity in Kosovo’s low-trust media environment. As Camaj notes, the site isn’t perfect — it tends to privilege the concerns of more elite bureaucratic leakers over its more blue-collar submissions. But it stands as a fascinating testament to the potential effectiveness of a citizen-driven news organization that combines traditional professional values with a more explicit advocate’s role within a young democracy.

“Polarized platforms? How partisanship shapes perceptions of ‘algorithmic news bias.’” By Mikhaila N. Calice et al., in New Media & Society.

Politicians’ and partisans’ complaints about the news media being biased against them have been around for about as long as the news media. But over the past couple of years, we’ve seen political discourse about media bias spill over into social media platforms, as political figures (particularly on the right) loudly protest what they see as those platforms’ algorithmic bias against their views.

The hostile media effect is a well-established, decades-old theory that explains why we’re predisposed to see media coverage as opposing our views. In this study, Calice and her colleagues from the University of Wisconsin extend that idea to accusations of algorithmic news bias. Using an experiment with faux complaints about biased algorithms attributed to Mike Pence and Joe Biden, they measured how Republicans and Democrats responded to those partisan cues.

They found that the hostile media effect is very much alive when it comes to algorithms: Republicans were significantly more likely to believe that algorithms were politically biased, and partisans on both sides were more likely to affirm that belief after reading an argument from a political figure on their own side. But they also found that Democrats were more likely than Republicans to have their beliefs shaped negatively by the views of an opposing politician. While Democrats were more responsive to partisan cues, the authors reasoned, Republicans’ views may have been more stable because algorithmic bias has already been a much more prominent subject in conservative media.

“Justifying the news: The role of evidence in daily reporting.” By Zvi Reich and Aviv Barnoy, in Journalism.

For many journalists, the notion that they use evidence to build their news stories feels like an obvious, common-sense element of their work. Of course — what else would we make our stories out of? But the question of whether journalists’ stories are actually predominantly built on evidence (as opposed to assertion by sources) has been an open one among journalism scholars for decades.

Reich and Barnoy used a sophisticated research design — two waves of interviews with journalists, including one asking reporters to reconstruct the sourcing of some of their specific stories — and found a pattern of “frequent but inconsistent reliance on evidence.” Just under half of stories used some sort of evidence beyond the assertions of sources, with the most common (and most venerated) forms being documents and eyewitness sources. Video, audio, photos, and first-person observation were rarer and more secondary forms of evidence.

Reich and Barnoy also found that use of evidence increases where knowledge is more challenging to determine: when sources conflict factually, when covering unscheduled events, or when publication is risky. Use of evidence, they concluded, is part of an “economy of effort” through which journalists ration their reporting energies.

“The (ir)relevance of audience studies in journalism education.” By Jacob L. Nelson and Stephanie Edgerly, in Journalism & Mass Communication Educator.

The fact that journalism is a far more audience-centric profession than it has been in previous generations is hardly news. We’re now more than a decade into a journalistic era defined by the prevalence of audience analytics and the ability of audiences to interact with journalists and participate in the news process. But to what degree has that reality seeped into journalism education?

That’s the question Nelson (who recently published a book on journalists’ perceptions of their audiences) and Edgerly set out to answer. They studied the course titles, descriptions, and syllabi at 26 top American journalism schools, looking to see how often they were addressing news audiences, and how they were conceiving of those audiences.

They found that audiences are fairly rarely the subject of J-school courses, and when they are, the focus is heavily on the technical skills of measuring them through analytics. Faculty almost exclusively make the case for the value of these skills in professional terms — as a way for students to get jobs.

These courses, Nelson and Edgerly conclude, narrowly conceive of audiences as “digital, passive, [and] can be manipulated by media professionals with audience data savvy.” What’s missing are broader ideas about audiences as active contributors to the news media environment, as well as attention to marginalized and underserved audiences. Journalism students, the authors argue, would benefit from learning about what analytics exclude just as much as what they reveal.
