April 26, 2021, 12:45 p.m.

No explaining allowed! A new journal promises just-the-facts description, not theory or causality

“First and foremost, we respond to an undersupply of quantitative descriptive research in social science. Causal research that asks the question why has largely taken the place of descriptive research that asks the question what.”

A major trend in digital journalism over the past decade has been the rise of the explainer: the let’s-step-back article or infographic-packed video that takes a big issue in the headlines and, well, tries to explain it. Vox built an entire editorial model around it.

But on the flip side, a very common complaint about the media (particularly from those on the political right) is that reporters spend too much time decoding intentions, describing trends, and deriving meaning — and not enough on reporting. just. the. facts.

There’s a similar debate in academia. How much should researchers invest in answering what versus why and how? Will your work be better if it investigates a hypothesis that might explain a phenomenon? Or would it be more useful to make your goal simply to describe that phenomenon?

In the field of media research, those on Team Describe got a valuable new ally today: a new publication called the Journal of Quantitative Description: Digital Media. Its co-founders are Princeton’s Andy Guess, the University of Zurich’s Eszter Hargittai, and Penn State’s Kevin Munger, all of whom work on issues in and around journalism. Here’s their explanation of their no-explaining model:

We would not be undertaking this endeavor if we thought our journal would simply add to the accumulating stock of existing scholarly venues, mirroring its structure and pathologies through some inescapable process of institutional isomorphism. On the contrary, our hope is that this intervention into the social science journal publishing space pushes the boundaries of the feasible along multiple dimensions: methodological, disciplinary, and financial.

We are here to address some of the failures in the existing structure of publishing outlets, particularly those that cater to quantitative social science researchers. Such failures are many:

1. Trending away from “mere” description. There are macro trends in social science that affect all journals. Many of these trends are good; we applaud the growing attention to causality, for example, and to concerns about generalizability that drive attention to sample composition. But as we describe below, these trends come at a cost to quantitative work that can provide a descriptive foundation for research agendas.

2. Lack of clear standards for substantive importance. The topics that are deemed important too often reflect path dependence, the biases of established scholars and institutions, approved theoretical frameworks from the dominant canon, and the focus of media interest. The whiplash of the past few years of digital media research, the attention paid first to “echo chambers,” then to “fake news,” now to “radicalization,” is inimical to the accumulation of knowledge. All of these topics are worth studying, but we need a more stable metric for “topical importance” than media attention.

3. Adherence to disciplinary and geographic boundaries. Most peer journals are explicitly connected to a single discipline, and all of them are overly concerned with the United States and Western Europe. The topic of digital media is of obvious importance to the entire world.

4. Artificial constraints. Most journals have strict requirements for the length and format of what they publish, making it difficult to find outlets for important contributions of modest scope or idiosyncratic topic. (How many of us have written 8,000-word papers around one interesting finding, or have shelved neat findings because we did not feel like writing an 8,000-word paper around them?)

5. Inefficiencies of peer review. Most will agree that the current mode of journal reviewing is suboptimal. Too many authors wait months only to be told that their submission has been desk rejected; at the same time, too many scholars receive an endless stream of reviewing requests.

Their new journal is meant to address these issues. It has no preset limits on length; it sets its field as “digital media, broadly construed” rather than one of the many disciplinary niches within it; its acquisition process aims to reduce the number of papers that go out for peer review (and increase the share of them that get published). And it’s only interested in “quantitative description…a mode of social-scientific inquiry [that] can be applied to any substantive domain.”

Also, it’s all open access, and it doesn’t require a publishing fee (at least not now).

JQD:DM is interesting both as a concept and as a container for worthwhile work. The first issue, out today, is probably packed with more papers I’d be interested in reading than any academic journal has had in a long time. A few of the highlights:

Cracking Open the News Feed: Exploring What U.S. Facebook Users See and Share with Large-Scale Platform Data, by Andy Guess, Kevin Aslett, Joshua Tucker, Richard Bonneau, and Jonathan Nagler

In this study, we analyze for the first time newly available engagement data covering millions of web links shared on Facebook to describe how and by which categories of U.S. users different types of news are seen and shared on the platform. We focus on articles from low-credibility news publishers, credible news sources, purveyors of clickbait, and news specifically about politics, which we identify through a combination of curated lists and supervised classifiers.

Our results support recent findings that more fake news is shared by older users and conservatives and that both viewing and sharing patterns suggest a preference for ideologically congenial misinformation. We also find that fake news articles related to politics are more popular among older Americans than other types, while the youngest users share relatively more articles with clickbait headlines.

Across the platform, however, articles from credible news sources are shared over 5 times more often and viewed over 7 times more often than articles from low-credibility sources. These findings offer important context for researchers studying the spread and consumption of information — including misinformation — on social media.
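To give a sense of what this kind of quantitative description looks like in practice, here is a minimal sketch in Python/pandas of the tabulation the abstract describes: totaling views and shares by publisher category and comparing credible to low-credibility sources. The column names and numbers are invented for illustration; this is not the authors’ code or data.

```python
import pandas as pd

# Hypothetical engagement data: one row per shared URL.
# Column names and values are illustrative assumptions, not the study's dataset.
links = pd.DataFrame({
    "credibility": ["credible", "credible", "low", "low", "clickbait"],
    "views":       [120_000, 80_000, 15_000, 9_000, 30_000],
    "shares":      [4_000, 2_500, 700, 500, 1_200],
})

# Descriptive summary: total views and shares by publisher category.
totals = links.groupby("credibility")[["views", "shares"]].sum()

# Ratio of credible to low-credibility engagement (the "5x shared / 7x viewed" style finding).
view_ratio = totals.loc["credible", "views"] / totals.loc["low", "views"]
share_ratio = totals.loc["credible", "shares"] / totals.loc["low", "shares"]
print(totals)
print(f"views ratio: {view_ratio:.1f}, shares ratio: {share_ratio:.1f}")
```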

Value for Correction: Documenting Perceptions about Peer Correction of Misinformation on Social Media in the Context of COVID-19, by Leticia Bode and Emily K. Vraga

Although correction is often suggested as a tool against misinformation, and empirical research suggests it can be an effective one, we know little about how people perceive the act of correcting people on social media.

This study measures such perceptions in the context of the onset of the COVID-19 pandemic in 2020, introducing the concept of value for correction. We find that value for correction on social media is relatively strong and widespread, with no differences by partisanship or gender. Neither those who engage in correction themselves nor those witnessing the correction of others have higher value for correction.

Witnessing correction, on the other hand, is associated with lower concerns about negative consequences of correction, whereas engaging in correction is not.
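A claim like “no differences by partisanship or gender” is ultimately a statement about group averages. Here is a hypothetical sketch of that comparison; the variable names and values are mine, not the authors’.

```python
import pandas as pd

# Hypothetical survey responses: a value-for-correction scale plus demographics.
resp = pd.DataFrame({
    "value_for_correction": [4.2, 4.0, 4.3, 3.9, 4.1, 4.2],
    "partisanship":         ["D", "R", "D", "R", "I", "I"],
    "gender":               ["F", "M", "F", "M", "F", "M"],
})

# Descriptive group means: similar averages across groups would suggest no meaningful differences.
print(resp.groupby("partisanship")["value_for_correction"].mean())
print(resp.groupby("gender")["value_for_correction"].mean())
```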

An Analysis of the Partnership between Retailers and Low-credibility News Publishers, by Lia Bozarth and Ceren Budak

In this paper, we provide a large-scale analysis of the display ad ecosystem that supports low-credibility and traditional news sites, with a particular focus on the relationship between retailers and news producers. We study this relationship from both the retailer and news producer perspectives.

First, focusing on the retailers, our work reveals high-profile retailers that are frequently advertised on low-credibility news sites, including those that are more likely to be advertised on low-credibility news sites than traditional news sites. Additionally, despite high-profile retailers having more resources and incentive to dissociate with low-credibility news publishers, we surprisingly do not observe a strong relationship between retailer popularity and advertising intensity on low-credibility news sites. We also do not observe a significant difference across different market sectors.

Second, turning to the publishers, we characterize how different retailers are contributing to the ad revenue stream of low-credibility news sites. We observe that retailers who are among the top-10K websites on the Internet account for a quarter of all ad traffic on low-credibility news sites.

Nevertheless, we show that low-credibility news sites are already becoming less reliant on popular retailers over time, highlighting the dynamic nature of the low-credibility news ad ecosystem.
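The retailer-side finding (roughly a quarter of ad traffic on low-credibility sites coming from top-10K retailers) is, at heart, a share calculation. A rough sketch, with made-up columns and numbers standing in for the real ad-impression data:

```python
import pandas as pd

# Hypothetical ad-impression records: one row per (retailer, news site) pairing.
# Column names and ranks are illustrative assumptions only.
ads = pd.DataFrame({
    "site_type":     ["low_credibility", "low_credibility", "traditional", "low_credibility"],
    "retailer_rank": [3_500, 250_000, 1_200, 80_000],   # popularity rank of the retailer's site
    "impressions":   [10_000, 4_000, 50_000, 6_000],
})

low_cred = ads[ads["site_type"] == "low_credibility"]
top10k_share = (
    low_cred.loc[low_cred["retailer_rank"] <= 10_000, "impressions"].sum()
    / low_cred["impressions"].sum()
)
print(f"Share of low-credibility ad traffic from top-10K retailers: {top10k_share:.0%}")
```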

Generous Attitudes and Online Participation, by Floor Fiers, Aaron Shaw, and Eszter Hargittai

Some of the most popular websites depend on user-generated content produced and aggregated by unpaid volunteers. Contributing in such ways constitutes a type of generous behavior, as it costs time and energy while benefiting others.

This study examines the relationship between contributions to a variety of online information resources and an experimental measure of generosity, the dictator game. Results suggest that contributors to any type of online content tend to donate more in the dictator game than those who do not contribute at all.

When disaggregating by type of contribution, we find that those who write reviews, upload public videos, write or answer questions, and contribute to encyclopedic collections online are more generous in the dictator game than their non-contributing counterparts. These findings suggest that generous attitudes help to explain variation in contributions to review, question-and-answer, video, and encyclopedic websites.
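Again, the underlying description is simple: compare average dictator-game donations between contributors and non-contributors, by type of contribution. A toy version, with invented column names:

```python
import pandas as pd

# Hypothetical survey responses: dictator-game donation (0-10) plus contribution-type flags.
df = pd.DataFrame({
    "donation":       [5, 2, 7, 0, 4, 6],
    "writes_reviews": [1, 0, 1, 0, 0, 1],
    "uploads_videos": [0, 0, 1, 0, 1, 1],
})

# Descriptive comparison: mean donation for contributors (1) vs. non-contributors (0), by type.
for col in ["writes_reviews", "uploads_videos"]:
    print(col, df.groupby(col)["donation"].mean().to_dict())
```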

Characterizing Online Media on COVID-19 during the Early Months of the Pandemic, by Henry Dambanemuya, Haomin Lin, and Ágnes Horvát

The 2019 coronavirus disease had wide-ranging effects on public health throughout the world. Vital in managing its spread was effective communication about public health guidelines such as social distancing and sheltering in place. Our study provides a descriptive analysis of online information sharing about coronavirus-related topics in 5.2 million English-language news articles, blog posts, and discussion forum entries shared in 197 countries during the early months of the pandemic.

We illustrate potential approaches to analyze the data while emphasizing how often-overlooked dimensions of the online media environment play a crucial role in the observed information-sharing patterns. In particular, we show how the following three dimensions matter: (1) online media posts’ geographic location in relation to local exposure to the virus; (2) the platforms and types of media chosen for discussing various topics; and (3) temporal variations in information-sharing patterns.

Our descriptive analyses of the multimedia data suggest that studies that overlook these crucial aspects of online media may arrive at misleading conclusions about the observed information-sharing patterns. This could impact the success of potential communication strategies devised based on data from online media. Our work has broad implications for the study and design of computational approaches for characterizing large-scale information dissemination during pandemics and beyond.
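The three dimensions the authors emphasize (place, platform, and time) translate into straightforward groupings of post-level data. A hypothetical sketch, with invented columns:

```python
import pandas as pd

# Hypothetical posts: country, platform, topic, and publication date.
posts = pd.DataFrame({
    "country":  ["US", "US", "IN", "GB"],
    "platform": ["news", "blog", "forum", "news"],
    "topic":    ["social distancing", "masks", "social distancing", "sheltering in place"],
    "date":     pd.to_datetime(["2020-03-01", "2020-03-08", "2020-03-08", "2020-04-01"]),
})

# Posts per country and platform (dimensions 1 and 2).
print(posts.groupby(["country", "platform"]).size())

# Weekly posting volume over time (dimension 3).
print(posts.set_index("date").resample("W").size())
```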

Information Seeking Patterns and COVID-19 in the United States, by Bianca Reisdorf, Grant Blank, Johannes Bauer, Shelia Cotten, Craig Robertson, and Megan Knittel

In this paper, we describe how socioeconomic background and political leaning are related to how U.S. residents look for information on COVID-19.

Using representative survey data from 2,280 U.S. internet users, collected in fall 2020, we examine how factors such as age, gender, race, income, education, political leaning, and internet skills are related to how many different types of sources and what types of sources respondents use to find information on COVID-19. Moreover, we describe how many checking actions individuals use to verify information, and how all of these factors are related to knowledge about COVID-19.

Results show that men, those with higher education, higher incomes, and higher self-perceived internet ability, and those who are younger used more types of information sources. Similar patterns emerged for checking actions.

When we examined different types of sources (mainstream media, conservative sources, medical sources, and TV sources), three patterns emerged: 1) respondents who have more resources used more types of sources; 2) demographic factors made less difference for conservative media consumers; and 3) conservative media were the only type of source used less by younger age groups than older age groups.

Finally, availability of resources and types of information sources were related to differences in factual knowledge. Respondents who had fewer resources, those who used conservative news media, and those who engaged in more checking actions got fewer answers right. This difference could lead to information divides and associated knowledge gaps in the United States regarding the coronavirus pandemic.
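Associations like these (source counts, checking actions, and conservative media use versus a knowledge score) can be summarized with a simple linear description, explicitly without any causal claim. A sketch with made-up variable names and toy data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: counts of source types, checking actions, and a knowledge score.
survey = pd.DataFrame({
    "n_source_types":    [2, 5, 3, 6, 1, 4],
    "n_checks":          [1, 4, 2, 5, 0, 3],
    "uses_conservative": [1, 0, 1, 0, 1, 0],
    "knowledge_score":   [4, 8, 5, 9, 3, 7],
})

# A purely descriptive linear summary of the associations (not a causal model).
model = smf.ols(
    "knowledge_score ~ n_source_types + n_checks + uses_conservative", data=survey
).fit()
print(model.params)
```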

All interesting stuff, and there’s more of it.

But it’s also a fun thought experiment to consider what the approach of JQD:DM would look like if it were being used in the world of journalism rather than academia. I’ve always believed that people who want “just the facts” from news outlets wouldn’t actually like it if media companies moved in that direction. (Wanting “just the facts” is often just a cultural signal for conservatism. In 2016, Trump supporters were far more likely than Clinton supporters to say they wanted “just the facts”; a strict allegiance to fact-based reality was not a hallmark of the Trump administration.)

On the other hand, I know there are a thousand things I’d love to write about — that I think would be interesting information that would make the world an ever-so-slightly better place — but which don’t have a particular analytical hook attached to them. “This is some really interesting data I discovered/gathered/generated” is more likely to lead to posting a dataset on GitHub than writing a story for a news site.

I don’t think abandoning analysis and explanation makes any sense for news organizations — especially at a time when subscriber-based business models make the delivery of benefits and service more central to revenue than it was decades ago. But I do wish there were more spaces for “quantitative description” in journalism.

I think of Jeremy Singer-Vine’s email newsletter Data is Plural, which highlights “useful/curious” datasets. I think of The Pudding, which “explains ideas debated in culture with visual essays,” but does just as much to be a platform for compelling quantitative data. And I think of some of The New York Times’ best interactives, like its ridiculously popular 2013 dialect map, which are more like UIs for datasets than “stories” or “explainers.” People like this stuff! Let’s do more of it.

Photo by Mika Baumeister.

Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email (joshua_benton@harvard.edu) or Twitter DM (@jbenton).