Editor’s note: Longtime Nieman Lab readers know the bylines of Mark Coddington and Seth Lewis. Mark wrote the weekly This Week in Review column for us from 2010 to 2014; Seth’s written for us off and on since 2010. Together they’ve launched a monthly newsletter on recent academic research around journalism. It’s called RQ1 and we’re happy to bring each issue to you here at Nieman Lab.
It’s become increasingly clear that social media platforms aren’t particularly habitable environments for news — or at least not as habitable as they used to be, or as news organizations once thought. In some cases, the shift away from news has been distinct and explicit, as with the changes to Facebook’s and Instagram’s algorithms, or Elon Musk’s hostility to the news media on Twitter. In other cases, the indifference to news isn’t trumpeted, but it’s just as real, as recent research on TikTok has shown.
YouTube is a notable case within this context. It’s one of the most significant platforms for news consumption globally, with 20% of adults using it regularly for news, according to one international study. Its algorithm has displayed some concerning (though not conclusive) tendencies to surface a disproportionate share of extremist or conspiratorial videos. But there’s been little definitive indication of whether (or how) its algorithm steers users toward or away from news in general.
In a new study in Political Communication, Shengchun Huang and Tian Yang have given us the first large-scale direct data on this question. As Huang and Yang explain, there are broadly two ways YouTube’s algorithm could redirect users away from news: 1) A “topical filter bubble,” in which you watch entertainment videos and keep getting recommended more entertainment videos; and 2) “algorithmic redirection,” in which the algorithm does the opposite and recommends something different from what you’re watching — say, an entertainment video after you’ve finished watching a news video. For all the concern about social media filter bubbles (and the evidence of their strength is mixed), algorithmic redirection could be a way to counter them by allowing people to unexpectedly encounter news — if the algorithm will recommend news to people who haven’t been watching it.
But on YouTube, it turns out, both pathways tend to lead people away from news. Huang and Yang used a data set of 1.7 million of YouTube’s “Up Next” recommended videos in 2019, using automated incognito browsing to eliminate any individual watch histories. They used network analysis, mathematical modeling, and Markov chains to determine the likelihood of news videos being recommended versus other topical categories.
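(To make the Markov-chain method concrete: below is a minimal sketch of how a topic-to-topic transition matrix might be estimated from recommendation data. The topic pairs and variable names are hypothetical illustrations on our part, not the authors’ actual pipeline.)

```python
from collections import Counter, defaultdict

# Hypothetical (current topic, recommended topic) pairs, standing in for
# the topic labels of watched videos and their "Up Next" recommendations.
pairs = [
    ("news", "entertainment"),
    ("news", "news"),
    ("entertainment", "entertainment"),
    ("entertainment", "entertainment"),
    ("entertainment", "cars"),
    ("cars", "cars"),
]

# Count topic-to-topic transitions, then normalize each row so it sums to 1,
# yielding a Markov transition matrix P(next topic | current topic).
counts = defaultdict(Counter)
for current, recommended in pairs:
    counts[current][recommended] += 1

transition = {
    topic: {nxt: n / sum(row.values()) for nxt, n in row.items()}
    for topic, row in counts.items()
}

# Diagonal entries capture "stickiness" (the topical filter bubble effect);
# off-diagonal mass captures algorithmic redirection toward other topics.
for topic, row in transition.items():
    print(topic, "->", row)
```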
They found that the topical filter bubble effect was stronger for most types of entertainment videos than for news (the stickiest topic in YouTube’s recommendations: cars), and that algorithmic redirection worked much more in entertainment videos’ favor, too. In other words, if you watch an entertainment video, you’re far more likely to be recommended the same genre of video than if you watch a news video.
The result is that as a user, you might start out watching news, but you’re likely to see more and more entertainment videos pop up as recommendations until you eventually watch one of them instead. On average, the researchers wrote, an entertainment video was three times more likely to be recommended than a news video, “indicating that no matter what users start with on YouTube, they are more likely to end up watching entertainment than news videos.”
Of course, recommendation algorithms don’t determine what people watch by themselves. Users can choose which of several recommended videos to click on, or what to type into the search bar, or whether to stop watching entirely. But Huang and Yang’s study isolates the influence of YouTube’s recommendation algorithm itself.
And the picture likely doesn’t get more encouraging once you account for those human factors. Huang and Yang note that this kind of bias toward entertainment videos is, of course, rooted in an economic logic for the platform built around increasing engagement. And though their study didn’t directly address this, it’s quite likely that this logic is in turn rooted in human behavior: People simply aren’t as interested in news on social media as we would like to hope. But as this study and others like it on other platforms have shown, YouTube’s algorithm is designed to put a fairly heavy thumb on the scale against news.
“News participation is declining: Evidence from 46 countries between 2015 and 2022.” By Sacha Altay, Richard Fletcher, and Rasmus Kleis Nielsen, in New Media & Society. Lo these two decades ago, the emergence of Web 2.0 platforms (like YouTube, above) heralded an era when people could engage with online information — including news-y information — in a whole host of new and exciting ways: seemingly frictionless sharing, a newfangled thing called “liking” to signal recommendation, and the ability to add user comments to news stories. What wasn’t there to, well, like?
We don’t have to tell you how that story turned out, with social media contributing to a whole host of ills that need no recounting here. And yet an important question remains: If people engaging with news is, in aggregate, a rather good thing, contributing to collective knowledge about public affairs, then what is happening with such participation in a current media moment still dominated by digital platforms?
Altay and colleagues offer an important answer based on a massive dataset: surveys from 2015 to 2022 across 46 countries, capturing responses from nearly 600,000 people. Unlike other studies suggesting that digital media have broadened news engagement, this research tells a different story, finding an overall 12% decline in participation. Plus, the proportion of respondents not participating in news at all increased by 19% during the same period. “This decrease is observed in most countries and for most forms of participation, including liking, sharing, commenting on news on social media and talking about the news offline,” the authors note.
Some types of news participation fell substantially between 2015 and 2022. For example, sharing news on social media dropped by 29%, commenting on news by 26%, and face-to-face discussions about news by 24%. Those are staggering declines in just seven years. Conversely, the only form of participation that has consistently gone up during that time? Sharing news through private messaging apps like WhatsApp, which has increased by 20%.
What’s going on? The authors speculate that the decline in news participation may be due to weakening “opportunity structures.” For example, many news websites have restricted online commenting, naturally leading to fewer comments on these sites. And some social media platforms like Facebook have deprioritized news content in favor of posts from peers, celebrities, and influencers, which translates to less engagement with news.
And yet, the authors are quick to note, that doesn’t explain why people seem to be talking less about the news in face-to-face settings, or why they share fewer news items via email. “An additional reason could be a general sense of fatigue around social interactions about news, either because the political climate is becoming increasingly hostile (the political polarization hypothesis), or because the news is increasingly negative and expected to bring down one’s mood.”
“Online newspaper subscriptions: Using machine learning to reduce and understand customer churn.” By Lúcia Madeira Belchior, Nuno António, and Elizabeth Fernandes, in Journal of Media Business Studies. As newspapers went online, they initially chased advertising revenue, a natural extension of where the bulk of their print revenues had come from. But when that business model soured online (all those ad blockers didn’t help), they shifted to pursuing reader revenue — and thus most newspapers now have paywalls and subscriptions. But such news sites can also have a high rate of churn (it’s part of the subscription fatigue that many of us feel), with one recent analysis warning that U.S. newspapers are seeing an “ominous drop in reader retention.”
What should publishers do to reduce churn? This study took up that question by analyzing data from PÚBLICO, a leading Portuguese newspaper with national reach, a trusted brand, and a robust level of digital subscriptions. The study sought to identify subscribers likely to churn and understand the primary factors driving loyalty, retention, and churn, with the goal of helping marketing teams to better retain at-risk subscribers. Two machine-learning models were built and tested, one involving all subscriptions and the other containing only non-recurring subscriptions. Models were evaluated using A/B tests in different periods, applying different retention strategies.
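To give a rough sense of what such a model involves, here’s a minimal sketch of a churn classifier built with scikit-learn. The features and toy data are hypothetical stand-ins, not PÚBLICO’s actual variables or models:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical subscriber table: columns are illustrative stand-ins for
# the kinds of features churn models typically draw on.
df = pd.DataFrame({
    "tenure_months":     [3, 24, 1, 12, 36, 2, 18, 6],
    "articles_per_week": [1, 14, 0, 5, 20, 2, 9, 3],
    "is_recurring":      [0, 1, 0, 1, 1, 0, 1, 0],
    "support_tickets":   [2, 0, 3, 1, 0, 2, 0, 1],
    "churned":           [1, 0, 1, 0, 0, 1, 0, 1],  # label: did they cancel?
})

X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# Fit a simple classifier, then score held-out subscribers by churn risk;
# a marketing team could target the highest-risk names with phone calls.
model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", roc_auc_score(y_test, risk))
```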
Some key takeaways: Calling customers who are likely to cancel their subscriptions is considerably more effective in keeping them than sending a newsletter or doing nothing. Personalized phone calls significantly increase the chances of subscribers renewing — though obviously that requires more human resources to accomplish. By using churn prediction models, publishers can identify subscribers at high risk of canceling and take targeted actions to retain them, saving money and improving retention.
Also, although product usage wasn’t the most critical churn predictor, higher usage correlates with lower churn, which reinforces the need to publish engaging content that maintains reader interest. Additionally, the authors warned that over-covering a topic can lead to “news fatigue,” causing people to avoid news; newspapers should keep this in mind to prevent high churn rates.
Ultimately, the study suggests that A/B testing can help newspapers evaluate the effectiveness of retention strategies, which drives home the importance of maintaining up-to-date subscriber records for accurate analysis and better decision-making.
“Audience evaluations of news videos made with various levels of automation: A population-based survey experiment.” By Neil Thurman, Sally Stares, and Michael Koliska, in Journalism. Automated journalism has been around for quite a while now, at least since people began fretting about so-called “robot journalism” in the early 2010s. But what had once been mostly at the margins of journalism — say, some automated AP stories here and there — is now moving closer and closer to the center.
As this study puts it plainly, “The use of automation in journalism is encroaching more and more on what many would consider to be journalists’ core professional practices, such as the identification of story leads, verification, and decisions about which stories are shown, and with what prominence.” And it’s not just happening with text. Increasingly, news organizations such as the BBC, Reuters, and The Economist are producing automated videos with the help of companies such as Wibbitz, Wochit, and Synthesia.
While there has been a lot of research about automated stories in text form, there has been little work examining automated news videos. Thurman and colleagues address that with an online survey experiment that explores how a socio-demographically representative sample of 4,200 online news consumers in the U.K. perceived “human-made, partly automated, and highly automated short-form online news videos” on 14 story topics.
They found that, on average, human-made videos were rated more favorably on certain evaluation criteria, but the differences were not substantial. And journalists should take note: partly automated news videos, ones in which automation was followed by human editing, were well received by audiences.
The researchers pointed to four key characteristics for consideration: “(1) matching videos’ textual content—in our case the captions—to its visual context and the importance of the (2) relevance, (3) quality, and (4) variety of the images included in the videos.” They found that automated videos were rated significantly worse than the fully human-made ones in these four areas. And yet they also found that post-editing of automated videos seemed to resolve at least one of those issues — suggesting that a blend of automation on the front end and human editing on the back end could be worth exploring further.
“What news is shared where and how: A multi-platform analysis of news shared during the 2022 U.S. Midterm elections.” By Christine Sowa Lepird, Lynnette Hui Xian Ng, Anna Wu, and Kathleen M. Carley, in Social Media + Society. Earlier we talked about the question of news participation on social media: Even if news engagement (likes, shares, comments, etc.) seems to be declining overall, social media remain a vital venue for the sharing and discussion of news. But news is no single thing, and what gets shared under the banner of “news” can take at least four forms, as this study suggests: (1) Real News, which refers to credible news; (2) Local News, which is reliable news aimed at specific geographic areas; (3) Low Credibility News, which consists of misleading content; and (4) Pink Slime News, which is low-credibility, often partisan fare masquerading as a local publication in a bid to gain relevance and attention from potential readers.
The question becomes: Which of those four types of news gets shared most often on social media, and with what perceived impact?
The researchers analyzed more than 1.3 million posts across three social platforms (Facebook, Twitter, Reddit) connected to the 2022 U.S. midterms, looking at a story’s engagement (“defined as the ratio of number of likes a post has vs. the number of followers of the page”).
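As a quick illustration of that engagement measure, here’s a minimal sketch; the posts and figures are invented, but the ratio follows the paper’s definition:

```python
import pandas as pd

# Invented posts; "engagement" follows the paper's definition:
# likes on a post divided by the follower count of the page that posted it.
posts = pd.DataFrame({
    "news_type": ["Real News", "Pink Slime", "Local News", "Pink Slime"],
    "likes":     [120, 510, 45, 300],
    "followers": [50_000, 9_500, 12_000, 8_000],
})
posts["engagement"] = posts["likes"] / posts["followers"]

# Average relative engagement by news category.
print(posts.groupby("news_type")["engagement"].mean().sort_values())
```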
Real News, they found, received the least engagement on a relative basis, whereas Pink Slime received the most. And they discovered that bot accounts tend to share a larger proportion of Low Credibility News and Pink Slime than human users, who generally shared more content relating to local communities.
Among other things, the authors note some platform-specific differences: “Our results show that across platforms, Reddit leads with the highest proportion of Real News, while Facebook leads with the highest proportion of Local News sites, and Twitter has the highest proportion of Pink Slime and Low Credibility News sites.”
“Alternative media vary between mild distortion and extreme misinformation: Steps toward a typology.” By Anna Staender, Edda Humprecht, and Frank Esser, in Digital Journalism.
“Alternative epistemologies as distinguishing features of right-wing and left-wing media in the United States.” By Mark Coddington and Logan Molyneux, in Digital Journalism. We close by looking at two studies about alternative media, which are generally defined as media that situate themselves in opposition to mainstream media — like a Breitbart or Gateway Pundit kind of site. While the exact categorization of “alternative” can be a bit murky, the questions such news outlets pose are significant and enduring: What does “knowledge” look like for alternative media? How often do alternative media contribute, let’s say, to the stew of misinformation online? And how do they handle things like sourcing and attribution to verify their claims in journalistic ways?
The first study, by Staender and colleagues, involved examining 1,661 Facebook posts from 25 popular alternative media outlets during the pandemic in five countries (France, Germany, Switzerland, the U.K., and the U.S.). The research team categorized everything from “mild misleading content” to “blatant misinformation.”
They were able to identify four types of reporting in alternative media: “light distortion,” “heavy distortion,” “ideological misinformation,” and “extreme misinformation.” The first two (light and heavy distortion) were most apparent in content produced by popular alternative media sources, while the latter two had smaller but engaged audiences.
Their main conclusion: “The alternative media that are more successful among Facebook users…overwhelmingly choose not to adopt an editorial profile that focuses on extreme misinformation” — which, the authors suggest, “may be the basis of their success.”
The second study, which was led by Mark, looked at differences between mainstream and alternative media when it comes to “the structure of knowledge in news texts,” or the evidence provided to substantiate fact-oriented claims made in the text, and how that evidence is referenced. Specifically, the researchers wanted to know whether the forms of evidence included (or left out) in news texts seemed to hint at different knowledge-making practices used by right-wing and left-wing alternative media.
The resulting analysis of thousands of sources across nearly 600 news stories in mainstream and alternative media suggests that, yes, there are key differences both in the types of evidence these publications provide and in the ways they put that evidence before their readers.
Overall, they find, news stories in alternative media offer weaker evidence compared with their mainstream competitors. “Alternative media present fewer sources on average and rely more often on unattributed assertions. The source count is particularly low in right-wing alternative media,” they wrote. What’s more, alternative outlets tend to be more “distant” — that is, further away in the chain of firsthand, secondhand, or thirdhand information — from the sources they do offer, which might speak to their lack of access to news sources compared to mainstream counterparts.
Additionally, when comparing left- and right-wing alternative media on their evidentiary practices, they found that the “amount of evidence in right-wing alternative outlets is substantially lower than in either left-wing alternative or mainstream media,” which could support the idea that conservative alternative media rely less on “evidence-based forms of knowing.”