Aug. 10, 2018, 9:24 a.m.
Reporting & Production

An analysis of 16,000 stories, across 100 U.S. communities, finds very little actual local news

“Sometimes a story was literally just a YouTube video that they were linking to.”

We know that local journalism is suffering. We talk about news deserts and shuttering newspapers. But research has tended to focus on individual communities or, more broadly, on certain types of journalism outlets and coverage of certain topics.

But what do the problems for local news look like on a broader level? Researchers from the News Measures Research Project at Duke analyzed more than 16,000 news stories across 100 U.S. communities with populations ranging from 20,000 to 300,000 people. (U.S. Census data identifies 493 such communities; the researchers chose a random sample of 100.) What they found isn’t promising:

— Only about 17 percent of the news stories provided to a community are truly local (that is, actually about or having taken place within the municipality).

— Less than half (43 percent) of the news stories provided to a community by local media outlets are original (i.e., are produced by the local media outlet).

— Just over half (56 percent) of the news stories provided to a community by local media outlets address a critical information need [categories such as “emergencies and risks,” “education,” “civic information,” etc.].

I spoke with Philip Napoli, public policy professor at Duke and the paper’s lead author, about the findings. (Other authors of “Assessing Local Journalism: News Deserts, Journalism Divides, and the Determinants of the Robustness of Local News” are Matthew Weber, Katie McCollough, and Qun Wang.) Our conversation, lightly edited and condensed for clarity, is below.

Laura Hazard Owen: Which of the findings were most surprising to you?

Philip Napoli: Because no one had tried this methodology before, I didn’t have strong expectations. But what probably surprised me the most was the low level of stories we found that fit our criteria for “local.” [From the paper: “We opted for a strict geographic definition of community, where we identified an item as about the community only if the subject was an issue/event oriented around the specific municipality.”]

Owen: Fifty-six percent of the stories you looked at addressed what you defined as a critical information need. What made up the other 44 percent?

Napoli: There’s a lot of reporting on Jay-Z’s latest tweet, for example. One thing we found was that even at the local media outlet level, Twitter and YouTube are fairly easy go-to sources of news. Celebrity news and information would come from various local news channels. The local reporters might write something up, or sometimes a story was literally just a YouTube video that they were linking to.

We also excluded general sports scores and things like that from our critical information needs.

Owen: You found that a community being a county seat did not correlate with an increase in local reporting.

Napoli: We went in with the assumption that if a community is also a seat for the county government, that might be the kind of thing that would lead to a greater commitment of journalistic resources or reporting, given that there’s more government activity happening there. But we didn’t find any relationship in terms of the overall number of stories or in terms of the number of stories that were original, local, or addressed a critical information need. That presence of another layer of government didn’t seem to impact any of our measures of the robustness of local journalism. I’d thought it might.

Owen: Of the 100 communities you looked at, eight generated no news stories at all. Four of those eight had no media outlets at all; they were, as you write in the paper, “news deserts in the most extreme sense.” Were there any similarities between those communities?

Napoli: We haven’t really dug into that yet, but they tended to be among the smaller ones in the sample. We didn’t go any smaller than [population of] 20,000, though, so these are not tiny communities.

The full paper is here.

Photo of newspaper in a driveway by Cory Brown used under a Creative Commons license.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).