The impact of Sarah Maslin Nir’s “The Price of Nice Nails,” an exposé of the abuses of workers at New York nail salons, was immediate and obvious. A few days after the story ran in The New York Times, New York state Governor Andrew Cuomo ordered emergency measures to protect workers. The Times translated the article into multiple languages (down to comments and tweets), and at one point the Korean version was the No. 3 trending story on the Times’ site, with the English-language original in the top spot.

In instances like these, when an article leads to quick legislative change and pulls in unusually high traffic numbers, it’s easy to determine its impact. But most stories don’t elicit that sort of obvious response, and most news organizations don’t have the clout or resources of the Times.
Michael Keller and Brian Abelson, fellows at Columbia’s Tow Center for Digital Journalism, were thinking of those smaller news organizations when they conceived of NewsLynx. The tool aims to help newsrooms, particularly small investigative nonprofits, measure the quantitative and qualitative impact of their stories: What actually happens in the real world after the stories are published? Keller and Abelson presented a report of their work Thursday night at the Columbia Journalism School.
NewsLynx was designed around “two sets of tasks where staff found difficulties”:
1. Managing an event-tracking workflow while juggling other responsibilities.
2. Understanding what it means for a story to “do well” and what happened to cause that.
The workflow-management tool consists mainly of a section called the “Approval River,” which lets users manage content from a variety of outside sources, including Google Alerts, Twitter users, Twitter searches, Facebook pages, and Reddit searches.
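The report doesn’t publish the Approval River’s internals, but the workflow it describes — pooling candidate mentions from many feeds into one queue, then letting a staffer approve or reject each one — can be sketched roughly like this (all names are illustrative, not NewsLynx’s actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Mention:
    source: str        # e.g. "google-alert", "twitter-search", "reddit"
    url: str
    text: str
    status: str = "pending"   # pending -> approved / rejected

@dataclass
class ApprovalRiver:
    queue: list = field(default_factory=list)

    def ingest(self, mention: Mention):
        """Items from any outside feed land in one shared queue."""
        self.queue.append(mention)

    def pending(self):
        return [m for m in self.queue if m.status == "pending"]

    def approve(self, mention: Mention, article_id: str):
        """An approved mention becomes a piece of impact evidence for an article."""
        mention.status = "approved"
        return {"article": article_id, "evidence": mention.url}

river = ApprovalRiver()
river.ingest(Mention("google-alert", "https://example.com/post", "cites the story"))
river.ingest(Mention("reddit", "https://reddit.com/r/news/1", "discussion thread"))
citation = river.approve(river.pending()[0], article_id="nail-salons")
```

The point of the single queue is that a staffer juggling other responsibilities reviews one stream rather than monitoring each source separately.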
A data analysis section helps users make sense of their metrics. Keller and Abelson aimed to “make it navigable for the average newsroom user and give context to numbers and events wherever possible”:
When it came to measuring impact beyond metrics, Keller and Abelson hoped to come up with some sort of taxonomy, “shared terms that could make sense for multiple organizations.” That was a challenge, because “in the news context, we face constantly shifting content types as well as desired outcomes that differ on a per-project basis. In developing our framework, instead of implementing a strict taxonomy, we intentionally left the question of what constitutes an impactful event up to the discretion of the newsroom.” They came up with a model organized around “impact tags,” categories, and levels. (This model was influenced by Chalkbeat’s MORI system and the Center for Investigative Reporting.) Here’s an example of a sample configuration:
We organized our data and presentation at the article level, which is often not the case in platforms such as Google Analytics. We also labeled our graphs and data visualizations with sentences and questions, such as “who’s sending readers here?” instead of more ascetic labels like “traffic-referrers”… We also do our own novel data collection to view article performance in the context of newsrooms’ promotional activities. For instance, we collect when a given article appeared on a site’s home page, when any main or staff Twitter accounts tweeted it out, and when it was published to the organization’s Facebook page or pages.
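That promotional-activity collection amounts to a timestamped log of what the newsroom did and when, which can then annotate a traffic chart. A minimal sketch of such a log, with invented channel names and dates (not NewsLynx’s actual schema):

```python
from datetime import datetime, timezone

# Hypothetical log of promotion events: each record says when an article
# was pushed on a given channel, so article performance can be viewed
# alongside the newsroom's own promotional activity.
promotions = []

def log_promotion(article_id, channel, when):
    """channel: e.g. 'homepage', 'twitter-main', 'facebook-page'."""
    promotions.append({"article": article_id, "channel": channel, "at": when})

def promotions_for(article_id):
    """All promotion events for one article, in chronological order."""
    return sorted(
        (p for p in promotions if p["article"] == article_id),
        key=lambda p: p["at"],
    )

t = datetime(2014, 10, 6, 9, 0, tzinfo=timezone.utc)
log_promotion("nail-salons", "twitter-main", t.replace(hour=12))
log_promotion("nail-salons", "homepage", t)
timeline = promotions_for("nail-salons")
```

Sorting by timestamp rather than insertion order matters because the feeds that report these events (home page scrapes, Twitter, Facebook) won’t arrive in sync.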
A “citation” is evidence that someone talked about or discussed the article; “change” is, well, change; “achievement” includes “articles that win an award, see record traffic, or are cited more than any other story”; “Other” is included “to maintain the spirit that NewsLynx is an open research platform and the framework is open to evolution.” Levels, an idea borrowed from CIR, reflects the notion that “an impactful event can happen at different scales…the most novel of these levels is Internal, which recognizes that projects can shift organizational priorities or become models for future work.”
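A newsroom-defined configuration along these lines — tags mapped onto the four categories, with each recorded event assigned a level — might be sketched as follows. The tag names and every level name except “internal” are invented here for illustration; this is not NewsLynx’s actual schema:

```python
# Hypothetical impact configuration in the spirit of the report's model.
# The four categories come from NewsLynx's framework; "internal" is the
# level the report names, and the other levels are placeholders.
CATEGORIES = {"citation", "change", "achievement", "other"}
LEVELS = ["internal", "community", "institutional"]

IMPACT_TAGS = {
    "cited-by-another-outlet": "citation",
    "law-or-policy-change":    "change",
    "award-won":               "achievement",
    "record-traffic":          "achievement",
    "shaped-future-coverage":  "other",   # internal shifts in priorities
}

def record_event(article_id, tag, level):
    """Attach an impact event to an article, validated against the config."""
    if tag not in IMPACT_TAGS or level not in LEVELS:
        raise ValueError("unknown tag or level")
    return {
        "article": article_id,
        "tag": tag,
        "category": IMPACT_TAGS[tag],
        "level": level,
    }

event = record_event("nail-salons", "law-or-policy-change", "institutional")
```

Leaving the tag list to the newsroom, while keeping the categories and levels fixed, is what lets the framework stay comparable across organizations without imposing a strict taxonomy.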
Keller and Abelson stress that an institution has to define its goals before it can begin to measure its impact in a meaningful way. One reason they chose to focus initially on small investigative nonprofits was that “such newsrooms often look to grants or benefactors for funding, and these outside groups often require reports outlining how the organizations have used their money,” making impact measurement already a somewhat familiar concept. Newsrooms that Keller and Abelson spoke with didn’t feel well served by existing measurement tools like Google Analytics and Chartbeat. “Google Analytics feels both too complicated and not powerful enough for the questions we want to answer about readers,” one organization told them. “It doesn’t help that Google, Facebook, Twitter, Quantcast, comScore, and anything else we’ve used never agree on anything.” Over and over, Keller and Abelson heard similar sentiments:
By “not powerful enough,” users mean that they don’t help answer sophisticated questions that could bolster arguments around, for example, content strategy. Should a radio station continue putting resources into text versions of their stories for the web? Are people not scrolling all the way down the page because the headline and first three grafs were succinctly written and the reader “got it” or because the story wasn’t interesting? Or, is the website’s design — not the journalism — contributing to a high bounce rate?
Keller and Abelson spent four months developing NewsLynx, then launched it in October 2014 with about six newsrooms. They couldn’t launch it at every interested organization; often a news organization “didn’t themselves know all the pieces of their impact puzzle,” for two reasons: “First, immature analytics market offerings and standards for their publishing platforms and secondarily, a lack of internal consensus on what should be measured and how.” Existing analytics systems like Google’s still focus primarily on text-based articles, making them less useful for “broadcast or combined digital and broadcast organizations.” The second reason goes back to the fact that many news organizations haven’t defined their goals or decided who’s responsible for producing impact reports.
In the report’s conclusion, the authors warn against “tool-wishing” — “phrases that start with ‘if only we just had a platform to do X.'” This “can be a blinder for the real hurdles at play. No tool, no matter how well designed or implemented, can tell a news organization what impact is or should be.”
The Times’ Maslin Nir, speaking at the event Thursday night, stressed the dangers of “deciding what our stories are going to be based on projected clickable outcomes. You have to fight the mental leaderboard in order to tell the story that needs to be told.”
“The newsrooms that got the most out of NewsLynx,” Keller and Abelson write, “were those that had already started with ‘notes in a spreadsheet’ and previously worked through the harder problems of what they care about. In the end, computers are better, faster, and (sometimes) more reliable notebooks; but, just as is true in the physical world, fancy pens can’t make a writer tell a good story.”
NewsLynx is open source and its founders plan to keep it that way. “If you’re trusting your metrics to a black-box company, you never know where those metrics are actually coming from,” Abelson said, stressing the importance of understanding “how the metrics sausage is made.” The code is available now on GitHub and will be officially introduced at the SRCCON conference in Minneapolis later this month. There, “if all goes well, we’ll help technically inclined audience members actually deploy an instance of NewsLynx.”