Chuck Lewis didn’t mean to become the Yoda of nonprofit journalism; it just sort of happened that way. He was a reporter for decades before founding his first nonprofit, the Center for Public Integrity, in 1989, and later the Investigative Reporting Workshop at American University. Now he’s trying to answer a couple of fundamental questions that will influence the future of nonprofit news: How can you judge whether a nonprofit outlet is having a positive impact? And who should do the judging?
The past decade has seen an explosion in nonprofit news organizations. The Investigative News Network, which Lewis helped found in 2009, has more than 80 members nationwide. Pew’s Project for Excellence in Journalism counts 172 nonprofit outlets in the U.S. Revenues in the for-profit world have continued to decrease, newsrooms have continued to shrink, and the coverage gap has continued to grow.
Lewis recently coauthored a paper with Hilary Niles titled “The art, science and mystery of nonprofit news assessment.” The idea was to figure out how nonprofit news organizations are measuring and reporting the impact of their work — an important question, considering that without proof of impact, funders are more likely to close their pocketbooks.
“We had never done any study about the dynamic between grantor and grantee,” says Lewis. The aim of the study, which was itself funded by the McCormick Foundation, was to figure out the discrepancies between how each understands and measures impact, and outline what their different roles should be in the funding process.
Lewis said that the funders he knows are overwhelmed when it comes to deciding how best to keep nonprofit journalism afloat. Most are very large institutions with a system in place that for many years was meant to fund projects and organizations around issues — peace, communities, democracy, poverty, health, children’s issues, the arts. But funding journalism is a different sort of project.
“I was surprised, and I wanted to make sure that other people were aware…that different media were being conflated and compared to one another,” says Niles. “The impact that different types of media organizations and journalistic organizations have can both be profound, but are not necessarily going to look alike.”
In the report, Niles and Lewis do a thorough job detailing the challenges that can make measurement difficult, from inaccurate metrics around reach to trouble defining what investigative news even is. “Some of the foundations equate reach with impact,” says Lewis. “If it reaches a lot of eyeballs, that’s what should get a lot of money. Well, unfortunately, that leaves out about 80 nonprofits doing journalism.”
Impact isn’t just hard to measure: doing it wrong, Niles and Lewis argue, can be detrimental to the journalism being done. Which is why it’s so important to build a more useful framework for separating, in Lewis’s phrasing, the good journalism from the great journalism. Unfortunately, he says, while “it’s important to do from a management standpoint, to know what you’re doing well and understanding why this one worked and this one didn’t…for most journalists, that’s not their first inclination.”
Luckily, overtaxed nonprofit newsroom managers have Lewis and Niles to do some of the legwork for them. One of the areas the paper identifies as a problem for nonprofit news funding is the calendar of giving. Reporting with historic impact (think Watergate, or the takedown of Joseph McCarthy) can take years. “Did all of those stories have good metrics? The answer is no,” Lewis says. They also address this question directly in the paper:
A related point made by ProPublica’s Richard Tofel is that of allowing an extended period of time over which to assess the impact of journalism. Given the pace of public policy toward which much investigative work is geared, Tofel has found it worthwhile to continue following the long tail of impact even after his organization’s projects are complete. Sometimes it is years before the definitive impact can be identified.
Another, more insidious concern is the potential for conflating advocacy and journalism when the extant systems for funding and measuring success are one and the same. “There is a little bit of a headlong rush with impact, and I think there has to be a certain discretion and understanding of the complexities involved,” says Lewis. “You can’t just apply one standard that maybe has been used substantially in one area in philanthropy and automatically think it works for journalism. And I think, deep down, foundation folks would admit that.”
By the ethical guidelines of most news nonprofits, reporters shouldn’t get into a story thinking the end goal is to change a specific piece of legislation or end a specific political career, says Lewis. But the success of nonprofit newsrooms is being evaluated against the same standards as advocacy groups, for whom very specific and intentional policy changes are the goal. In other words, the impact of nonprofit newsrooms is being judged on a metric of success that is at odds with the basic tenets of journalism.
“Most of the conversation to date has been led by the grantors, but they’re not the ones doing the work,” says Lewis. “They’re funding it, they’re collaborating on it, but the ones who are producing it are pretty deep in the trench. It would be useful for there to be a more common understanding of what are some of the guidepost elements.”
Lewis and Niles agree that a standard is something foundations and news organizations are going to have to work together to come up with. To that end, they argue that nonprofits have to be disciplined about tracking their accomplishments, both anecdotally and quantitatively. When a story is reprinted or reporting is republished, when a law or policy is changed, that information should be stored as data.
“They have to make their own case for funding,” says Niles. “It’s not going to be a formulaic allocation of the available funds based on need, it’s going to be up to how well the nonprofits are able to make their cases to the foundations.” They write:
Count not just the number of articles or videos you publish, but also the ways your publication reverberates in your community. Do other newsrooms or newsletters pick up your bylines, or push forward the news that you break? Make note of it. Do legislators or policymakers propose changes based on the revelations you report? Follow those trajectories. Such common sense suggestions, after all, are part and parcel of reporting. Don’t take that skill set for granted. Start a master document — either for the whole newsroom or for each big story — and simply keep track of where the stories lead. In time, you’ll devise a system appropriate to your unique newsroom’s pace and style.
(The study highlights the Wisconsin Center for Investigative Journalism’s Lauren Hasler, who maps the “ripple effect” of the work being done at the Center and presented her approach and best practices at a recent INN conference.)
Other solutions have surfaced in the newsroom as well. “One of the strategies for broadening public reach is to partner with the large, empty, cavernous, commercial newsrooms that need content,” says Lewis. “What the scrappy nonprofit needs is eyeballs — the new ones at least. So some of this is evolving organically irrespective of these various studies.”
That said, considering the increasing complications of partnerships like this one, and experiments like the Ford Foundation’s much-talked-about grants to private media companies, Niles and Lewis don’t recommend that the burden of measurement rest solely with the news organizations. Says Niles: “I think it’s also worth noting for the foundations to recognize on their part, too, that the journalism organizations are primarily in the business of producing the journalism, and don’t necessarily have all the sophisticated skills or have all the available resources to measure their impact.”
Some foundations are already taking some steps in the right direction, she continues. For example, some have become more interested in tracking the accomplishments of grantees in progress, rather than reviewing only a final report, allowing for greater flexibility and more data.
Flexibility and an incremental approach are important to Lewis and Niles’ strategies around impact. “The idea of fixing a single standard that everyone’s going to adhere to — talk about herding cats,” Lewis says. “That’s not even fair to cats.”
But they do offer some ideas for a loose framework for measurement. Lewis was part of the creation of INN’s standard for transparency, which is an official prerequisite for membership in the network. If finding common ground around funding was possible, it stands to reason that some shared goals for impact might be attainable. Says INN director Kevin Davis, in an email:
Impact measurement is key to demonstrating value to society and a community. By coming up with a set of standards (even standards that approximate or can be used as barometers of impact) in the near future, we believe that it will increase the ability of our member organizations to continue to grow and become an even more important part of the information fabric of their communities.
Central to those guidelines as explicated by Lewis and Niles — and in some ways in keeping with the concerns of the legacy generation of journalists — is the idea that data has its limitations. They cite a 2009 study by consultants at FSG that “cautions against attempting to rely on metrics to measure social value; that is better left for qualitative assessments and knowledgeable human interpretations of the results.” Their recommendations also highlight the growth of niche news, and the idea that reporting produced for a smaller group of individuals can still be powerful and meaningful. “In the case of investigative journalism,” they write, “a targeted rather than broad reach often is the catalyst for impact.”
They also weigh pedagogy, asking whether what the audience learns from reportage might be more important than how many of them there are. (Many news nonprofits have an explicitly educational mission, not least because of IRS rules favoring such missions over completely journalistic ones.) Funding timelines might also benefit from changes — with more frequent check-ins over the lifetime of a grant, there would be more opportunities for altering outreach and publishing strategies productively.
These guidelines are a useful step toward creating a framework around impact, and are indicative of the role that networks like the INN can play in an increasingly atomized news environment. But while the discussion is progress for nonprofits, when compared to the metrics-obsessed commercial media successes of recent history — Gawker, BuzzFeed — one wonders whether questions of reach, packaging, and data analytics might not deserve greater primacy.
Still, in the immediate sphere, one in which many of these small newsrooms are being run by people with little experience running an independent organization, guidelines for how to prove the worth of that work are an essential lifeline. With their paper, Lewis and Niles are advancing a conversation that will hopefully allow newsrooms and foundations to communicate openly and honestly about their expectations and goals. After all, at the core, they share an interest in honoring the reader by providing journalism as a public service.
“It’s not unreasonable for both grantors and grantees to have a concern, the journalist and the funder. For them to all be conscientious and earnest about trying to have the biggest effect of their journalism in the world — it’s not an unreasonable impulse,” says Lewis. “But the question is, how do you measure it?”
Image by Phil Jern used under a Creative Commons license.