
Metrics, metrics everywhere: How do we measure the impact of journalism?

We need to get beyond counting pageviews and ad impressions and build better ways of judging how our work changes the world around us.

If democracy would be poorer without journalism, then journalism must have some effect. Can we measure those effects in some way? While most news organizations already watch the numbers that translate into money (such as audience size and pageviews), the profession is just beginning to consider metrics for the real value of its work.

That’s why the recent announcement of a Knight-Mozilla Fellowship at The New York Times on “finding the right metric for news” is an exciting moment. A major newsroom is publicly asking the question: How do we measure the impact of our work? Not the economic value, but the democratic value. The Times’ Aaron Pilhofer writes:

The metrics newsrooms have traditionally used tended to be fairly imprecise: Did a law change? Did the bad guy go to jail? Were dangers revealed? Were lives saved? Or least significant of all, did it win an award?

But the math changes in the digital environment. We are awash in metrics, and we have the ability to engage with readers at scale in ways that would have been impossible (or impossibly expensive) in an analog world.

The problem now is figuring out which data to pay attention to and which to ignore.

Evaluating the impact of journalism is a maddeningly difficult task. To begin with, there’s no single definition of what journalism is. It’s also very hard to track what happens to a story once it is released into the wild, and even harder to know for sure if any particular change was really caused by that story. It may not even be possible to find a quantifiable something to count, because each story might be its own special case. But it’s almost certainly possible to do better than nothing.

The idea of tracking the effects of journalism is old, beginning in discussions of the newly professionalized press in the early 20th century and flowering in the “agenda-setting” research of the 1970s. What is new is the possibility of cheap, widespread, data-driven analysis down to the level of the individual user and story, and the idea of using this data for managing a newsroom. The challenge, as Pilhofer put it so well, is figuring out which data, and how a newsroom could use that data in a meaningful way.

What are we trying to measure and why?

Metrics are powerful tools for insight and decision-making. But they are not ends in themselves because they will never exactly represent what is important. That’s why the first step in choosing metrics is to articulate what you want to measure, regardless of whether or not there’s an easy way to measure it. Choosing metrics poorly, or misunderstanding their limitations, can make things worse. Metrics are just proxies for our real goals — sometimes quite poor proxies.

An analytics product such as Chartbeat produces reams of data: pageviews, unique users, and more. News organizations reliant on advertising or user subscriptions must pay attention to these numbers because they’re tied to revenue — but it’s less clear how they might be relevant editorially.

Consider pageviews. That single number is a combination of many causes and effects: promotional success, headline clickability, viral spread, audience demand for the information, and finally, the number of people who might be slightly better informed after viewing a story. Each of these components might be used to make better editorial choices — such as increasing promotion of an important story, choosing what to report on next, or evaluating whether a story really changed anything. But it can be hard to disentangle the factors. The number of times a story is viewed is a complex, mixed signal.
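To make this concrete, here is a minimal sketch in Python of one way to start untangling the signal: split a story’s views by referrer category, so that promotion, search demand, and social spread show up as separate numbers. The visit log, the category mapping, and the output format are all invented for illustration, not the export of any particular analytics product.

    # A sketch of untangling the "mixed signal" of pageviews by splitting
    # them by referrer category. All data here is hypothetical.
    from collections import Counter

    # Hypothetical per-visit referrer log for a single story.
    visits = [
        "twitter.com", "facebook.com", "google.com", "direct",
        "homepage", "twitter.com", "google.com", "direct",
    ]

    # Assumed mapping from referrer to the causal component it hints at.
    CATEGORIES = {
        "twitter.com": "social",    # viral spread
        "facebook.com": "social",
        "google.com": "search",     # audience demand for the information
        "homepage": "promotion",    # the newsroom's own promotional choices
        "direct": "direct",
    }

    breakdown = Counter(CATEGORIES.get(ref, "other") for ref in visits)
    total = sum(breakdown.values())
    for source, count in breakdown.most_common():
        print(f"{source}: {count} views ({count / total:.0%})")

Even this crude split separates what the newsroom did (promotion) from what the audience did (search, sharing), which is the first step toward using the number editorially.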

It’s also possible to try to get at impact through “engagement” metrics, perhaps derived from social media data such as the number of times a story is shared. Josh Stearns has a good summary of recent reports on measuring engagement. But though it’s certainly related, engagement isn’t the same as impact. Again, the question comes down to: Why would we want to see this number increase? What would it say about the ultimate effects of your journalism on the world?

As a profession, journalism rarely considers its impact directly. There’s a good recent exception: a series of public media “impact summits” held in 2010, which identified five key needs for journalistic impact measurement. The last of these needs nails the problem with almost all existing analytics tools:

While many Summit attendees are using commercial tools and services to track reach, engagement and relevance, the usefulness of these tools in this arena is limited by their focus on delivering audiences to advertisers. Public interest media makers want to know how users are applying news and information in their personal and civic lives, not just whether they’re purchasing something as a result of exposure to a product.

Or as Ethan Zuckerman puts it in his own smart post on metrics and civic impact, “measuring how many people read a story is something any web administrator should be able to do. Audience doesn’t necessarily equal impact.” Not only that, but it might not always be the case that a larger audience is better. For some stories, getting them in front of particular people at particular times might be more important.

Measuring audience knowledge

Pre-Internet, there was usually no way to know what happened to a story after it was published, and the question seems to have been mostly ignored for a very long time. Asking about impact gets us to the idea that the journalistic task might not be complete until a story changes something in the thoughts or actions of the user.

If journalism is supposed to inform, then one simple impact metric would ask: Does the audience know the things that are in this story? This is an answerable question. A survey during the 2010 U.S. mid-term elections showed that a large fraction of voters were misinformed about basic issues, such as expert consensus on climate change or the predicted costs of the recently passed healthcare bill. Though coverage of the study focused on the fact that Fox News viewers scored worse than others, that missed the point: No news source came out particularly well.

In one of the most limited, narrow senses of what journalism is supposed to do — inform voters about key election issues — American journalism failed in 2010. Or perhaps it actually did better than in 2008 — without comparable metrics, we’ll never know.
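As a thought experiment, here is a minimal sketch of what a comparable informedness metric could look like: score survey responses against reference answers and report the share of respondents who were informed on each question. The question names, answer codes, and responses below are invented for illustration; they merely echo the kinds of items in the 2010 survey.

    # A sketch of an "informedness" metric: the share of respondents whose
    # answer matches a reference answer for each question. Invented data.

    reference = {
        "expert_consensus_on_climate": "yes",
        "healthcare_bill_effect_on_deficit": "reduce",
    }

    responses = [
        {"expert_consensus_on_climate": "yes",
         "healthcare_bill_effect_on_deficit": "increase"},
        {"expert_consensus_on_climate": "no",
         "healthcare_bill_effect_on_deficit": "reduce"},
        {"expert_consensus_on_climate": "yes",
         "healthcare_bill_effect_on_deficit": "reduce"},
    ]

    for question, correct in reference.items():
        informed = sum(1 for r in responses if r.get(question) == correct)
        print(f"{question}: {informed}/{len(responses)} informed "
              f"({informed / len(responses):.0%})")

Run the same questions before and after a coverage push, or in consecutive election years, and you have exactly the comparable metric the paragraph above asks for.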

While newsrooms typically see themselves in the business of story creation, an organization committed to informing, not just publishing, would have to operate somewhat differently. Having an audience means having the ability to direct attention, and an editor might choose to continue to direct attention to something important even if it’s “old news”; if someone doesn’t know it, it’s still new news to them. Journalists will also have to understand how and when people change their beliefs, because information doesn’t necessarily change minds.

I’m not arguing that every news organization should get into the business of monitoring the state of public knowledge. This is only one of many possible ways to define impact; it might only make sense for certain stories, and to do it routinely we’d need good and cheap substitutes for large public surveys. But I find it instructive to work through what would be required. The point is to define journalistic success based on what the user does, not the publisher.

Other fields have impact metrics too

Measuring impact is hard. The ultimate effects on belief and action will mostly be invisible to the newsroom, and so tangled in the web of society that it will be impossible to say for sure that it was journalism that caused any particular effect. But neither is the situation hopeless, because we really can learn things from the numbers we can get. Several other fields have been grappling with the tricky problems of diverse, indirect, not-necessarily-quantifiable impact for quite some time.

Academics wish to know the effect of their publications, just as journalists do, and the academic publishing field has long had metrics such as citation count and journal impact factor. But the Internet has upset the traditional scheme of things, leading to attempts to formulate wider-ranging, web-inclusive measures of impact such as Altmetrics or the article-level metrics of the Public Library of Science. Both combine a variety of data, including social media.
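For a flavor of how these combined measures work, here is a minimal sketch of a composite article-level score: several signals, each with an explicit weight, summed into one number. The signals and weights are illustrative assumptions, not the actual Altmetrics or PLOS formulas.

    # A sketch of an altmetrics-style composite: a weighted sum of several
    # visibility signals. Signals and weights are invented assumptions.

    signals = {"citations": 12, "tweets": 340, "bookmarks": 25, "blog_mentions": 4}
    weights = {"citations": 5.0, "tweets": 0.1, "bookmarks": 1.0, "blog_mentions": 3.0}

    score = sum(weights[name] * value for name, value in signals.items())
    print(f"composite impact score: {score:.1f}")

The interesting questions live in the weights: deciding that one citation is worth fifty tweets is exactly the kind of value judgment that choosing a metric forces into the open.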

Social science researchers are interested not only in the academic influence of their work, but also in its effects on policy and practice. They face many of the same difficulties as journalists do in evaluating their work: unobservable effects, long timelines, complicated causality. Helpfully, lots of smart people have been working on the problem of understanding when social research changes social reality. Recent work includes the payback framework, which looks at benefits from every stage in the lifecycle of research, from intangibles such as increasing the human store of knowledge, to concrete changes in what users do after they’ve been informed.

NGOs and philanthropic organizations of all types also use effectiveness metrics, from soup kitchens to international aid. A research project at Stanford University is looking at the use and diversity of metrics in this sector. We are also seeing new types of ventures designed to produce both social change and financial return, such as social impact bonds. The payout on a social impact bond is contractually tied to an impact metric, sometimes measured as a “social return on investment.”

Data beyond numbers

Counting the countable because the countable can be easily counted renders impact illegitimate.

- John Brewer, “The impact of impact”

Numbers are helpful because they allow standard comparisons and comparative experiments. (Did writing that explainer increase the demand for the spot stories? Did investigating how the zoning issue is tied to developer profits spark a social media conversation?) Numbers can also be compared at different times, which gives us a way to tell if we’re doing better or worse than before, and by how much. Dividing impact by cost gives measures of efficiency, which can lead to better use of journalistic resources.
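Here is a minimal sketch of that impact-per-cost arithmetic, with invented numbers; in practice the impact score would be whatever proxy the newsroom has chosen to count.

    # A sketch of "impact divided by cost" as an efficiency measure.
    # Both the impact scores and the costs below are invented.

    stories = [
        {"slug": "zoning-explainer", "impact_score": 420.0, "cost_usd": 3000},
        {"slug": "spot-news-fire",   "impact_score": 90.0,  "cost_usd": 400},
    ]

    for story in stories:
        efficiency = story["impact_score"] / story["cost_usd"]
        print(f"{story['slug']}: {efficiency:.3f} impact points per dollar")

Note the reversal this toy example produces: the cheap spot story comes out more “efficient” than the expensive explainer, a reminder that efficiency measures only make sense alongside judgments about which impacts actually matter.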

But not everything can be counted. Some events are just too rare to provide reliable comparisons — how many times last month did your newsroom get a corrupt official fired? Some effects are maddeningly hard to pin down, such as “increased awareness” or “political pressure.” And very often, attributing cause is hopeless. Did a company change its tune because of an informed and vocal public, or did an internal report influence key decision makers?

Fortunately, not all data is numbers. Do you think that story contributed to better legislation? Write a note explaining why! Did you get a flood of positive comments on a particular article? Save them! Not every effect needs to be expressed in numbers, and a variety of fields are coming to the conclusion that narrative descriptions are equally valuable. This is still data, but it’s qualitative (stories) instead of quantitative (numbers). It includes comments, reactions, repercussions, later developments on the story, unique events, related interviews, and many other things that are potentially significant but not easily categorizable. The important thing is to collect this information reliably and systematically, or you won’t be able to make comparisons in the future. (My fellow geeks may here be interested in the various flavors of qualitative data analysis.)
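One way to make that systematic collection concrete is a structured record per observed effect, as in this sketch. The field names are assumptions; the point is that even narrative notes can carry enough structure (a story identifier, a date, a few tags) to be searched and compared later.

    # A sketch of systematically collected qualitative impact data: one
    # structured record per observed effect. Field names are assumptions.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ImpactNote:
        story_slug: str      # which story this effect relates to
        observed_on: date    # when the effect was noticed
        kind: str            # e.g. "official response", "later development"
        description: str     # free-text narrative of what happened
        tags: list = field(default_factory=list)  # for later pattern-finding

    notes = [
        ImpactNote("zoning-explainer", date(2012, 8, 1), "official response",
                   "A council member cited the story at a public hearing.",
                   tags=["government", "citation"]),
    ]

    # Filtering by tag is how patterns start to emerge from narrative data.
    government_related = [n for n in notes if "government" in n.tags]
    print(len(government_related), "note(s) tagged 'government'")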

Qualitative data is particularly important when you’re not quite sure what you should be looking for. With the right kind, you can start to look for the patterns that might tell you what you should be counting.

Metrics for better journalism

Can the use of metrics make journalism better? If we can find metrics that show us when “better” happens, then yes, almost by definition. But in truth we know almost nothing about how to do this.

The first challenge may be a shift in thinking, as measuring the effect of journalism is a radical idea. The dominant professional ethos has often been uncomfortable with the idea of having any effect at all, fearing “advocacy” or “activism.” While it’s sometimes relevant to ask about the political choices in an act of journalism, the idea of complete neutrality is a blatant contradiction if journalism is important to democracy. Then there is the assumption, long invisible, that news organizations have done their job when a story is published. That stops far short of the user, and confuses output with effect.

The practical challenges are equally daunting. Some data, like web analytics, is easy to collect but doesn’t necessarily coincide with what a news organization ultimately values. And some things can’t really be counted. But they can still be considered. Ideally, a newsroom would have an integrated database connecting each story to both quantitative and qualitative indicators of impact: notes on what happened after the story was published, plus automatically collected analytics, comments, inbound links, social media discussion, and other reactions. With that sort of extensive data set, we stand a chance of figuring out not only what the journalism did, but how best to evaluate it in the future. But nothing so elaborate is necessary to get started. Every newsroom has some sort of content analytics, and qualitative effects can be tracked with nothing more than notes in a spreadsheet.
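As a sketch of what the simplest version of that integrated database might look like, here is a toy schema in SQLite: stories in one table, automatically collected metrics in a second, free-text impact notes in a third. The table and column names are assumptions for illustration, not a recommendation of any particular tool.

    # A sketch of an integrated story-impact database. Schema is invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE stories (
        id INTEGER PRIMARY KEY,
        slug TEXT UNIQUE,
        published DATE
    );
    CREATE TABLE metrics (        -- quantitative: analytics, links, shares
        story_id INTEGER REFERENCES stories(id),
        name TEXT,                -- e.g. 'pageviews', 'inbound_links'
        value REAL,
        recorded DATE
    );
    CREATE TABLE impact_notes (   -- qualitative: what happened afterwards
        story_id INTEGER REFERENCES stories(id),
        noted DATE,
        description TEXT
    );
    """)
    conn.execute("INSERT INTO stories (slug, published) VALUES (?, ?)",
                 ("zoning-explainer", "2012-08-01"))
    conn.commit()

Nothing here is beyond a spreadsheet’s power either; the schema just makes explicit that every quantitative and qualitative observation hangs off a specific story.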

Most importantly, we need to keep asking: Why are we doing this? Sometimes, as I pass someone on the street, I ask myself if the work I am doing will ever have any effect on their life — and if so, what? It’s impossible to evaluate impact if you don’t know what you want to accomplish.

  • http://twitter.com/gavinsblog Gavin Sheridan

    I think your last question is important: Why are we doing this? Perhaps it’s something many journalists forget. Why did we choose to be journalists? (most likely not for the money!). 

    I often think of journalists as akin to nominated officials. 

    While I as a citizen am busy going about my life, bringing up kids, paying a mortgage and voting, I need to pick someone, along with my fellow citizens, to ‘watch’ for things on my behalf – but not from within the political system. It’s a bit like the watchdog role, but it’s also picking someone to keep me informed (locally about things that affect me directly, and internationally indirectly, or with particular topics I am interested in), because I have to get on with the business of working and living.

    Whatever about digital, print, TV or what not – there always will be a demand for someone I *trust* to act on my behalf to filter, parse and manage information for me, and keep me up to speed and also to keep an eye on the powers that be – it allows the general public to get on with their lives in the knowledge that someone is looking after the interests of the broader community.

    Measuring the impact of that work is important – and there are lots of potential ways that impact could be measured. Direct relationships between journalists and their readers are quite a new concept, and journalists being directly accountable to their readers is too. There is huge scope for the public ‘valuing’ a particular piece of work – and online mechanisms for the first time allow us to make a stab at it. Some models have tried to directly correlate work or value with money/donations, but this might not always be the case. Certainly worth thinking about this more.

  • http://www.facebook.com/people/Carl-Bennett/789376561 Carl Bennett

    “Pre-Internet, there was usually no way to know what happened to a story after it was published.” Really? There were no surveys, no polls, no conversation, no listening to what people were talking about in bars and cafes and on station platforms? This isn’t even journalism, it’s self-obsession. The question of which metrics to use depends entirely on what you want to do next. If what you want to do is write stories the market will like, then the 2010 survey data shows that Fox and the Murdoch press in general don’t “misinform”, they actively manage data to manage a newsroom. Fox is already “using that data in a meaningful way.” Fox has an agenda and writes stories to suit that agenda. If people end up misinformed, Fox’s view is ‘so what?’
    Across all research practices (I’m not going to call them disciplines any more because most practitioners are simply not trained, not disciplined and are driven purely by cost and expediency) there is a constant obsession with “cheap, widespread data-driven analysis” regardless of the use it’s going to be put to. If it’s cheap it’s automatically good. If it’s easy to use it’s infinitely better than anything that gives usable, scalable results. Ask Survey Monkey.

    The math did not change before or after Tim Berners-Lee. This is lazy, disconnected undergraduate thinking at its most self-obsessed. Data is being used by people who do not know what it is for to justify their own preconceptions and any concept of “facts” goes out of the window. If that’s the kind of journalism you want to be part of, you’re welcome to it.

  • http://twitter.com/jonathanstray jonathanstray

    I believe you’re suggesting that Fox has used available data towards market ends. That’s not what I’m talking about in this post. In fact I’m specifically not talking about that, but about seeing if we can get a handle on the civic, social, or democratic value of journalism.

    I do think the web changes things in terms of data. You couldn’t even tell how many people read a story when the story was on paper. Now the number of times a story is read is a trivial metric — and, as I described, not the right metric for editorial purposes.

    But I will take your point that there were ways to get some of these effects before the internet. There are just many more ways now, and it’s often much cheaper to get equivalent data.

    And as you point out, data is useless if you just want it to justify what you already believe. That is not what I’m suggesting, which is why I warn several times against misinterpretation of metrics, or becoming too enamored with numbers at the expense of other types of information.

  • Bjw_68

    The Associated Press undertook a qualitative study in 2008 called “A New Model for News.” It was an ethnographic study of young news consumers and their habits for accessing news information in the digital realm. What it found was that the study’s participants, drawn from a number of global locations, found the news to have too little context and depth. The study was meant to inform the reworking of AP’s digital presence on the web. While the focus of the subject was different from what your post suggests needs to be examined, the success of the research supports the idea of qualitative methods bringing a new understanding of the impact of news stories.

  • Alexander Mikhailian

    I, for one, 

  • Cytosavant

    Replace the word metrics with the word manipulation and you see what “they” want.
    Sorry Gavin, “I’m too busy” is not a good excuse. There is no substitute for the hard work of reading around a story to get at the core issues, followed by thoughtful debate. Life is not left and right.
    Call the “journalists” the new priests and they will rape, rob, and sell you out, just like the old priests. “Journalists” are not elected, but they are full of hot air, and like you they are human, easy to con, manipulate, terrorize, murder, and purchase. This article is telling us loud and clear that selling out readers is the purpose of metrics. The consolidated American press promotes the interests of those in power. They are the predators, readers are prey. Better metrics = better domestication. Readers aren’t paying for your writing because they don’t believe you “journalists.” That’s your problem, you’ve lost the thread, the public’s trust. Journalists? Metrics? What’s wrong with this picture?

  • http://twitter.com/jheawood Jonathan Heawood

    Jonathan, this is a great piece. There is a wealth of literature on methodologies for evaluating impact in the charity and NGO sector. Whilst it’s relatively easy to evaluate the impact of a service delivery project, where you have face time with beneficiaries, it’s harder to evaluate the impact of advocacy projects, where the benefits are by their nature more diffuse. For this reason, some of this research is potentially relevant to your thinking about the impact of journalism. See here for a good overview: http://www.evaluationinnovation.org/sites/default/files/Beer_Emerging%20Trends_FINAL.pdf

  • Gary Ragusa

    I agree that “Ideally, a newsroom would have an integrated database connecting each story to both quantitative and qualitative indicators of impact: notes on what happened after the story was published, plus automatically collected analytics, comments, inbound links, social media discussion, and other reactions.”

    We work with digital editors at many small, medium and large news publishers including NY Daily News, Le Monde, News International, Forbes, NewsOK, CNET and many more. In all cases, editors understand that data should empower an editor to better connect with his or her audience. But data by itself is useless if not put to proper use, which is why most traditional web analytics tools miss the boat. They’re meant for analysts, and NOT editors.

    Our company’s Editorial Decision Support Platform helps editors identify opportunities to increase user engagement and “value” (however defined) by looking at audience data and coupling it with editorial guidelines, methodology, and value objectives.  

    The result is a platform designed from the ground up to HELP editors leverage data to do what they already do.  If an editor wrote a story and is promoting it on a home page or social network, why wouldn’t they want it to be digested by the largest audience possible?  

  • Stanley Krauter

    A teacher would be fired if her lectures were as disorganized as the events the news media must investigate. And her license would be taken away if her lectures were repeatedly interrupted by advertisements featuring sexually attractive actresses. But the news media’s most important metric is how many readers or listeners they can attract for their advertising clients. So doing a massive investigation of a photogenic child that was killed by her foster care parents is more important than preventing the death by communicating like a teacher. If the news media wanted to improve their “metrics,” they would publish an annual remedial education course for voters. These reports could include (1) metrics on a state’s foster care program (when a photogenic child is killed, the news media usually discovers two metrics: too many children per social worker and too high a turnover rate for social workers), (2) the number of years our country is behind in infrastructure maintenance and repair, (3) prison overcrowding, (4) workers killed in industrial accidents due to insufficient inspections by government regulators, (5) deficits for state and local government pension funds, (6) the disparity in punishment for blue collar criminals and white collar criminals…
    ——- 
    ——-
    But the news media will never publish an annual remedial education course for their customers. Even though it would be very profitable to do. And the reports would be used by voters like the report cards that teachers use for rewarding and punishing underperforming students. But reporters are more interested in investigating the death of a photogenic child than preventing it. Reporters are like cops who don’t believe in attacking the root causes of crime because that is a job for social workers. So reporters won’t publish an annual remedial education series because that is a job for teachers.
    ——
    ——-
    Stanley Krauter
    Lincoln, Nebraska
    ——– 
    ——– 
    P.S.  No one in the news media will ever respond to my criticism because reporters are arrogant elitists. 


  • Tom Kadala

    I have been writing now for over a decade.  I don’t charge for my work because I don’t want to be held to anyone’s specific agenda.  The outcome has been mixed, if seen from a traditional metrics approach.  However, if I told you that I write to assess the world around me rather than change it, you might regard me as self-centered and all of that which comes with this type of character description.

    The irony I have seen over and over again is that the more I focus on my personal understanding of the issues, the more people want to follow me and challenge my ideas.  So for all of the advice in this article about metrics and affecting behavior, perhaps an alternative approach is to do the opposite. After all, a good discussion is no different than a gourmet meal, one which takes a long time to prepare in the kitchen and when ready can serve up a splendid exchange of ideas at any table.

    Again, you can be the judge… – http://www.tomkadala.com. 

  • Nom de plume du jour

    To measure and compare “impact” as informedness (rather than as the ‘consumer’ taking an action), why not assemble a readers’ circle for each consenting news organization, and poll them periodically, on topics that will be vs. have been covered? (On the basics, not the specifics; otherwise you just get cowed readers.)