Audience & Social
LINK: docs.google.com   |   Posted by: Shan Wang   |   Feb. 3, 2016, 1:24 p.m.

If you work with podcasts, how many times have you heard complaints about the difficulty of getting accurate data on audiences and their listening habits, and the lack of an industry standard? Probably too many times to count. Is a download a listen? Were listens on a web player figured into a podcast’s total audience? And so on. (Though podcast metrics are not, as some have pointed out, worse than, say, broadcast radio measurements.)

Public radio staffers from stations and networks across the U.S. have been working since spring of last year on comprehensive guidelines to improve the accuracy and reliability of podcast audience measurement across the industry and to generate more consistent data for potential sponsors. The fruits of their discussions were published Tuesday in this document. The recommendations, the report cautions, “are not intended to operate as a full technical standard per se, but rather overall principles and public radio’s technical guidelines for measuring podcast usage.”

The document first clearly defines the “slippery label” that is podcasting, distinguishing it as a subset of the broad category of on-demand audio:

[Podcasts] consist of recurring shows or audio content collections. Measurement of downloads should include any form of on-demand, digital listening to that podcast, regardless of platform and inclusive of full episode downloads and downloads of segments of an episode. Often this is limited to audio files downloaded because they were enclosures in an RSS feed but may also include things like download links on a Web page or plays of an episode via a Web-based player.

It also encourages organizations that rely on both internal and third-party metrics to choose as the “primary source” the metrics that “adhere closest to the guidelines outlined in this document,” noting that “the guidelines presented in this document have the greatest impact when adopted by the greatest number of organizations.”

The document also gets into the nitty-gritty of measurement standards, such as how best to count unique downloads:

It’s difficult to count accurately the number of downloaders: no unique ID is transmitted when requesting a podcast file; multiple downloaders can use a single IP address (such as when they are on a shared private network); one downloader can have multiple IP addresses (such as when changing cellular towers). Each downloader does transmit a user agent description which varies by software and sometimes by hardware used. The combination of IP address and user agent provide something closer to a unique identifier for a device, which is itself an approximation of a unique identifier for a downloader. Where the user agent of the requesting client is available, this will be a count of the unique combinations of IP address and user agent for the period reported. Otherwise, this will be a count of unique IP addresses for the period reported.

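That fallback logic is simple enough to express in a few lines. Here is a minimal sketch in Python, not taken from the report, of counting unique downloads for a reporting period as unique combinations of IP address and user agent, falling back to the IP address alone when no user agent is transmitted; the request structure and field names are assumptions for illustration.

```python
# Sketch of the counting rule described above (illustrative only):
# a download request is identified by its IP address plus user agent;
# requests with no user agent fall back to the IP address alone.

from typing import Iterable, Optional, NamedTuple


class DownloadRequest(NamedTuple):
    ip: str
    user_agent: Optional[str]  # None if the client sent no user agent


def count_unique_downloads(requests: Iterable[DownloadRequest]) -> int:
    """Count approximate unique downloaders for a reporting period."""
    identifiers = set()
    for req in requests:
        if req.user_agent:
            identifiers.add((req.ip, req.user_agent))
        else:
            identifiers.add((req.ip, None))
    return len(identifiers)


# Example: two apps behind one shared IP count separately because their
# user agents differ; the repeat download and the bare-IP request each
# collapse into a single identifier.
period = [
    DownloadRequest("203.0.113.5", "AppleCoreMedia/1.0"),
    DownloadRequest("203.0.113.5", "Overcast/2.0"),
    DownloadRequest("203.0.113.5", "Overcast/2.0"),  # repeat, same device
    DownloadRequest("198.51.100.7", None),           # no user agent sent
]
print(count_unique_downloads(period))  # -> 3
```

As the guidelines themselves note, this is an approximation of a unique device, which is in turn an approximation of a unique downloader.
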
NPR’s Boston-based Digital Services team is now working to incorporate these guidelines into the tracking mechanisms of its Station Analytics Service, a digital metrics dashboard. The changes will be reflected there next month.
