LINK: docs.google.com   |   Posted by: Shan Wang   |   February 3, 2016

If you work with podcasts, how many times have you heard complaints about the difficulty of getting accurate data on audiences and their listening habits, and the lack of an industry standard? Probably too many times to count. Is a download a listen? Were listens on a web player figured into a podcast’s total audience? And so on. (Though podcast metrics are not, as some have pointed out, worse than, say, broadcast radio measurements.)

A group of public radio staffers from stations and networks across the U.S. has been working since last spring on comprehensive guidelines to help improve the accuracy and reliability of podcast audience measurement across the industry, and to help generate more consistent data for potential sponsors. The fruits of their discussions were published in this document, made available Tuesday. The recommendations, the report cautions, “are not intended to operate as a full technical standard per se, but rather overall principles and public radio’s technical guidelines for measuring podcast usage.”

The document first clearly defines the “slippery label” that is podcasting, distinguishing it as a subset of the broad category of on-demand audio:

[Podcasts] consist of recurring shows or audio content collections. Measurement of downloads should include any form of on-demand, digital listening to that podcast, regardless of platform and inclusive of full episode downloads and downloads of segments of an episode. Often this is limited to audio files downloaded because they were enclosures in an RSS feed but may also include things like download links on a Web page or plays of an episode via a Web-based player.
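(For readers unfamiliar with the plumbing: an RSS “enclosure” is simply a tag in the feed that points at the episode’s audio file, and those URLs are what podcast apps actually download. Here’s a rough Python sketch of pulling enclosure URLs out of a feed so that requests for those files can be matched against server logs. The sample feed is invented for illustration; this is not code from the guidelines themselves.)

```python
# Rough sketch: extract enclosure URLs from a podcast RSS feed so requests
# for those files can later be matched against server logs.
# The sample feed below is invented for illustration.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """\
<rss version="2.0">
  <channel>
    <title>Example Show</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.org/audio/ep1.mp3"
                 type="audio/mpeg" length="12345678"/>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(SAMPLE_FEED)

# In RSS 2.0, each episode is an <item>, and the audio file it distributes
# is listed in an <enclosure> element's url attribute.
enclosure_urls = [enc.get("url") for enc in root.iter("enclosure") if enc.get("url")]
print(enclosure_urls)  # -> ['https://example.org/audio/ep1.mp3']
```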

It also encourages organizations that rely on both internal and third-party metrics to choose as the “primary source” the metrics that “adhere closest to the guidelines outlined in this document,” noting that “the guidelines presented in this document have the greatest impact when adopted by the greatest number of organizations.”

The document also gets into the nitty-gritty of measurement standards, such as how best to count unique downloads:

It’s difficult to count accurately the number of downloaders: no unique ID is transmitted when requesting a podcast file; multiple downloaders can use a single IP address (such as when they are on a shared private network); one downloader can have multiple IP addresses (such as when changing cellular towers). Each downloader does transmit a user agent description which varies by software and sometimes by hardware used. The combination of IP address and user agent provide something closer to a unique identifier for a device, which is itself an approximation of a unique identifier for a downloader. Where the user agent of the requesting client is available, this will be a count of the unique combinations of IP address and user agent for the period reported. Otherwise, this will be a count of unique IP addresses for the period reported.
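To make that rule concrete, here’s a small Python sketch of the counting logic the guidelines describe: count unique (IP address, user agent) combinations where a user agent was transmitted, and fall back to unique IP addresses where it wasn’t. The log format and names here are our own illustration, not part of the document.

```python
# Illustrative sketch of the counting rule quoted above: treat the
# (IP address, user agent) pair as an approximate device identifier,
# and fall back to IP alone when no user agent was recorded.
# The log format and names are assumptions, not from the guidelines.
from typing import Iterable, Optional, Tuple


def count_unique_downloads(requests: Iterable[Tuple[str, Optional[str]]]) -> int:
    """requests yields (ip_address, user_agent_or_None) for one reporting period."""
    with_ua = set()   # unique (IP, user agent) combinations
    ip_only = set()   # unique IPs for requests that carried no user agent
    for ip, user_agent in requests:
        if user_agent:
            with_ua.add((ip, user_agent))
        else:
            ip_only.add(ip)
    return len(with_ua) + len(ip_only)


# Example: two apps behind one shared IP count as two downloaders;
# repeat requests from the same app on the same IP do not.
log = [
    ("10.0.0.1", "AppleCoreMedia/1.0"),
    ("10.0.0.1", "AppleCoreMedia/1.0"),  # repeat request, same device
    ("10.0.0.1", "Overcast/2016"),       # different app, same shared network
    ("192.0.2.7", None),                 # no user agent: count the IP once
]
print(count_unique_downloads(log))  # -> 3
```

Note how the example reflects the caveats in the quote: the shared IP doesn’t collapse two listeners into one, but two devices sending identical user agents from one IP still would, which is why the guidelines call this an approximation rather than a true unique identifier.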

NPR’s Boston-based Digital Services team is now working to incorporate these guidelines into the tracking mechanisms in its Station Analytics Service, a digital metrics dashboard. The changes will be reflected there next month.
