LINK: thescoop.org ➚   |   Posted by: Caroline O'Donovan   |   May 22, 2013, 1:54 p.m.

Derek Willis, an interactive news developer for The New York Times, wrote a blog post about a different way to use analytics. Willis says he’s interested in tracking and mapping who is citing and quoting the work of major news outlets (like the Times itself). He writes:

The idea behind linkypedia is that links on Wikipedia aren’t just references; they help describe how digital collections are used on the Web and encourage the spread of knowledge: “if organizations can see how their web content is being used in Wikipedia, they will be encouraged and emboldened to do more.” When I first saw it, I immediately thought about how New York Times content was being cited on Wikipedia. Because it’s an open source project, I was able to find out, and it turned out (at least back then) that many Civil War-era stories that had been digitized were linked to from the site. I had no idea, and wondered how many of my colleagues knew. Then I wondered what else we didn’t know about how our content is being used outside the friendly confines of nytimes.com.

That’s the thread that leads from Linkypedia to TweetRewrite, my “analytics” hack that takes a nytimes.com URL and feeds tweets that aren’t simply automatic retweets; it tries to filter out posts that contain the exact headline of the story to find what people say about it. It’s a pretty simple Ruby app that uses Sinatra, the Twitter and Bitly gems, and a library I wrote to pull details about a story from the Times Newswire API.
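
The filtering step Willis describes is simple enough to sketch. Below is a minimal Ruby illustration of the idea, not Willis’s actual code: given a story’s headline and a batch of tweet texts that link to it, drop anything that just echoes the headline (automatic shares and retweets) and keep the posts where people actually say something. The sample data is invented; in the real app the headline would come from the Times Newswire API and the tweets from the Twitter gem.

# A minimal sketch of the TweetRewrite filtering idea (not the original code).
# Keep only tweets that say something beyond the story's exact headline.
def commentary_only(headline, tweet_texts)
  echo = headline.downcase.strip
  tweet_texts.reject do |text|
    t = text.downcase
    # Drop plain retweets and posts that just repeat the verbatim headline.
    t.start_with?("rt @") || t.include?(echo)
  end
end

# Invented example data for illustration.
headline = "Civil War Letters Found in Attic"
tweets = [
  "Civil War Letters Found in Attic http://nyti.ms/abc",
  "RT @nytimes: Civil War Letters Found in Attic http://nyti.ms/abc",
  "My great-great-grandfather is quoted in this story http://nyti.ms/abc"
]

puts commentary_only(headline, tweets)
# => My great-great-grandfather is quoted in this story http://nyti.ms/abc

The exclusion rule is the whole trick: anything containing the verbatim headline is treated as an automatic share rather than commentary, which is the behavior Willis describes above.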
