May 20, 2013, noon

What kind of response do we want readers to have? When you build an informative and elegant visualization, how are you hoping they’ll react?

These are questions that Amanda Cox of The New York Times’ graphics desk asks herself on a regular basis. In a recent analysis of how the desk’s graphics performed on social media, Cox tried to pin down what makes a graphic popular, sorting the work into six rough categories:

1. “development.really.hard”
2. “big.breaking.news.big.breaking.news.adjacent”
3. “useful”
4. “explicitly.emotional…atmospheric”
5. “surprise.reveal”
6. “comprehensive”

Unsurprisingly, “difficult” topics (mostly related to war, violence, climate change, and other highly complex issues) performed least well, but “takeaway” pieces with an obvious message also performed poorly as a class. In contrast, visualizations that required extensive technical resources tended to perform particularly well, as did features Cox classed as emotional and useful, and, of course, those closely tied to breaking news.
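
Cox’s analysis wasn’t published as code, but a minimal sketch of the kind of tally it implies, grouping per-graphic share counts by those category labels, might look like the following. The column names and numbers here are entirely hypothetical and only illustrate the shape of the exercise:

    # Hypothetical sketch, not Cox's actual analysis or data.
    # One row per published graphic: its category label and a share count.
    import pandas as pd

    graphics = pd.DataFrame({
        "category": ["useful", "surprise.reveal", "development.really.hard",
                     "useful", "explicitly.emotional"],
        "shares": [1200, 450, 3100, 800, 2700],
    })

    # Median shares per category, sorted from most to least shared,
    # to compare how each class of graphic tends to perform.
    by_category = (graphics.groupby("category")["shares"]
                           .median()
                           .sort_values(ascending=False))
    print(by_category)

A median (rather than a mean) keeps a single viral outlier from dominating a category, which matters when share counts are as skewed as they typically are on social media.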

In the wrap-up of her analysis, Cox considered the problem of signaling importance to the paper’s readership across platforms: “How do you signal that something is important? You do that by using the resource that is scarce.” In print, the Times does this by giving a graphic a desirable spot on a “good page.” On the web, the equivalent scarce resource isn’t placement but the allocation of valuable internal tech and development hours.
