Nov. 1, 2017, 9:38 a.m.
Audience & Social
LINK: twitter.com   |   Posted by: Laura Hazard Owen   |   November 1, 2017

During Tuesday’s terror attack in Lower Manhattan, in which a driver in a pickup truck killed eight people and injured 11, Snap Maps, the location-sharing feature that Snapchat introduced this summer, proved to be an effective way to get real-time information on what was happening.

Tuesday wasn’t the first tragedy in which Snap Maps proved a reliable source of information. Quartz’s Mike Murphy wrote last month about the tool’s role in covering the Las Vegas shootings, hurricanes in the U.S. and Mexico, and the Mexico City earthquake — and why it can be more useful and more intimate than coverage of breaking events on Facebook and Twitter.

People opening Periscope or Twitter are expecting to broadcast their stories, whereas on Snapchat, you’re assuming only a few people might ever see whatever you post, unless something profound happens. And popular platforms like Twitter and Facebook are great for firing off quick messages or images, but there’s no easy way for a user to check everything that’s happening in an area, unless they follow specific hashtags, or know how to perform advanced searches. On Snapchat, you open the app, pinch in to see the map, and point to the part of the world you want to see.

As a result, for those who use it, Snap Maps has become a deeply intimate way to view major news events in real time.

Snapchat’s algorithm decides what makes it onto the public map: “We have automated systems that decide what makes it onto the Map, based on a bunch of factors — like when and where a Snap was taken, if an event seems to be happening nearby, etc.”
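
Snapchat hasn’t published how those automated systems actually work. Purely as an illustration of the kind of factors the company names (when and where a Snap was taken, and whether an event seems to be happening nearby), here is a minimal, hypothetical scoring sketch in Python. Every name, weight, and threshold below is invented for this example and is not Snapchat’s real system.

    # Hypothetical illustration only: a toy heuristic for scoring snaps for a public map.
    # All names, weights, and thresholds are invented; Snapchat's real systems are not public.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Snap:
        taken_at: datetime
        lat: float
        lon: float

    def recency_score(snap: Snap, now: datetime) -> float:
        # "When a Snap was taken": newer snaps score higher, fading to zero after ~6 hours.
        age_fraction = (now - snap.taken_at) / timedelta(hours=6)
        return max(0.0, 1.0 - age_fraction)

    def event_proximity_score(snap: Snap, event_lat: float, event_lon: float) -> float:
        # "If an event seems to be happening nearby": crude distance in degrees to a
        # detected event location (how events are detected is out of scope here).
        distance = abs(snap.lat - event_lat) + abs(snap.lon - event_lon)
        return max(0.0, 1.0 - distance / 0.05)

    def map_score(snap: Snap, now: datetime, event_lat: float, event_lon: float) -> float:
        # Blend the factors; a real system would use many more signals plus moderation.
        return 0.5 * recency_score(snap, now) + 0.5 * event_proximity_score(snap, event_lat, event_lon)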

There are, of course, some annoying and jarring things about following a serious event in this way (the stickers! the emoji! is that the crying laughing emoji?).

But watching event coverage on, say, cable news certainly exposes one to inane commentary as well, and at least this is immediate — and also authentic (at least for now).
