LINK: www.algorithmic.news | Posted by: Christine Schmidt | May 23, 2018, 11:26 a.m.

Even if automation is creeping into all corners of our lives, at least we humans can still get together in real life to talk about it.

At the Algorithms, Automation, and News conference in Munich this week, some of journalism’s biggest brainiacs shared their research on everything from bot behavior to showing your work when it’s automated to reporting through the Internet of Things. Many of the academics’ papers will be published in a forthcoming issue of Digital Journalism. (Full list of presenters, panelists, and papers here.)

Algorithmic accountability — reverse-engineering and reporting on the algorithms across our lives, from Facebook to Airbnb to targeted job listings — is a hot topic in journalism, but this conference focused more on the silver linings: how automation and algorithms could bolster newsrooms full of human journalists.

Here are some of the top tweets from the Munich mind-gathering:

The Associated Press’ director of information management Stuart Myles walked attendees through the AP’s process for making automation in news more transparent (hint: it includes automating transparency):
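To give a rough sense of what “automating transparency” could look like in practice, here is a minimal, hypothetical sketch: an automated story pipeline that attaches a machine-readable provenance record and a reader-facing disclosure line to every piece it produces. The names, fields, and data source below are illustrative assumptions, not the AP’s actual system.

```python
# Hypothetical sketch: every story produced by an automated pipeline gets
# provenance metadata plus a human-readable disclosure appended automatically,
# so no editor has to remember to add it. Names and fields are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AutomatedStory:
    headline: str
    body: str
    # Provenance metadata travels with the story so downstream systems
    # (CMS, syndication partners) can surface how it was made.
    provenance: dict = field(default_factory=dict)


def generate_with_disclosure(headline: str, body: str,
                             generator: str, data_source: str) -> AutomatedStory:
    """Attach provenance metadata and a reader-facing disclosure to an automated story."""
    story = AutomatedStory(headline=headline, body=body)
    story.provenance = {
        "generated_by": generator,
        "data_source": data_source,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "human_reviewed": False,
    }
    # The disclosure itself is automated, not left to a human checklist.
    story.body += (
        f"\n\nThis story was generated automatically by {generator} "
        f"using data from {data_source}."
    )
    return story


if __name__ == "__main__":
    story = generate_with_disclosure(
        headline="Acme Corp. posts first-quarter earnings",
        body="Acme Corp. reported quarterly revenue of $1.2 billion...",
        generator="earnings-bot",
        data_source="an example earnings feed",
    )
    print(story.body)
    print(story.provenance)
```

The point of the sketch is the design choice: disclosure is generated in the same step as the story itself, so transparency scales with the automation instead of lagging behind it.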
