LINK: adage.com ➚   |   Posted by: Joshua Benton   |   Aug. 29, 2013, 2:26 p.m.

Of course, Gawker would probably argue that “comments section” isn’t the right frame for thinking about its all-content-has-status platform Kinja. Alex Kantrowitz in Ad Age:

Sometime next Wednesday, celebrity scientist Bill Nye will take a seat in front of a computer and invite the internet to ask him whatever it wants. But he won’t be taking the questions on Reddit, a medium famous for its “Ask Me Anything” sessions. Rather, Mr. Nye will be operating within the comments section of Gizmodo, a Gawker Media website on a page sponsored by State Farm. The entire interaction, from start to finish, will be an ad.

Mr. Nye’s Q&A is part of a new “native” ad format that Gawker has been trying this year. The company is working with advertisers to host sponsored discussion sessions on its Kinja commenting platform, hoping to turn its community into an engaged audience its advertisers can tap into…

The campaign’s goal, Mr. Del said, is to drive home a message that a State Farm agent is a trusted adviser. And making scientists available to chat with consumers, he said, is a good way to do it. “Where else can they convey the idea that when you rely on State Farm, you’re not just getting a canned response, you’re getting an agent?” said Mr. Del.
