LINK: www.buzzfeed.com   |   Posted by: Shan Wang   |   Oct. 17, 2017, 12:38 p.m.

Facebook groups and campaigns. Twitter ads. Ads on YouTube, on Gmail. Promotions on Pokémon Go?! And now Outbrain, one of the companies responsible for those ubiquitous content recommendations no one ever thought they needed, is investigating whether Russia-linked groups purchased ads on its platform to somehow meddle in the 2016 U.S. elections, according to a BuzzFeed News report on Tuesday.

Outbrain is “currently conducting a thorough investigation specific to election tampering and continue[s] to monitor our index,” the company said in a statement to BuzzFeed News.

“The attempt to spread misinformation that impacts elections is obviously very concerning to us,” the Outbrain statement said. “Outbrain has been proactive in combating fake content in the past.”

“After a thorough investigation, Outbrain has found no evidence of bad actors using our platform to influence elections,” Outbrain CEO Yaron Galai said in a statement forwarded to Nieman Lab on Wednesday. “Given the seriousness of this issue and out of an abundance of caution, we proactively undertook this effort over the course of many weeks to ensure we got it right. Outbrain has always been dedicated to combating fake content, and we remain vigilant in fighting those who attempt to misuse our platform.”

Similar ad networks, including Taboola and RevContent, told BuzzFeed on Tuesday that they've seen no evidence of Russia-linked ad purchases.

These companies aren’t exactly beloved by users, but they drive enough revenue that some news organizations are keeping them around. Outbrain, for instance, claims its content reaches 550 million visitors each month, and its VP of product marketing told The New York Times last year that the widget has become a “No. 1 revenue provider” for “major, major publishers.” A small number of publishers, including Slate and The New Yorker, dumped these networks last year, but major publishers from The Washington Post to ESPN continue to use these recommendation widgets, which display clickbait — and misinformation — on their article pages (see: “Why Doctors Will No Longer Prescribe Blood Pressure Meds,” “Angelina Jolie’s Daughter Used to Be Cute. Now She Looks Insane”).
