Nieman Foundation at Harvard
May 9, 2016, 1:17 p.m.
Audience & Social
LINK: gizmodo.com  ➚   |   Posted by: Ricardo Bilton   |   May 9, 2016

Despite its claims, Facebook has never been a completely neutral platform. Even the supposedly objective News Feed algorithm is a product of the human biases that helped create it.

The same, it seems, goes for Facebook’s trending topics, which are far from the objective reflection of popularity that Facebook claims they are, according to a new report from Gizmodo. Not only did trending topics writers inject non-trending stories into the widget, but they regularly suppressed trending stories from conservative news sites in favor of “more neutral” stories from mainstream sites, unidentified sources told Gizmodo, which wrote about Facebook’s relationship with its news curators last week.

“It was absolutely bias. We were doing it subjectively,” a former curator said. (Gizmodo’s Michael Nunez interviewed several former Facebook news curators who worked for the site between 2014 and 2015. All of them chose to remain anonymous.) Still, the Gizmodo article cautions:

Stories covered by conservative outlets (like Breitbart, Washington Examiner, and Newsmax) that were trending enough to be picked up by Facebook’s algorithm were excluded unless mainstream sites like the New York Times, the BBC, and CNN covered the same stories.

Other former curators interviewed by Gizmodo denied consciously suppressing conservative news, and we were unable to determine if left-wing news topics or sources were similarly suppressed. The conservative curator described the omissions as a function of his colleagues’ judgments; there is no evidence that Facebook management mandated or was even aware of any political bias at work.

Facebook curators also said they were discouraged from posting trending stories about Facebook itself, often having to go through several layers of management to get posts approved. Facebook, in that respect, isn’t too different from some major news organizations. Bloomberg News, for example, has a policy against covering parent company Bloomberg LP and its founder.

Reactions to the Gizmodo story have been mixed. While some say it just provides more evidence of the dangers of a single massive platform having outsize control over access to information, others don’t see why it’s shocking that Facebook isn’t entirely neutral.

There’s also the question of how much responsibility Facebook has both to promote stories that it thinks people should read and to suppress stories that spread misinformation, as it’s already done with hoaxes that go viral. In that respect, the biases of human editors might be a feature, not a bug. News organizations regularly make similar editorial judgments. As Facebook becomes an increasingly central news source for its users, there’s a lot more pressure for it to act like a legitimate one.

The problem: while all publishers constantly exercise that kind of editorial judgment, there’s considerably less comfort about Facebook doing the same.
