Oct. 5, 2017, 12:31 p.m.
Audience & Social
LINK: newsroom.fb.com | Posted by: Ricardo Bilton | October 5, 2017

Facebook’s outsized role in spreading fake news during last year’s presidential election is, at this point, undeniable. Facebook has taken some responsibility, and on Thursday it introduced a new feature meant to give users more context about articles as they read them and before they share them with others.

The new feature is small but, in Facebook’s view, significant: Facebook users will soon start seeing a small information button on news articles that appear in the News Feed. When users click the button, they’ll see a panel with information from the source site’s Wikipedia page, content related to the article in question, and details about where and how the article is being shared.

Facebook says the goal is to give people tools to make more informed decisions about which stories to read, share, and trust. A news source without a Wikipedia page could signal that it shouldn’t be trusted as much as a site that has one (though because anyone can create and edit Wikipedia entries, it’s a somewhat dubious source of authority, and some legitimate small news sites may not have Wikipedia pages at all). The related-content feature will give users other takes on, or more context about, an article they’re reading.

While it’s not clear how popular the new context feature will be with users, it will probably have some kind of impact, if only because it puts all of these tools a click away. The project grew out of feedback from users and from the organizations involved in the Facebook Journalism Project, whose work we’ve covered previously.

Mark Zuckerberg initially denied claims that Facebook had a significant role in spreading fake news during the election, but he’s reversed his stance of late (sort of). At the same time as it introduces new tools meant to curtail the spread of fake news in the News Feed, Facebook is also cooperating with a Congressional investigation into how Russians may have used the platform’s ad tools to target specific voters, including those in key swing states.

And Facebook is also realizing that it can’t always throw more tech at these problems. Facebook said this week that it plans to add 1,000 employees to the team that reviews the ads purchased on its platform.
