Nieman Foundation at Harvard
July 25, 2013, 4:57 p.m. | Posted by: Joshua Benton | Link: cyber.law.harvard.edu

Our friends at the Berkman Center — specifically, Yochai Benkler, Hal Roberts, Rob Faris, Alicia Solow-Niederman, and Bruce Etling — are out with a new report that tries to map the spread of conversation around SOPA/PIPA last year:

In this paper, we use a new set of online research tools to develop a detailed study of the public debate over proposed legislation in the United States that was designed to give prosecutors and copyright holders new tools to pursue suspected online copyright violations. Our study applies a mixed-methods approach by combining text and link analysis with human coding and informal interviews to map the evolution of the controversy over time and to analyze the mobilization, roles, and interactions of various actors.

This novel, data-driven perspective on the dynamics of the networked public sphere supports an optimistic view of the potential for networked democratic participation, and offers a view of a vibrant, diverse, and decentralized networked public sphere that exhibited broad participation, leveraged topical expertise, and focused public sentiment to shape national public policy.

Full paper here. Super nifty visualization here.
