Nieman Foundation at Harvard
Dec. 2, 2019, 1:17 p.m.
Audience & Social
LINK: www.cjr.org   |   Posted by: Christine Schmidt

Remember when news cycles didn’t refresh every hour? Or when digital ads weren’t tracking your every move online? Or when nation-states weren’t weaponizing the unregulated flow of information? Ethan Zuckerman of the MIT Center for Civic Media retraces those days, when the radio emerged as a publicly funded public service, and reimagines how the internet could be reshaped to “act in the service of humanity, not as an existential threat to it,” in a new report for Columbia Journalism Review.

The way we consume information today has become warped by governmental pressures and company priorities — not all that different from the days of political newspaper barons but with a much wider scope of impact. “Our national discussions about whether YouTube is radicalizing viewers, whether Facebook is spreading disinformation, and whether Twitter is trivializing political dialogue need to also consider whether we’re using the right business model to build the contemporary internet,” Zuckerman writes.

So what can be done about that? Some ideas he proposes:

A public service Web invites us to imagine services that don’t exist now, because they are not commercially viable, but perhaps should exist for our benefit, for the benefit of citizens in a democracy. We’ve seen a wave of innovation around tools that entertain us and capture our attention for resale to advertisers, but much less innovation around tools that educate us and challenge us to broaden our sphere of exposure, or that amplify marginalized voices. Digital public service media would fill a black hole of misinformation with educational material and legitimate news.

One way to avoid a world in which Google throws our presidential election would be to allow academics or government bureaucrats to regularly audit the search engine.

Can we imagine a social network designed in a different way: to encourage the sharing of mutual understanding rather than misinformation? A social network that encourages you to interact with people with whom you might have a productive disagreement, or with people in your community whose lived experience is sharply different from your own? … These networks would likely be more resilient in the face of disinformation, because the behaviors necessary for disinformation to spread—the uncritical sharing of low-quality information—aren’t rewarded on these networks the way they are on existing platforms.

What’s preventing us from building such networks? The obvious criticisms are, one, that these networks wouldn’t be commercially viable, and, two, that they won’t be widely used. The first is almost certainly true, but this is precisely why public service models exist: to counter market failures.

The two biggest obstacles to launching new social networks in 2019 are Facebook and… Facebook. It’s hard to tear users away from a platform they are already accustomed to; then, if you do gain momentum with a new social network, Facebook will likely purchase it.

We’ve grown so used to the idea that social media is damaging our democracies that we’ve thought very little about how we might build new networks to strengthen societies. We need a wave of innovation around imagining and building tools whose goal is not to capture our attention as consumers, but to connect and inform us as citizens.

The full writeup is available at CJR.
