March 29, 2017, 10 a.m.
Audience & Social
LINK: www.pewinternet.org   |   Posted by: Joseph Lichterman   |   March 29, 2017

The quality of public discourse online is not going to get better and may actually get worse over the next decade, according to a survey released Wednesday by the Pew Research Center that invited 8,000 “technology experts, scholars, corporate practitioners and government leaders” to respond.

Forty-two percent of the 1,537 participants said they anticipate “no major change” in the level of online trolling and other harmful behavior online. Another 39 percent said the next decade will be “more shaped” by these types of online behaviors.

“While respondents expressed a range of opinions from deep concern to disappointment to resignation to optimism, most agreed that people — at their best and their worst — are empowered by networked communication technologies,” the study’s authors wrote. “Some said the flame wars and strategic manipulation of the zeitgeist might just be getting started if technological and human solutions are not put in place to bolster diverse civil discourse.”

Pew and Elon University’s Imagining the Internet Center conducted the survey between July 1 and August 12, 2016, before the height of the divisive U.S. election.

The report categorizes responses into four primary themes that outline what the future of online discourse might hold:

“Things will stay bad because to troll is human; anonymity abets anti-social behavior; inequities drive at least some inflammatory dialogue; and the growing scale and complexity of internet discourse makes this difficult to defeat.”

Many respondents think things will just get worse as humans continue to adjust to a relatively new medium.

“I would very much love to believe that discourse will improve over the next decade, but I fear the forces making it worse haven’t played out at all yet,” technology consultant Jerry Michalski said. “After all, it took us almost 70 years to mandate seatbelts. And we’re not uniformly wise about how to conduct dependable online conversations, never mind debates on difficult subjects. In that long arc of history that bends toward justice, particularly given our accelerated times, I do think we figure this out. But not within the decade.”

“We see a dark current of people who equate free speech with the right to say anything, even hate speech, even speech that does not sync with respected research findings,” an anonymous MIT professor said. “They find in unmediated technology a place where their opinions can have a multiplier effect, where they become the elites.”

“Things will stay bad because tangible and intangible economic and political incentives support trolling. Participation = power and profits.”

The social media ecosystem is attention-driven; the platforms themselves make money from advertising and, as a result, want to continue to drive participation. And because the platforms are so crowded, it’s often the loudest voices that get the most attention, which carries over into our larger political debates.

“Distrust and trolling is happening at the highest levels of political debate, and the lowest,” said researcher Kate Crawford. “The Overton Window has been widened considerably by the 2016 U.S. presidential campaign, and not in a good way. We have heard presidential candidates speak of banning Muslims from entering the country, asking foreign powers to hack former White House officials, retweeting neo-Nazis. Trolling is a mainstream form of political discourse.”

And as social media’s influence has grown, traditional media outlets have seen their influence wane. Here’s how Steven Waldman, the founder and CEO of LifePosts, explained it:

“It certainly sounds noble to say the internet has democratized public opinion. But it’s now clear: It has given voice to those who had been voiceless because they were oppressed minorities and to those who were voiceless because they are crackpots. … It may not necessarily be ‘bad actors’ — i.e., racists, misogynists, etc. — who win the day, but I do fear it will be the more strident.”

“Things will get better because technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence.”

Some respondents were more optimistic that online discourse would improve over the next decade. Artificial intelligence and other technological advances will help improve dialogue, some said.

“I expect we will develop more social bots and algorithmic filters that would weed out some of the trolls and hateful speech,” Marina Gorbis, executive director of the Institute for the Future, said. “I expect we will create bots that would promote beneficial connections and potentially insert context-specific data/facts/stories that would benefit more positive discourse. Of course, any filters and algorithms will create issues around what is being filtered out and what values are embedded in algorithms.”

Additionally, as platforms become more influenced by algorithms, respondents expect to see continued fragmentation of the online ecosystem.

“There will still be some places where you can find those with whom to argue, but they will be more concentrated into only a few locations than they are now,” senior design researcher Lindsay Kenzig said.

“Oversight and community moderation come with a cost. Some solutions could further change the nature of the internet because surveillance will rise; the state may regulate debate; and these changes will polarize people and limit access to information and free speech.”

Respondents also expressed concern that increased regulation of online spaces could result in surveillance and censorship. They also worried that people would begin to change their positive online behaviors as surveillance increases.

Rebecca MacKinnon, director of the Ranking Digital Rights project at the New America Foundation, said she’s worried about the state of free speech online:

“The demands for governments and companies to censor and monitor internet users are coming from an increasingly diverse set of actors with very legitimate concerns about safety and security, as well as concerns about whether civil discourse is becoming so poisoned as to make rational governance based on actual facts impossible. I’m increasingly inclined to think that the solutions, if they ever come about, will be human/social/political/cultural and not technical.”

Queensland University of Technology professor Marcus Foth warned that the increased regulation of online speech could result in polarization and filter bubbles:

“With less anonymity and less diversity, the two biggest problems of the Web 1.0 era have been solved from a commercial perspective: fewer trolls who can hide behind anonymity. Yet what are we losing in the process? Algorithmic culture creates filter bubbles, which risk an opinion polarization inside echo chambers.”

The full report, in which you are certain to find at least one opinion you agree with, is available here.
