Nieman Foundation at Harvard
LINK: mediaengagement.org  ➚   |   Posted by: Hanaa' Tameez   |   June 17, 2021

News consumers and social media platform users prioritize the removal of hate speech over the removal of profanity, according to a new study by the Center for Media Engagement at the University of Texas at Austin.

The Center for Media Engagement partnered with Erasmus University in the Netherlands and NOVA University in Portugal to understand how the public perceives comment deletion and the moderators who do it in the United States, the Netherlands, and Portugal. They surveyed 902 people in the United States, 975 in the Netherlands, and 993 in Portugal.

Using a comment template designed to look like the popular commenting platform Disqus, researchers randomly assigned survey participants to view social media posts. Here's how they did it:

Participants were exposed to a social media post that contained either hate speech or profanity. Then they were exposed to a post from a moderator — either a human or an algorithm — that deleted the initial post because it was offensive. This message either explained specifically why the post was deleted, gave a general sense of why it was deleted with a clickable link to community guidelines for the site, or offered no explanation. Afterward, participants answered questions about how fair or legitimate the deletion was and how transparent or trustworthy the moderator was.

In all countries, survey participants thought that removing hate speech was more fair and legitimate than removing profanity, and found the moderators who did so and offered a detailed explanation of why to be more transparent. In the U.S. and the Netherlands, participants found moderators removing hate speech to be more trustworthy than those who removed profanity. While the method of removal — either by a human or an algorithm — did not impact how Americans and Dutch people felt about content removal, in Portugal, “participants perceived deletions by human moderators as more fair and legitimate compared to deletions by algorithms.”

The Center recommends the following ways to use the findings from the survey:

  • Moderators should focus more on hate speech, because people see hate speech as more in need of deletion than profanity.
  • Moderators should explain specifically why content was removed, rather than offer general explanations.
  • Algorithmic moderators may be perceived similarly to human moderators, although specific cultural contexts should be considered because this may not be the case in every country.

Read the full report here.
