April 11, 2016, 12:03 p.m.
Audience & Social
Posted by: Laura Hazard Owen | LINK: www.theguardian.com

As most people — and most women on Twitter — know, the Internet can be an ugly and abusive place. The Guardian, which receives more than 50,000 reader comments a day, is taking steps to help change that with “The Web We Want,” a series it kicked off on Monday. As part of the series, the paper will publish its “own analysis of abuse on our site,” it said in an editorial. “Other platforms that have been reticent until now must follow suit,” it added.

From the editorial:

For the great bulk of our readers, and — yes — to respect the wellbeing of our staff too, we need to take a more proactive stance on what kind of material appears on the Guardian.

In an article on Friday, Mary Hamilton, The Guardian’s executive editor for audience, outlined some of the steps The Guardian plans to take to address the problem. (It already employs community moderators and publishes participation guidelines on its site.)

Plenty of news organizations have removed their comments sections, “deciding that the costs outweigh the benefits, and turning to other modes of interaction instead,” she wrote. “The Guardian is not making that decision — but that means we do have to evolve and manage our comments deliberately.”

Here are some of the steps The Guardian says it will take:

We are going to be implementing policies and procedures to protect our staff from the impact of abuse and harassment online, as well as from the impact of repeatedly being exposed to traumatic images. We are also changing the process for new commenters, so that they see our community guidelines and are welcomed to the Guardian’s commenting community. On that point, we are reviewing those community standards to make them much clearer and simpler to understand, and to place a greater emphasis on respect.

We are also looking at how our moderation processes and practices work. We have already changed the structure of the moderation team to give them greater visibility and authority within the Guardian, and we are streamlining the process of reviewing moderation decisions for consistency and other factors. We’re examining our off-topic policy and will be moving to make its application more transparent.

And we are working to make our comment spaces more welcoming and more connected with our editorial work. We’re trialling different ways for journalists to be involved in conversations that can sometimes be overwhelming purely because of the volume of comments, and we’re working to make sure that we open comments and encourage conversation only where it can be well managed — and where we can listen. We have started digging deep into the data we have on how users behave in our comment threads, and will be publishing some preliminary findings next week as part of our series.
