LINK: www.bbc.co.uk   |   Posted by: Sarah Scire   |   Aug. 21, 2020, 1:28 p.m.

BBC Sport has a message for its 8.5 million followers on Twitter. The pinned tweet is part promise, part mission statement.

The BBC exists for all of us, so it should represent all of us.

That means BBC Sport covers a wide range of sports and stories. But, as we do that, our comments sections on social media can often attract hateful messages. We want our platforms to be a respectful place for discussion, constructive criticism, debate and opinion. We know the vast majority of you – our 33 million social media followers – want that too.

So here’s what we’re doing:

  • We will block people bringing hate to our comments sections;
  • We will report the most serious cases to the relevant authorities;
  • We will work to make our accounts kind and respectful places;
  • We will keep growing our coverage of women’s sports, and keep covering issues and discussions around equality in sport.

We also want your help.

If you see a reply to BBC Sport posts with an expression of hate on the basis of race, colour, gender, nationality, ethnicity, disability, religion, sexuality, sex, age or class please flag the URL to the post in question by emailing socialmoderation.sport@bbc.co.uk

Hate won’t stop us in our goal of representing all of us. Together we will strive to make our social media accounts a safe space for everyone.

The new rules, which apply across the public broadcaster’s various social media accounts, follow a new survey showing that nearly a third of elite British sportswomen have been subjected to abuse on social media, double the percentage reported in 2015. Racism, misogyny, and death threats were “sadly common,” according to the results.

BBC Sport producer Caroline Chapman explained the policy’s genesis to Everything in Moderation, a weekly newsletter from freelance journalist Ben Whitelaw about — you guessed it — moderating content online.

I’ve worked as a producer on the social media team for a couple of years now and while there has always been a certain amount of negativity directed towards certain subjects (mainly women’s sport), we were seeing it more and more across all our platforms and hateful comments were also appearing frequently on any post to do with race, LGBTQ+ and equality issues.
There had been a few occasions where a couple of blue tick accounts on Twitter had rightly called us out for seemingly not taking action on these comments. When the BBC Sport website surveyed over 500 elite British sportswomen, 30% said they had been trolled online. I didn’t feel like we could report on this stat and not do something to try and help the situation, so I approached BBC Sport’s editor with a plan for how we could practically tackle the issue.

BBC Sport has more than 33 million followers across its Twitter, Facebook, and Instagram accounts, and Chapman said its comments sections typically see more than 15,000 comments per day.

That volume can make moderation tricky — especially on Twitter, the platform with “the most negativity,” according to Chapman.

The BBC’s moderation services department have access to our Facebook and Instagram accounts, and will largely hide/delete/block anything which overtly breaks our guidelines. But for technical reasons they can’t moderate Twitter for us, and it’s on this platform that we find the most negativity. Since we introduced our new stance, it has been the job of the daily producer to perform regular moderation checks on Twitter, as well as keeping across our new inbox where users can flag comments themselves. The producers also keep an eye on certain stories on the other platforms – the stories we know are likely to be a target for trolls.

You can read the BBC’s message to “social media trolls” or take a look at the full results of the survey of British sportswomen.

 
