June 5, 2019, 12:36 p.m.
LINK: youtube.googleblog.com | Posted by: Joshua Benton

YouTube has a post up on its official blog today entitled “Our ongoing work to tackle hate,” and “ongoing” is pretty well earned there; as the post notes, YouTube “made more than 30 policy updates” in 2018. The world’s most popular video site has gotten (mostly deserved) blowback from all angles as more people have come to realize its power for algorithmic radicalization and its role as a community builder for all the wrong people.

Creating coherent systemwide rules around hate speech and related subjects is legitimately hard, but today’s update would seem to cut through the complexity of at least one slice of it: No more Nazis. No more white supremacists. No more Sandy Hook truthers. No more Holocaust deniers.

Today, we’re taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status. This would include, for example, videos that promote or glorify Nazi ideology, which is inherently discriminatory. Finally, we will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.
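To make the bright-line part of this concrete: a policy like the one quoted above could, in principle, be reduced to a label check over classifier output. The sketch below is purely hypothetical; the label names, attribute list, and event list are assumptions for illustration, not YouTube’s actual enforcement system.

```python
# Hypothetical sketch -- not YouTube's actual moderation system.
# Encodes the two bright-line rules quoted above as a check over
# (assumed) classifier labels for a video.

PROTECTED_ATTRIBUTES = {
    "age", "gender", "race", "caste", "religion",
    "sexual_orientation", "veteran_status",
}

# Illustrative entries for "well-documented violent events."
WELL_DOCUMENTED_EVENTS = {"holocaust", "sandy_hook_shooting"}

def violates_hate_policy(labels: dict) -> bool:
    """Return True if a video's (hypothetical) labels hit either rule."""
    # Rule 1: alleging group superiority to justify discrimination,
    # segregation, or exclusion based on a protected attribute.
    if labels.get("alleges_group_superiority") and (
        set(labels.get("targeted_attributes", [])) & PROTECTED_ATTRIBUTES
    ):
        return True
    # Rule 2: denying that a well-documented violent event took place.
    if labels.get("denied_event") in WELL_DOCUMENTED_EVENTS:
        return True
    return False

# Example: a video promoting racial-superiority ideology is flagged.
assert violates_hate_policy({
    "alleges_group_superiority": True,
    "targeted_attributes": ["race"],
})
```

The hard part in practice, of course, is producing those labels reliably at YouTube’s scale; the rule itself is the easy half.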

Along with those fairly hard-and-fast rules, YouTube also says it wants to tackle the algorithmic element of the problem — the fact that its video recommendations, guided by past user engagement, often push users from relatively mainstream videos out to the darkest fringes.

In January, we piloted an update of our systems in the U.S. to limit recommendations of borderline content and harmful misinformation, such as videos promoting a phony miracle cure for a serious illness, or claiming the earth is flat. We’re looking to bring this updated system to more countries by the end of 2019. Thanks to this change, the number of views this type of content gets from recommendations has dropped by over 50% in the U.S. Our systems are also getting smarter about what types of videos should get this treatment, and we’ll be able to apply it to even more borderline videos moving forward. As we do this, we’ll also start raising up more authoritative content in recommendations, building on the changes we made to news last year. For example, if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the “watch next” panel.
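A minimal sketch of the two levers described in that excerpt, a borderline-content demotion and an authoritative-source boost in the “watch next” ranking, might look like the following. Everything here (field names, threshold, boost value) is an assumption for illustration; YouTube has not published its ranking code.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    engagement_score: float  # predicted engagement (the older signal)
    borderline_score: float  # 0..1 from an assumed borderline classifier
    authoritative: bool      # e.g., flagged as a top news channel

BORDERLINE_THRESHOLD = 0.8  # assumed cutoff for recommendation eligibility
AUTHORITY_BOOST = 0.25      # assumed boost when the current video is borderline

def rank_watch_next(candidates: list[Candidate],
                    current_is_borderline: bool) -> list[Candidate]:
    """Rank 'watch next' candidates: drop borderline videos outright,
    and when the current video comes close to violating policy,
    favor authoritative sources."""
    eligible = [c for c in candidates
                if c.borderline_score < BORDERLINE_THRESHOLD]

    def score(c: Candidate) -> float:
        s = c.engagement_score
        if current_is_borderline and c.authoritative:
            s += AUTHORITY_BOOST
        return s

    return sorted(eligible, key=score, reverse=True)
```

The point of the sketch is the shape of the intervention: the engagement signal still drives ranking, but a classifier score gates what is eligible at all, and a source-quality flag reweights what surfaces next to near-the-line videos.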

This set of changes seems to echo Facebook’s declaration earlier this year that it would ban “praise, support and representation of white nationalism and white separatism on Facebook and Instagram…It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.” That move went beyond the traditional tech-company stance that a piece of content had to directly threaten violence or break some other specific policy before it could be removed: Facebook was saying for the first time that white nationalist content qualified for removal on its own, even without specific threats. Google, with today’s announcement, seems to be doing much the same.

It was a terrible PR morning for YouTube (“YouTube decides that homophobic harassment does not violate its policies” is not a headline you want to see about your company in Pride Month), but perhaps the afternoon can be better.
