Nieman Foundation at Harvard
June 5, 2019, 12:36 p.m.
LINK: youtube.googleblog.com  ➚   |   Posted by: Joshua Benton   |   June 5, 2019

YouTube has a post up on its official blog today entitled “Our ongoing work to tackle hate,” and “ongoing” is pretty earned there; as the post notes, YouTube “made more than 30 policy updates” in 2018. The world’s most popular video site has gotten (mostly deserved) blowback from all angles as more people have come to realize the site’s power for algorithmic radicalization and its role as a community builder for all the wrong people.

Creating coherent systemwide rules around hate speech and related subjects is legitimately hard, but today’s update would seem to cut through the complexity of at least one slice of it: No more Nazis. No more white supremacists. No more Sandy Hook truthers. No more Holocaust deniers.

Today, we’re taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status. This would include, for example, videos that promote or glorify Nazi ideology, which is inherently discriminatory. Finally, we will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.

Along with those fairly hard-and-fast rules, YouTube also says it wants to tackle the algorithmic element of the problem — the fact that its video recommendations, guided by past user engagement, often push users from relatively mainstream videos out to the darkest fringes.

In January, we piloted an update of our systems in the U.S. to limit recommendations of borderline content and harmful misinformation, such as videos promoting a phony miracle cure for a serious illness, or claiming the earth is flat. We’re looking to bring this updated system to more countries by the end of 2019. Thanks to this change, the number of views this type of content gets from recommendations has dropped by over 50% in the U.S. Our systems are also getting smarter about what types of videos should get this treatment, and we’ll be able to apply it to even more borderline videos moving forward. As we do this, we’ll also start raising up more authoritative content in recommendations, building on the changes we made to news last year. For example, if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the “watch next” panel.

This set of changes seems to echo Facebook’s declaration earlier this year that it would ban “praise, support and representation of white nationalism and white separatism on Facebook and Instagram…It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.” That was a move that went beyond the traditional tech-company stance that a piece of content had to more directly threaten violence or otherwise break a specific policy to be removed. Facebook was saying for the first time that white nationalist content qualified for removal on its own, even without specific threats. Google, with today’s announcement, seems to be doing much the same.

It was a terrible PR morning for YouTube (“YouTube decides that homophobic harassment does not violate its policies” is not a headline you want to see about your company in Pride Month), but perhaps the afternoon can be better.
