April 5, 2019, 10:59 a.m.
Audience & Social

“Terrorists use the internet in much the same way as other people.” How should tech companies deal with it?

Plus: YouTube executives ignored the platform’s “false, incendiary and toxic content” for years, and white nationalism sneaks through Facebook’s ban.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

How should tech companies regulate terrorist content? Brian Fishman, the policy director of counterterrorism at Facebook, published a paper in Texas National Security Review about the challenges that tech companies face in removing terrorist content from their platforms.

It’s important to understand how terrorists use the internet, Fishman writes.

Generally speaking, terrorists use the Internet in much the same way as other people: They send messages, coordinate with people, and share images and videos. The typology below attempts to describe terrorist behavior online in terms of the generic functions the underlying technology facilitates. So, instead of “attack planning” or “propaganda distribution,” the framework below uses terms like “content hosting” and “audience development.” Here’s why: Technology companies never build products to facilitate “attack planning,” but they do think about how to enable “secure communication.” To build a terminology bridge between the counter-terrorism and tech communities, we need language that speaks to how generic Internet functionality that is usually used for positive social purposes can be abused by bad actors.

Fishman comes up with the following: content hosting, audience development, brand control, secure communication, community maintenance, financing, and information collection and curation. In the case of audience development, for instance:

ISIL famously used Twitter for this purpose in 2014 and 2015 because the platform offered a vast audience for ISIL’s sophisticated propaganda and easy access to journalists who, in writing about that propaganda, served as inadvertent enablers. Terrorist groups think about audience development differently depending on their goals, their ideology, and their theory of victory. Although ISIL is ideologically rigid, it imagines itself as the vanguard of a vast populist movement, whereas ISIL’s ideological cousin, al-Qaeda, is less ideologically stringent but conceives of its near-term audience more narrowly. These differences influence the groups’ respective rhetoric but may also drive the type of digital platform each uses for developing its audience. Organizations like ISIL aim to recruit en masse, but smaller organizations looking to establish an elite core of actors may instead concentrate on audience development within a target population. Despite the glaring lack of studies comparing how terrorists use social media versus mass media, traditional mass media is likely still a critical method for conducting audience development. Nonetheless, new digital platforms are clearly useful to these groups…

Audience development requires utilizing platforms with an audience or active users already in place. Telegram, for example, has become a key tool for many terrorist organizations, but it is effectively only useful for brand control, community maintenance, and secure communication. It is not ideal for audience development or content hosting.

Fishman comes up with a list of questions that technology companies will face: How do you determine who is a terrorist? How do you come up with basic content standards? Should companies “allow some content from terrorist groups on their platforms in specific circumstances” — for instance, “in the form of political campaigning by groups like Hezbollah or the Milli Muslim League, or Sinn Fein during an earlier time period”? And how about user-level restrictions: Should some terrorists be completely banned from the platforms no matter what?

This approach is straightforward for notorious terrorists like Osama bin Laden but is more complicated when it comes to more obscure terrorists like, for example, members of the Kurdistan Workers’ Party. User-level restrictions also raise important practical questions. Should a prohibition extend only to leaders of a terrorist organization or to all members? How should those categories be defined and what is the evidentiary standard for determining whether someone falls into either category? Moreover, even in the best of circumstances, a company will not be able to create, or reasonably enforce, a comprehensive list of the world’s terrorists. Despite this final problem, establishing stringent restrictions at the user-level does offer a consistent standard for removing terrorist users on a given platform if the company becomes aware of them.

Nate Rosenblatt, who studies how terrorist groups use social media to recruit, urges Facebook: Let some researchers in to help you answer these questions.

“We may have been hemorrhaging money. But at least dogs riding skateboards never killed anyone.” Bloomberg’s Mark Bergen investigates how, for years, YouTube executives ignored employees’ “concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread,” in favor of maximizing engagement. “Five senior personnel who left YouTube and Google in the last two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure,” Bergen notes. Software engineers inside the company refer to it as “bad virality.” A 2016 initiative, in development for a year, would have paid video creators based on engagement, but Google CEO Sundar Pichai ultimately turned it down “because, in part, he felt it could make the filter bubble problem worse, according to two people familiar with the exchange.”

YouTube now has a policy for “borderline” content that doesn’t technically violate the company’s terms of service: The content can stay up, but it’s demonetized and not recommended to viewers. Motherboard’s Ben Makuch reported this week that such borderline content includes “white nationalist and neo-Nazi propaganda videos,” and it remains findable via search.

YouTube asked Motherboard to forward links to several neo-Nazi videos. The company confirmed that copies of the neo-Nazi Radio Wehrwolf show and copies of Siege are still streaming on its platform. Unlike Facebook, the company continues to allow the white nationalist podcast to use its services. YouTube told Motherboard that it didn’t take down the content sent to it by Motherboard, but that it demonetized the videos, removed comments and likes, and placed them behind a warning message.

“A discussion about immigration and ethnicity statistics.” Facebook banned white nationalism and white separatism on its platform last week, but, predictably, white nationalist content is still popping up there — this week, a video from Canadian white nationalist Faith Goldy. When HuffPost’s Andy Campbell showed this video to Facebook, a company spokesperson said it didn’t violate the policy:

As images of white women flash across the screen, Goldy calls on her viewers to “help us stop this race from vanishing,” asking, “Will you just walk away?” She even appears to troll Facebook’s policy, sarcastically calling herself a “staunch black nationalist” in the caption of her video and featuring a photo of a woman throwing up the OK sign, here a blatant wink to white supremacists.

Goldy’s racist propaganda would seem to represent the exact kind of content that should get someone banned under the new rules. But shown the video above, the Facebook spokesperson argued that it doesn’t promote or praise white nationalism. Instead, the spokesperson claimed, it offers a discussion about immigration and ethnicity statistics.

(You may remember Goldy as the fringe Toronto mayoral candidate endorsed by “the U.S. congressman most openly affiliated with white nationalism,” Steve King.)

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).