Nieman Lab
Oct. 25, 2021, 2 p.m.

In the ocean’s worth of new Facebook revelations out today, here are some of the most important drops

“It will be a flash in the pan. Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

There’s still another month remaining in the Atlantic hurricane season, and over the past few days, a powerful storm developed — one with the potential to bring devastating damage.

The pattern was familiar: a distant rumbling in some faraway locale; a warning of its potential power and path; the first early rain bands; days of tracking; frantic movements; and finally the pummeling tempest slamming into landfall.

I’m talking, of course, about Facebook. (And if any of you jackals want to point out that Facebook should be more subject to the Pacific hurricane season, I’ll note that the storm is coming overwhelmingly from the other coast.)

A Nieman Lab analysis I just did in my head has found there are as many as 5.37 gazillion new stories out today about Facebook’s various misdeeds, almost all of them based in one way or another on the internal documents leaked by company whistleblower Frances Haugen. Haugen first began leaking the documents to reporters at The Wall Street Journal for a series of stories that began last month. Then came 60 Minutes, then congressional testimony, then the SEC, and finally a quasi-consortium of some of the biggest news organizations in America.

(Actually, cut that “finally”: Haugen is at the moment in London testifying before the U.K. Parliament about the documents, with a grand tour of European capitals to follow.)

It is, a Nieman Lab investigation can also confirm, a lot to take in. Protocol is doing its best to keep track of all the new stories that came off embargo today (though some began to dribble out Friday). As of this typing, their list is up to 40 consortium pieces, including work from AP, Bloomberg, CNBC, CNN, NBC News, Politico, Reuters, The Atlantic, the FT, The New York Times, The Verge, The Wall Street Journal, The Washington Post, and Wired. (For those keeping score at home, Politico leads with six stories, followed by Bloomberg with five and AP and CNN with four each.) And that doesn’t even count reporters tweeting things out directly from the leak. I read through ~all of them and here are some of the high(low?)lights — all emphases mine.

Facebook’s role in the January 6 Capitol riot was bigger than it’d like you to believe.

From The Washington Post:

Relief flowed through Facebook in the days after the 2020 presidential election. The company had cracked down on misinformation, foreign interference and hate speech — and employees believed they had largely succeeded in limiting problems that, four years earlier, had brought on perhaps the most serious crisis in Facebook’s scandal-plagued history.

“It was like we could take a victory lap,” said a former employee, one of many who spoke for this story on the condition of anonymity to describe sensitive matters. “There was a lot of the feeling of high-fiving in the office.”

Many who had worked on the election, exhausted from months of unrelenting toil, took leaves of absence or moved on to other jobs. Facebook rolled back many of the dozens of election-season measures that it had used to suppress hateful, deceptive content. A ban the company had imposed on the original Stop the Steal group stopped short of addressing dozens of look-alikes that popped up in what an internal Facebook after-action report called “coordinated” and “meteoric” growth. Meanwhile, the company’s Civic Integrity team was largely disbanded by a management that had grown weary of the team’s criticisms of the company, according to former employees.

“This is not a new problem,” one unnamed employee fumed on Workplace on Jan. 6. “We have been watching this behavior from politicians like Trump, and the — at best — wishy washy actions of company leadership, for years now. We have been reading the [farewell] posts from trusted, experienced and loved colleagues who write that they simply cannot conscience working for a company that does not do more to mitigate the negative effects on its platform.”

A company after-action report concluded that in the weeks after the election, Facebook did not act forcefully enough against the Stop the Steal movement that was pushed by Trump’s political allies, even as its presence exploded across the platform.

The documents also provide ample evidence that the company’s internal research over several years had identified ways to diminish the spread of political polarization, conspiracy theories and incitements to violence but that in many instances, executives had declined to implement those steps.

Facebook was indeed well aware of how potent a tool for radicalization it can be. From NBC News:

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

From CNN:

“Almost all of the fastest growing FB Groups were Stop the Steal during their peak growth,” the analysis says. “Because we were looking at each entity individually, rather than as a cohesive movement, we were only able to take down individual Groups and Pages once they exceeded a violation threshold. We were not able to act on simple objects like posts and comments because they individually tended not to violate, even if they were surrounded by hate, violence, and misinformation.”

This approach did eventually change, according to the analysis — after it was too late.

“After the Capitol insurrection and a wave of Storm the Capitol events across the country, we realized that the individual delegitimizing Groups, Pages, and slogans did constitute a cohesive movement,” the analysis says.

When Facebook executives posted messages publicly and internally condemning the riot, some employees pushed back, even suggesting Facebook might have had some culpability.

“There were dozens of Stop the Steal groups active up until yesterday, and I doubt they minced words about their intentions,” one employee wrote in response to a post from Mike Schroepfer, Facebook’s chief technology officer.

Another wrote, “All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence? We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control.”

Other Facebook employees went further, claiming decisions by company leadership over the years had helped create the conditions that paved the way for an attack on the US Capitol.

Responding to Schroepfer’s post, one staffer wrote that, “leadership overrides research based policy decisions to better serve people like the groups inciting violence today. Rank and file workers have done their part to identify changes to improve our platforms but have been actively held back.”

One important source of political agitation: SUMAs. From Politico:

Facebook has known for years about a major source of political vitriol and violent content on its platform and done little about it: individual people who use small collections of accounts to broadcast reams of incendiary posts.

Meet SUMAs: a smattering of accounts run by a single person using their real identity, known internally at Facebook as Single User Multiple Accounts. And a significant swath of them spread so many divisive political posts that they’ve mushroomed into a massive source of the platform’s toxic politics, according to internal company documents and interviews with former employees.

While plenty of SUMAs are harmless, Facebook employees for years have flagged many such accounts as purveyors of dangerous political activity. Yet, the company has failed to crack down on SUMAs in any comprehensive way, the documents show. That’s despite the fact that operating multiple accounts violates Facebook’s community guidelines.

Company research from March 2018 said accounts that could be SUMAs were reaching about 11 million viewers daily, or about 14 percent of the total U.S. political audience. During the week of March 4, 2018, 1.6 million SUMA accounts made political posts that reached U.S. users.

Through it all, Facebook has retained its existential need to be seen as nonpartisan — seen being the key word there, since perception and reality often don’t align when it comes to the company. From The Washington Post:

Ahead of the 2020 U.S. election, Facebook built a “voting information center” that promoted factual information about how to register to vote or sign up to be a poll worker. Teams at WhatsApp wanted to create a version of it in Spanish, pushing the information proactively through a chat bot or embedded link to millions of marginalized voters who communicate regularly through WhatsApp. But Zuckerberg raised objections to the idea, saying it was not “politically neutral,” or could make the company appear partisan, according to a person familiar with the project who spoke on the condition of anonymity to discuss internal matters, as well as documents reviewed by The Post.

(Will you allow me a brief aside to highlight some chef’s-kiss PR talk?)

This related Post story from Friday includes not one, not two, but three of the most remarkable non-denial denials I’ve read recently, all from Facebook PR. Lots of chest-puffing without ever actually saying “Your factual claim is false”:

As the company sought to quell the political controversy during a critical period in 2017, Facebook communications official Tucker Bounds allegedly said, according to the affidavit, “It will be a flash in the pan. Some legislators will get pissy. And then in a few weeks they will move onto something else. Meanwhile we are printing money in the basement, and we are fine.”

Bounds, now a vice president of communications, said in a statement to The Post, ❶ “Being asked about a purported one-on-one conversation four years ago with a faceless person, with no other sourcing than the empty accusation itself, is a first for me.”

Facebook spokeswoman Erin McPike said in a statement, ❷ “This is beneath the Washington Post, which during the last five years competed ferociously with the New York Times over the number of corroborating sources its reporters could find for single anecdotes in deeply reported, intricate stories. It sets a dangerous precedent to hang an entire story on a single source making a wide range of claims without any apparent corroboration.”

The whistleblower told The Post of an occasion in which Facebook’s Public Policy team, led by former Bush administration official Joel Kaplan, defended a “white list” that exempted Trump-aligned Breitbart News, run then by former White House strategist Stephen K. Bannon, and other select publishers from Facebook’s ordinary rules against spreading false news reports.

When a person in the video conference questioned this policy, Kaplan, the vice president of global policy, responded by saying, “Do you want to start a fight with Steve Bannon?” according to the whistleblower in The Post interview.

Kaplan, who has been criticized by former Facebook employees in previous stories in The Post and other news organizations for allegedly seeking to protect conservative interests, said in a statement to The Post, ❸ “No matter how many times these same stories are repurposed and re-told, the facts remain the same. I have consistently pushed for fair treatment of all publishers, irrespective of ideological viewpoint, and advised that analytical and methodological rigor is especially important when it comes to algorithmic changes.”

If you think Facebook does a bad job moderating content here, it’s worse almost everywhere else.

This was a major theme in stories across outlets. The New York Times:

On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India.

For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.

The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote.

“The test user’s News Feed has become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”

With 340 million people using Facebook’s various social media platforms, India is the company’s largest market. And Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.

Eighty-seven percent of the company’s global budget for time spent on classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world — even though North American users make up only 10 percent of the social network’s daily active users, according to one document describing Facebook’s allocation of resources.

From Politico:

In late 2020, Facebook researchers came to a sobering conclusion. The company’s efforts to curb hate speech in the Arab world were not working. In a 59-page memo circulated internally just before New Year’s Eve, engineers detailed the grim numbers.

Only six percent of Arabic-language hate content was detected on Instagram before it made its way onto the photo-sharing platform owned by Facebook. That compared to a 40 percent takedown rate on Facebook.

Ads attacking women and the LGBTQ community were rarely flagged for removal in the Middle East. In a related survey, Egyptian users told the company they were scared of posting political views on the platform out of fear of being arrested or attacked online.

In Iraq, where violent clashes between Sunni and Shia militias were quickly worsening an already politically fragile country, so-called “cyber armies” battled it out by posting profane and outlawed material, including child nudity, on each other’s Facebook pages in efforts to remove rivals from the global platform.

From the AP:

An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.

In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

(Facebook generated $85.9 billion in revenue last year, with a profit margin of 38%.)

For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.

Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.

He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.

Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.

But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.

From CNN:

Facebook employees repeatedly sounded the alarm on the company’s failure to curb the spread of posts inciting violence in “at risk” countries like Ethiopia, where a civil war has raged for the past year, internal documents seen by CNN show…

They show employees warning managers about how Facebook was being used by “problematic actors,” including states and foreign organizations, to spread hate speech and content inciting violence in Ethiopia and other developing countries, where its user base is large and growing. Facebook estimates it has 1.84 billion daily active users — 72% of which are outside North America and Europe, according to its annual SEC filing for 2020.

The documents also indicate that the company has, in many cases, failed to adequately scale up staff or add local language resources to protect people in these places.

So which are the countries Facebook does care about, if “care” is not a horribly misused term here? From The Verge:

In a move that has become standard at the company, Facebook had sorted the world’s countries into tiers.

Brazil, India, and the United States were placed in “tier zero,” the highest priority. Facebook set up “war rooms” to monitor the network continuously. They created dashboards to analyze network activity and alerted local election officials to any problems.

Germany, Indonesia, Iran, Israel, and Italy were placed in tier one. They would be given similar resources, minus some resources for enforcement of Facebook’s rules and for alerts outside the period directly around the election.

In tier two, 22 countries were added. They would have to go without the war rooms, which Facebook also calls “enhanced operations centers.”

The rest of the world was placed into tier three. Facebook would review election-related material if it was escalated to them by content moderators. Otherwise, it would not intervene.

“Tier Three” must be the new “Third World.”

The kids fled Facebook long ago, but now they’re fleeing Instagram too.

Also: “Most [young adults] perceive Facebook as place for people in their 40s or 50s…perceive content as boring, misleading, and negative…perceive Facebook as less relevant and spending time on it as unproductive…have a wide range of negative associations with Facebook including privacy concerns, impact to their wellbeing, along with low awareness of relevant services.” Otherwise, they love it.

From The Verge:

Earlier this year, a researcher at Facebook shared some alarming statistics with colleagues.

Teenage users of the Facebook app in the US had declined by 13 percent since 2019 and were projected to drop 45 percent over the next two years, driving an overall decline in daily users in the company’s most lucrative ad market. Young adults between the ages of 20 and 30 were expected to decline by 4 percent during the same time frame. Making matters worse, the younger a user was, the less on average they regularly engaged with the app. The message was clear: Facebook was losing traction with younger generations fast.

Facebook’s struggle to attract users under the age of 30 has been ongoing for years, dating back to as early as 2012. But according to the documents, the problem has grown more severe recently. And the stakes are high. While it famously started as a networking site for college students, employees have predicted that the aging up of the app’s audience — now nearly 2 billion daily users — has the potential to further alienate young people, cutting off future generations and putting a ceiling on future growth.

The problem explains why the company has taken such a keen interest in courting young people and even pre-teens to its main app and Instagram, spinning up dedicated youth teams to cater to them. In 2017, it debuted a standalone Messenger app for kids, and its plans for a version of Instagram for kids were recently shelved after lawmakers decried the initiative.

Instagram was doing better with young people, with full saturation in the US, France, the UK, Japan, and Australia. But there was still cause for concern. Posting by teens had dropped 13 percent from 2020 and “remains the most concerning trend,” the researchers noted, adding that the increased use of TikTok by teens meant that “we are likely losing our total share of time.”

Apple was close to banning Facebook and Instagram from the App Store because of how it was being used for human trafficking.

From CNN:

Facebook has for years struggled to crack down on content related to what it calls domestic servitude: “a form of trafficking of people for the purpose of working inside private homes through the use of force, fraud, coercion or deception,” according to internal Facebook documents reviewed by CNN.

The company has known about human traffickers using its platforms in this way since at least 2018, the documents show. It got so bad that in 2019, Apple threatened to pull Facebook and Instagram’s access to the App Store, a platform the social media giant relies on to reach hundreds of millions of users each year. Internally, Facebook employees rushed to take down problematic content and make emergency policy changes to avoid what they described as a “potentially severe” consequence for the business.

But while Facebook managed to assuage Apple’s concerns at the time and avoid removal from the app store, issues persist. The stakes are significant: Facebook documents describe women trafficked in this way being subjected to physical and sexual abuse, being deprived of food and pay, and having their travel documents confiscated so they can’t escape. Earlier this year, an internal Facebook report noted that “gaps still exist in our detection of on-platform entities engaged in domestic servitude” and detailed how the company’s platforms are used to recruit, buy and sell what Facebook’s documents call “domestic servants.”

Last week, using search terms listed in Facebook’s internal research on the subject, CNN located active Instagram accounts purporting to offer domestic workers for sale, similar to accounts that Facebook researchers had flagged and removed. Facebook removed the accounts and posts after CNN asked about them, and spokesperson Andy Stone confirmed that they violated its policies.

And from AP:

After publicly promising to crack down, Facebook acknowledged in internal documents obtained by The Associated Press that it was “under-enforcing on confirmed abusive activity” that saw Filipina maids complaining on the social media site of being abused. Apple relented and Facebook and Instagram remained in the app store.

But Facebook’s crackdown seems to have had a limited effect. Even today, a quick search for “khadima,” or “maids” in Arabic, will bring up accounts featuring posed photographs of Africans and South Asians with ages and prices listed next to their images. That’s even as the Philippines government has a team of workers that do nothing but scour Facebook posts each day to try and protect desperate job seekers from criminal gangs and unscrupulous recruiters using the site.

If you see an antitrust regulator smiling today, this is why.

From Politico:

Facebook likes to portray itself as a social media giant under siege — locked in fierce competition with rivals like YouTube, TikTok and Snapchat, and far from the all-powerful goliath that government antitrust enforcers portray.

But internal documents show that the company knows it dominates the arenas it considers central to its fortunes.

Previously unpublished reports and presentations collected by Facebook whistleblower Frances Haugen show in granular detail how the world’s largest social network views its power in the market, at a moment when it faces growing pressure from governments in the U.S., Europe and elsewhere. The documents portray Facebook employees touting its dominance in their internal presentations — contradicting the company’s own public assertions and providing potential fuel for antitrust authorities and lawmakers scrutinizing the social network’s sway over the market.

And, of course, the Ben Smith meta-media look at it all.

Frances Haugen first met Jeff Horwitz, a tech-industry reporter for The Wall Street Journal, early last December on a hiking trail near the Chabot Space & Science Center in Oakland, Calif.

She liked that he seemed thoughtful, and she liked that he’d written about Facebook’s role in transmitting violent Hindu nationalism in India, a particular interest of hers. She also got the impression that he would support her as a person, rather than as a mere source who could supply him with the inside information she had picked up during her nearly two years as a product manager at Facebook.

“I auditioned Jeff for a while,” Ms. Haugen told me in a phone interview from her home in Puerto Rico, “and one of the reasons I went with him is that he was less sensationalistic than other choices I could have made.”

In the last two weeks [the news organizations] have gathered on the messaging app Slack to coordinate their plans — and the name of their Slack group, chosen by [beloved former Nieman Labber] Adrienne LaFrance, the executive editor of The Atlantic, suggests their ambivalence: “Apparently We’re a Consortium Now.”

Inside the Slack group, whose messages were shared with me by a participant, members have reflected on the strangeness of working, however tangentially, with competitors. (I didn’t speak to any Times participants about the Slack messages.)

“This is the weirdest thing I have ever been part of, reporting-wise,” wrote Alex Heath, a tech reporter for The Verge.

Original image of Hurricane Ida by NASA and Mark Zuckerberg drawing by Paul Chung used under a Creative Commons license.

Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email (joshua_benton@harvard.edu) or Twitter DM (@jbenton).