Sept. 9, 2020, 10:33 a.m.

Should the government use Section 230 to force the tech giants into paying for the news?

A new paper argues that the “26 words that created the internet” should remain in force — but only for companies that agree to certain new regulations and restrictions.

I would assume that “extort” isn’t the word the author of a new report on Section 230 would prefer. Maybe “no carrot, all stick”? He instead uses the much nicer phrase “leverage to persuade platforms to accept a range of new responsibilities.” But either way, the pitch is pretty similar:

Hey, tech companies — that’s a nice bit of protection from legal liability you’ve got there. It would be a real shame if anything were to happen to it. Maybe if you cut a big enough check every month, we could make sure it stays nice and secure.

For those unfamiliar, Section 230 of the Communications Decency Act of 1996 is the bedrock law that allowed for the evolution of the digital world we know today, for better and for worse — the “twenty-six words that created the internet.” It says that, when it comes to legal liability, websites should be treated more like newsstands than like publishers.

In the print world, if the Daily Gazette prints something libelous about someone, that person can sue the newspaper. But they generally can’t sue the Barnes & Noble where copies of the Gazette were being sold. The thinking is that if you made a newsstand legally liable for the content of every single newspaper and magazine and book it sells…that’d be a pretty strong incentive to get out of the newsstand business. And even if a newsstand stayed open, it would likely become more boring, culling the publications it carries to a few middle-of-the-road options in order to limit its liability.

Section 230 says that an “interactive computer service” would not be “treated as the publisher or speaker of any information” that is provided by a third party — like one of its users posting a comment or sharing a link. So if you post defamatory material about your neighbor on Facebook, you are legally liable for it — but Facebook isn’t. And indeed, it would be hard for Facebook, Twitter, Google, or any other sort of digital service provider to exist in their current forms without Section 230. If they were all legally responsible for everything on their platforms, it’d be hard to imagine they’d let random users publish on them.

Section 230 also allows sites to moderate legal content without (generally) being open to litigation. In America, it’s perfectly legal to be a Nazi and to say pro-Nazi things — but if YouTube removes a pro-Nazi video, the Nazi can’t sue claiming his First Amendment rights have been violated.

There has been a lot of political hubbub about Section 230 of late; both Donald Trump and Joe Biden say they want to revoke it. Trump sees it as protecting dastardly social media companies that target conservatives and try to fact-check his tweets. Biden sees it as protecting dastardly social media companies that amplify Trump’s falsehoods and extremist content.

Into this debate comes this new paper, by former Businessweek journalist Paul Barrett, now deputy director of NYU Stern’s Center for Business and Human Rights. It’s titled “Regulating Social Media: The Fight Over Section 230 — and Beyond.” It’s a good and valuable contribution, with excellent background summaries of various points of view and filled with good ideas…and one not-as-good one I’m going to complain about for a bit.

Barrett argues for a three-step approach:

1. Keep Section 230

The law has helped online platforms thrive by protecting them from most liability related to third-party posts and by encouraging active content moderation. It has been especially valuable to smaller platforms with modest legal budgets. But the benefit Section 230 confers ought to come with a price tag: the assumption of greater responsibility for curbing harmful content.

2. Improve Section 230

The measure should be amended so that its liability shield provides leverage to persuade platforms to accept a range of new responsibilities related to policing content. Internet companies may reject these responsibilities, but in doing so they would forfeit Section 230’s protection, open themselves to costly litigation, and risk widespread opprobrium.

3. Create a Digital Regulatory Agency

There’s a crisis of trust in the major platforms’ ability and willingness to superintend their sites. Creation of a new independent digital oversight authority should be part of the response. While avoiding direct involvement in decisions about content, the agency would enforce the responsibilities required by a revised Section 230.

So the threat of opening up massive legal liability should be used as “leverage to persuade platforms to accept a range of new responsibilities related to policing content” — to turn it into “a quid pro quo benefit.” What could those responsibilities be? The paper offers a few ideas.

One, which has been considered in the U.K. as part of that country’s debate over proposed online-harm legislation, would “require platform companies to ensure that their algorithms do not skew toward extreme and unreliable material to boost user engagement.”

Under a second, platforms would disclose data on what content is being promoted and to whom, on the process and policies of content moderation, and on advertising practices.

Platforms also could be obliged to devote a small percentage of their annual revenue to a fund supporting the struggling field of accountability journalism. This last notion would constitute a partial pay-back for the fortune in advertising dollars the social media industry has diverted from traditional news media.

I like the idea of the tech giants giving money to journalism as much as anyone. And I have no particular objection to items 1 and 3 on the paper’s to-do list. But I have to say No. 2 — making liability protection contingent on accepting other, sometimes only tangentially related policy proposals — bugs me. A few reasons:

  • Any of these ideas could become law without getting Section 230 involved. If Congress wants to tax Facebook and Google and use the proceeds to fund journalism, it can…just do that. If it believes requiring disclosure of algorithms and transparency in moderation policies are good ideas, it can pass laws to do so. If a company refuses, fine them or sue them to make them change. There’s no need to tie even the most sensible or well-intended regulations to the legal protection that has basically allowed the Internet to exist.

    Speaking of which…

  • Section 230 protects every website, not just Facebook and other giants. Every blog, every personal website, every online forum, every chat app, every app where people review restaurants or books or gadgets — they’re all able to function the way they do because of 230, which covers “interactive computer services,” not just giant social media companies.

    Why can news sites publish reader comments? Because of Section 230. Imagine if your favorite news outlet was suddenly liable for potentially massive damages because some rando posted “John Q. Doe is a child molester!” under one of its stories. What would an outlet in that situation likely do? Kill off the comments or any other kind of public input that increases liability.

    Which is why…

  • Incentivizing regulation-via-lawsuit is a bad way to encourage good behavior. We’ve already seen, with cases like Peter Thiel killing Gawker and an Idaho billionaire targeting Mother Jones, that litigation is a powerful tool for the uber-rich to go after news sources they don’t like. Even if the suits don’t have merit, they can easily soak millions of dollars out of news companies with thin margins. Removing Section 230 protections would mean, for example, that a politician could sue a local gadfly blogger over a reader’s comment, whether the blogger had moderated it or not.

    In other words…

  • A policy like this favors incumbents and the powerful. Facebook and Google have the profits to be able to deal with lawsuits. But if you’re a small upstart hoping to become the next big thing? Good luck paying lawyers the first time someone uploads a libelous twizzle or a faceplurp or a snapdat or whatever you call it. And frankly, different parts of the Internet should have different ideas about what content is allowable. What counts as “extreme” content on Facebook might not be what counts as “extreme” content on a niche forum.

    And finally…

  • Do you trust the government to get content moderation right — when the potential penalties for getting it wrong are so huge? Let’s say Congress passes a law saying that, in order to retain their legal protections, websites must “ensure that their algorithms do not skew toward extreme and unreliable material.” Okay — that would require a definition of “extreme and unreliable materials.” Whatever that definition is, it will mark a universe of acceptable speech that is smaller than what the First Amendment allows. And whatever that definition is, it will be up to the executive branch — whether via a new regulatory agency, the Department of Justice, or some other entity — to do rule-making around it and to enforce it.

    To be blunt: Would you trust the Trump administration to use that power well? This is a president who, just a few months ago, signed an executive order declaring it unacceptable that a Democratic congressman’s tweet “peddling the long-disproved Russian Collusion Hoax” was allowed to stand. The order didn’t do much, practically speaking, because an executive order can’t cancel Section 230. But if Twitter’s legal protections depended on an administration determination that it was not promoting “extreme and unreliable materials,” the scenario would be very different.

    Literally just yesterday, Trump said Twitter should not be allowed to keep up an obviously photoshopped meme of Mitch McConnell and that “Mitch must fight back and repeal Section 230, immediately. Stop biased Big Tech before they stop you!”

    Are you really confident that a Trump appointee wouldn’t read a POTUS tweet one day and then decide that allowing a #blacklivesmatter hashtag to trend is “promoting extreme content”? The experience of the past four years has not made me eager to get the government involved in defining and regulating political speech.

Barrett’s paper acknowledges many of these problems. Here’s how it described a hypothetical world where Section 230 has been repealed:

If Section 230 were swept away tomorrow, the internet would change, and on the whole, not for the better. It would slow down drastically, as platforms, websites, and blogs looked more skeptically at content posted by users, blocking more of it. Pointed political debate might get removed. Threats of litigation against internet companies would become more common, as would take-down demands, more of which would be successful, as nervous platforms and sites tried to avoid lawsuits. The internet could become a “closed, one-way street that looks more like a broadcaster or newspaper and less like the internet we know today,” writes Jeff Kosseff in his book, The Twenty-Six Words That Created the Internet.

To be fair, in a hypothetical post-Section 230 world, some people making take-down requests targeting such harmful content as bullying or defamation would be justified, as some are today. But others would try to silence corporate whistleblowers or activists seeking to build the next #MeToo or #BlackLivesMatter movement—and these efforts at squelching valuable speech would be more likely to succeed. Silicon Valley analyst Anna Wiener depicts an internet that, above all, would be thoroughly bland: “Social-media startups might fade away, along with niche political sites, birding message boards, classifieds, restaurant reviews, support-group forums, and comments sections. In their place would be a desiccated, sanitized, corporate Internet — less like an electronic frontier than a well-patrolled office park.”

I think that’s a bad world. Barrett’s paper uses this vision to support Section 230’s continued existence. But then it advocates making Section 230 exist only for some companies and not for others, for some websites and not for others — all contingent on things like whether the government thinks you’re limiting “extreme” content in the way it would like or whether you’ve paid enough into a journalism fund.

If you like those ideas, make them into laws. Don’t turn them into an obstacle course that everyone who puts content online must navigate in order to save us from that office-park Internet. Because while Trump and Biden view Section 230 as a special gift to a few trillion-dollar companies, it’s actually a gift to all of us who want a free and open and vibrant Internet. Facebook can still make plenty of money on a “desiccated, sanitized, corporate Internet” — but we’d be worse off with one.

The tech giants need greater regulation on a host of issues. But Section 230 has become a political football for all the wrong reasons. Don’t hold the legal heart of the open web hostage in the process.

Photo by Cole Wyland.

Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email (joshua_benton@harvard.edu) or Twitter DM (@jbenton).