Nieman Foundation at Harvard
April 28, 2009, 9 a.m.

Rob Bertsche on how news orgs should think about copyright and reader comments online

A couple of months ago, I posted a 20-minute video of our friend David Ardia at a newspaper conference we both spoke at in November. His topic was Section 230 of the Communications Decency Act of 1996 and the legal protection it provides to people who run web sites.

But David was just the first of two smart legal minds to speak that day. Rob Bertsche is one of New England’s top media lawyers, and he followed David’s talk with one that mixed the pragmatic and the philosophical.

The pragmatic: Rob explains the Digital Millennium Copyright Act of 1998, the law which governs allegations of copyright infringement online. Anyone who runs a news site runs the risk of someday being served a DMCA takedown notice, and Rob tells you what you’ll need to know to react properly. (More importantly, he’ll tell you what you probably need to do right now to prepare for that moment — a simple form I bet most of you haven’t filled out yet.)

The philosophical: He talks about the choices news organizations have to make around reader comments. How prescriptive should sites be in how they limit comments? What are the right levers to pull to get better comments? Even if the law lets you leave up comments you know to be false, should you? Rob points out that the laws governing the Internet in America give publishers significantly wider leeway than do laws in print — but does that mean you should take advantage of that leeway? The burden of those questions is shifting from the lawyers to the journalists.

Rob has generously shared his PowerPoint from that day, which includes a lot of information that he skipped over in his speech. It, along with a full transcript of his talk, is below.

I’m going to talk first about some aspects other than the [Communications Decency Act] that you want to be aware of as quickly as I can. And then I want to broaden the discussion a little bit and talk about some practical considerations.

What the CDA doesn’t cover is, as David [Ardia] said, what you write, what your employees write. It doesn’t cover the headline that you add. It doesn’t cover the categorization — if you decide to put certain comments in a category that you mark as truth, rumor, fiction, or something like that, you may then be buying yourself some liability, because that’s something that you are adding. And it doesn’t protect you — the CDA doesn’t give you that immunity — if in fact you’re editing the content in a way that materially changes the meaning, and Barton Carter will have a little more to say about that in a little bit.

Another thing, though, that it doesn’t cover — and it’s important to remember — is it doesn’t cover issues of infringement of copyright or trademark. So, if one of your advertisers places an ad, a camera-ready ad in your paper, and it in fact is copied from somebody else’s creative work, they’ve infringed copyright. There is a possibility, there is a potential — and I’m not going to get into all the details of it — but there’s a potential that you could be on the hook as well for contributory copyright infringement.

That’s another kind of problem, besides the libel problem that David was talking about, that could get in the way of our being able to freely allow users to post. And so Congress came up with another alphabet soup in order to keep us lawyers in business. They came up with something called the Digital Millennium Copyright Act that addresses that. It was passed in 1998 and it has some fairly detailed procedures — not complex, but they have to be followed to the letter — of notice and takedown of copyright-infringing material.

It provides a safe harbor for you and your website from a claim for damages, a claim for money awards, if you promptly remove allegedly infringing material from your site when you get a notification, and when you have followed a bunch of statutory formalities that really do have to be followed to the letter.

One of those is that you need to — you need to actually designate someone at your organization as an agent to receive the notice of copyright infringement. You gotta fill out a form. You can get it off the copyright office web site, and here’s a copy of it. Well, here’s actually the site itself, and they’ve got a discussion of the Digital Millennium Copyright Act and the designation of agent and what you need to do. You’ll see that it says the copyright office requires that you file a designation of your agent — you can click through to the amended designation.

Here’s the form that you need to fill out, and it’s a do-it-yourself form. If you want to, you can hire me to fill it out for you and I’ll charge you $350, or you can take two and a half minutes and do it yourself, and all you’ll have to pay is the $80 filing fee to the copyright office. It simply asks that you put in who is going to be the person at your organization who’s going to receive that email that says “I have copyrighted material and somehow it got on your site, and you better take it off.” And you want it to be somebody who’s going to be able to find that and know to react to it quickly, for reasons that I’ll explain in a second.

Now, I went through the other day some of the names of organizations in this room, and it’s truly astounding how few of the newspapers in this room have taken this simple step of designating an agent. This lists every single designated agent in the country from A to Z. So let’s go to one of my clients, Boston Magazine, and you’ll actually see the form that we filled out: the full legal name of the service provider, the corporate name, Metrocorp, alternate names that it goes under including all the URLs, the address, who is designated to receive the claims, address, telephone number, and most important of all, the email address, because these are all going to come in by email. Put that in and you have now taken the first step toward qualifying yourself for the immunity under the Digital Millennium Copyright Act.

[Q: If you’re a chain that has 50 or 100 different domains you just have to put in one of these forms?]

A: Yeah — frankly I’d want to look at the instructions carefully, and if you’re a NEPA member, give us a call on the hotline and we’ll help walk you through exactly how that ought to be done, because I’d want to be sure. That is my memory, though — that’s certainly the way we could do it here. You do not need to be paying $80 for every single sub-URL.

[Q: Is the attorney working with a publisher usually the designated agent, or do publishers assign that duty to themselves or someone on their staff?]

A: Very frequently they assign it to themselves or someone on their staff, because they don’t want to pay a lawyer if one of these things comes in. You do need to be aware of who you’ve designated, and if there is turnover in the organization you gotta make sure you’ve still got a good one there. You may even want to make it a sort of corporate email that’s going to travel to somebody else, so you don’t have to change the designation.

Here’s how the DMCA works. Let’s say a reader has gone to your newspaper’s web site and posted a video there that in fact is a Seinfeld episode, and they got it off of YouTube — because we know it’s all out there and everybody can get it. But they actually put it onto your site. And you get a call from the folks, whoever it is that owns the copyright in the Seinfeld program, and they say “You’d better damned well take that down.” Because they found their original creative work on your site.

That copyright owner sends a letter, an email, to your designated agent that says take it down now. You have to expeditiously remove or disable access to the material as soon as you have gotten that notice. Interesting thing here, then — as soon as somebody has made that claim, you have the responsibility to take that down. Not to investigate — to take it down.

And at the same time, you notify your user — whoever posted that material — that the material has been removed. That gives them the opportunity to file a counternotice under the statute, and say, “Hey, wait a second — I had the right to put that Seinfeld episode up. Maybe because I got express written permission, or maybe because it’s a very small snippet and we’re talking about that program, or that famous episode where the black-and-white cookie comes together at the bakery — that’s my favorite — and so it’s a fair use.”

Once the poster has come back to you with that explanation, you have to pass that along to the complaining party, and they have 14 days to essentially put up or shut up. They have 14 days in which to file a lawsuit against that poster in order to protect their rights. And if they don’t do so in those 14 days, you put that video right back up there and you do not have to continue to keep it down. So although there is an automatic takedown at the beginning, it’s subject to going right back up once the poster has made its case — unless the studio in this case, the folks behind Seinfeld, are willing to put their money where their mouth is and actually bring an action for copyright infringement.
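The notice, takedown, counternotice, and restore sequence Rob walks through can be sketched as a small state machine. This is purely an illustrative model, not legal advice: the class and method names are invented, and it uses the talk’s 14-day figure (the statute itself phrases the window in business days).

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class HostedPost:
    """Illustrative model of one user-posted item under DMCA notice-and-takedown."""
    url: str
    status: str = "up"                          # "up" or "taken_down"
    counternotice_sent: Optional[date] = None   # when the poster's counternotice was forwarded

    def receive_takedown_notice(self) -> None:
        # On notice, remove expeditiously. No duty to investigate first.
        self.status = "taken_down"

    def receive_counternotice(self, today: date) -> None:
        # Poster claims permission or fair use; you forward it to the
        # complaining party and the clock starts.
        self.counternotice_sent = today

    def review(self, today: date, lawsuit_filed: bool) -> None:
        # If the complaining party has not sued within 14 days of the
        # counternotice, the material goes back up.
        if (self.counternotice_sent is not None
                and not lawsuit_filed
                and today - self.counternotice_sent >= timedelta(days=14)):
            self.status = "up"

post = HostedPost("https://example.com/comments/123")
post.receive_takedown_notice()                      # expeditious removal
post.receive_counternotice(date(2009, 4, 1))        # poster pushes back
post.review(date(2009, 4, 15), lawsuit_filed=False)
print(post.status)  # -> up
```

The point of the sketch is the asymmetry Rob describes: the takedown is automatic on notice, but restoration is the default if the complainer won’t sue.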

Now there are certain prequalifications to being able to have this safe harbor beyond having this agent registered with the copyright office. And the other — I don’t want to spend a lot of time on this, because it starts getting a little Talmudic — but you’re not going to be under the safe harbor if you actually know or constructively know — you are aware of facts and circumstances that make it apparent — that that’s infringing material, and you are receiving some sort of financial benefit from the posting of that material.

You probably heard about the case Viacom filed against Google and YouTube — over the YouTube site including so many Seinfeld episodes, for example. Video uploads of these copyrighted programs or copyrighted films. Okay — how is that working?

Viacom has been sending a little notice to YouTube every time they notice one of these things, and then YouTube takes it down. “We got your notice, we’re going to take it down, following the procedures.” Well, you can imagine given the number of such infringing materials on the site, it becomes pretty impossible for Viacom to continue to do that on a regular basis. So they say: Look, YouTube, you’re not even qualified for that safe harbor, because you’re selling ads against the YouTube site and getting a financial benefit, and/or you are in fact aware of circumstances that make it clear to you that there is infringing content going on.

One of the issues that arises with this is the power that somebody can have when they send you that original takedown notice. We all know that entropy is the law of the world. Somebody sends you a takedown notice — that may be the end of the matter, and you, in compliance, have taken down that material from your site. And there’s some loss of information to the world as a result of that.

This is a video that was on YouTube. That’s the “Dancing Baby” video, famous in legal circles. Does anybody recognize that music that is so horribly difficult to hear in the background? It’s Prince.

So, who was it — Universal Studios, I guess, sends a takedown notice to YouTube as a result of this. They say, “You can’t play this Prince song because that’s copyrighted material.” And the poster, Ms. Lenz, says: “Oh, please. It was in the background; it was only a 20-second snippet. You can barely hear the stuff, and it’s really being used in order to illustrate how cute my little boy is and how wonderfully he jiggles his hips to this Prince song.” Clearly, they say, a fair use.

And they go further. Not only do they force the video to go back up again by actually giving that response, but they actually sue Universal and say, “You can’t do that kind of willy-nilly designation. There’s some responsibility before you call for the takedown of something to do some investigation and to think about it, and you can’t use the rights under the DMCA to take down what would otherwise be clearly protected material.”

Next subject: trademark infringement. The use of a name, a designation, an endorsement — something that in the old world, the print world, you might be liable for if one of your readers or one of your advertisers put something in your paper that used trade names in a way that would tend to show endorsement or sponsorship of particular goods or services and would cause consumer confusion.

The same claim can still be brought. There isn’t, in fact, an immunity to that kind of claim under the CDA or under the DMCA. The claims are very rare; it’s got to be a pretty extreme case. But something to at least be aware of. And, again, those of you who have access to the hotline, if you see something like that, you might want to just give us a call and bounce it off us. In real life, you usually get the opportunity to respond once somebody complains. So a lot of this ends up being risk judgments. And the time to take it seriously is the minute you get that email or other notification that says you have unlawfully used our trademark or you have unlawfully posted our copyrighted material. You gotta be paying attention then and be ready to act quickly.

Now I want to sort of step back and get a little bit of a framework for what we’re going to be talking about for the rest of the day. Because I think there are at least four questions that we need to be thinking about as you figure out what’s best for your newspaper and for your website as to how to deal with the kind of issues that David has talked about this morning. He’s addressed — and the law to a large extent addresses — what degree of control the newspaper may exercise over reader comments.

The harder question, I think, is the one that really is in your hands and not the lawyer’s hands. That’s what degree of control ought you to exercise over your reader comments. What is your role? And I think it’s a different role than we had a decade ago as journalists.

And, in a way, you know, you don’t have the lawyers to fall back on in the same way. In the print world, you’ve always had the opportunity to go to the lawyer, and the lawyer will tell you if it’s defamatory, you’ve gotta get rid of it, and that sort of thing. Well now, as David said, you can still keep that defamatory material up — you can still keep it up if you know it’s false! You can still keep it up if you know it’s false and somebody’s complained about it! Do you want to? That’s the harder question. And what does that mean for the nature of your site, for the relationship with your readers and with your community?

And then we are going to talk about some possible techniques to deal with some of the problems that arise. And obviously you need to think about what’s best for your publication. Is it a legal question or an editorial question? I think the hardest questions in this area are the editorial questions.

You know, these are some of the things that David has talked about. That the CDA may — well, problems that you may find on your site that you need to figure out how to deal with. How do you deal with it when somebody’s put defamatory material up there? Or personal attacks? Or obscenities? What about anonymous speech? Is that something you are going to allow or not? What if you get a subpoena for the user’s identity? What if you got sock puppets on your site? And I know you don’t want jargon, but I love that phrase so much I’m just going to say it every half hour or so. But we’ll talk more in a few minutes about what that is.

What about just plain old false facts? They may not be defamatory, but when people are saying that something happened at the corner of Main and Elm, and you know damned well it didn’t happen at the corner of Main and Elm, do you really want that staying on your site and misleading people? Do you wanna distance yourselves from it in some way, so people don’t think the newspaper’s making that mistake? How do you deal with that, and how do we sort of acculturate our readers to be thinking about that? Bullying, taunting, intimidation — I’m sure many of you have heard the tales of people being taunted over the internet. It can happen through your website too, and in fact it frequently does. What about when the reader starts criticizing your paper? This is where it gets hard to adhere to First Amendment values, right? Or your reporter wants the chance to respond to these incredibly unfair comments. I don’t know what the right answer is, but it’s something you need to think about, and I hope we’ll have some dialogue about it today.

What about breaking news? I have heard some editors say you should never be breaking news in user comments — if it’s gonna break news, you ought to turn it into a news story. I’ve heard other people say quite the opposite — that’s exactly one of the great benefits of user comments, and let’s use it and then build on it. But don’t take that away from the reader who has provided it for you.

It’s been helpful, as I’ve thought about this over the last year or so, to categorize the kinds of techniques that you might engage in if you’re so inclined. One is that you might want to directly exercise some degree of control over the web comments. Another is, without directly exercising that control, to do things that will encourage or persuade your readers to talk better, to be fairer, to have more civilized discourse.

Or maybe you just want to have something on the site that helps make it clear to people who look at the site: “That’s not our paper, that’s the user comment stuff. And don’t get confused between the two.” Or maybe some combination of the above.

And let me just play these out a little bit more. The first option: direct control. One is to moderate the posts — you review them before they go up. Another might be to have user flagging of the posts. Maybe user flagging that will lead to automatic takedown if it reaches a certain threshold. Or maybe user flagging that won’t lead to automatic takedown, but will lead to requiring that an editor now review that post to decide whether it ought to be allowed up there.
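The flagging options above can be sketched in a few lines. The thresholds and names here are assumptions for illustration, not anything Rob prescribes: a low flag count routes a comment into an editor’s review queue, a high one pulls it down automatically.

```python
# Thresholds are invented for illustration.
AUTO_TAKEDOWN = 10   # flags that trigger automatic removal
NEEDS_REVIEW = 3     # flags that put the comment in front of an editor

def handle_flag(comment: dict, review_queue: list) -> None:
    """Record one reader flag and apply the two-threshold policy."""
    comment["flags"] += 1
    if comment["flags"] >= AUTO_TAKEDOWN:
        comment["visible"] = False                 # automatic takedown
    elif comment["flags"] >= NEEDS_REVIEW and comment not in review_queue:
        review_queue.append(comment)               # an editor decides

queue: list = []
c = {"id": 1, "flags": 0, "visible": True}
for _ in range(3):
    handle_flag(c, queue)
print(c["visible"], len(queue))  # -> True 1  (still up, but queued for review)
```

The design choice is the same one Rob raises: whether crossing a threshold removes the post outright or merely escalates it to a human.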

Maybe there are certain things where you just don’t want to permit posting. One of the first ways I got involved in this was through The New London Day — is anybody here from The New London Day? They don’t need to be, because frankly they have been through this. They put on an amazing program, probably close to a year ago now, dealing with this topic, and a lot of the articles that came out of it are in your materials. They had an ethicist there, a superintendent of schools, some bloggers, their editorial editor, and everybody had different perspectives. They had a reporter who didn’t like the readers talking at all. One of the things that came up was that one of the papers — I’m not sure if it was them or another one — was no longer allowing user comments on their wedding articles, because they were finding that people were saying, you know, “Wow, she looks like a dog,” or “How did a beautiful woman like her end up with a geek like him?,” and that sort of thing. And they thought that wasn’t something they particularly wanted to encourage.

Some newspapers have engaged in sort of moratoriums on posting. When it gets too hot, just close that post, or even pull down the whole site. And there are some articles in the materials about newspapers that have done just that. One is the Martha’s Vineyard Times.

Another idea that’s come up is the idea of community control. I’m not sure how I feel about this one, but it’s the idea of actually having people outside of your newspaper be your moderators, be the ones to judge what’s fit and what’s not fit to be on the site. And I thought that’s an interesting idea — if we’re talking about reader and user participation, that’s really taking it to the max.

Okay, so that’s one way: directly controlling what is up there. Another is persuasion: trying to somehow encourage people to do better, to write cleaner. One is to have user-friendly rules of the road, and you’ll see we have a lot of policies in the back of your materials, different samples. There tend to be these long small-print things called privacy rules and terms and conditions, and you all have them at the bottom of your web site. Nobody ever reads them and the lawyers write them, and if you actually go to read them you’ll find they’ve probably got a whole bunch of typos and sentences that don’t make sense, because they were written by a lawyer and nobody reads them. You’ve got to have those to some degree, because they provide you a degree of protection, and Peter will talk a little bit more about that in a few minutes.

But I think it is also really helpful to have something that I can understand and read quickly. One of you out there somewhere has something like: “You’re about to wade into the pool. If you’re going to jump into the pool, follow these rules.” Written in very, very simple language: “Be respectful. Don’t say bad things about people.” And I think it’s really helpful to have that kind of user-friendly language, in addition to the boilerplate legal stuff.

Some newspapers — and we will hear about this — turn to registration as a way to force a little bit more deliberation into the process of posting. Some bar anonymous comments, and I hope we’ll be able to have some debate on whether that’s a good idea or a bad idea. One idea that’s used by some is to have users rate other users’ postings, the way you’ll see on Amazon or some consumer sites. You know, “Was this post helpful to you?” or something like that, and you can give them five stars, or one star if it was awful. With the idea that deep within each of us, we all want to get five stars, and maybe if you impose that sort of procedure, you’ll be encouraging people to go for the gold and write more civilly.
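The ratings idea is simple to sketch: readers award one to five stars, and the site surfaces the best-rated posts first. The field names and the plain averaging rule here are assumptions for illustration, not a description of any particular site’s system.

```python
def average_stars(post: dict) -> float:
    """Mean star rating; unrated posts score zero."""
    return sum(post["ratings"]) / len(post["ratings"]) if post["ratings"] else 0.0

posts = [
    {"id": "a", "ratings": [5, 4, 5]},   # well-liked comment
    {"id": "b", "ratings": [2, 1]},      # poorly rated comment
    {"id": "c", "ratings": []},          # not yet rated
]

# Surface the best-rated posts first.
best_first = sorted(posts, key=average_stars, reverse=True)
print([p["id"] for p in best_first])  # -> ['a', 'b', 'c']
```

A real system would likely weight by rating count so one five-star vote doesn’t outrank fifty four-star votes, but the incentive Rob describes is the same either way.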

Another option is that some newspapers take the week’s best posts and put them into the print newspaper, and that’s again another way to encourage people to reach for the stars and try to get quality discourse. Be aware, though, when you take that same post and you put it into the paper, you’ve at least created an argument — an argument that you are now going to be liable for it the way you would be liable for a letter to the editor. We can argue the other side for you, but when lawyers tell you there is going to be an argument or “it’s an interesting case,” well, we all know what that means: Hold onto your wallet.

Then the third technique — we’ve got the control, we’ve got the persuasion — is disassociation: making it appear to readers that this is something different from our news columns and even our editorial columns. One would be to physically separate the posts from the articles to which they respond, and some newspapers do that. This brings up the question of whether you allow reporters to respond to criticism of them. Do you correct false statements that people have made? Should you permit anonymity? You can always suspend the privileges of repeat offenders. In fact, you ought to do that. In fact, under the Digital Millennium Copyright Act, the fact that you reserve the right to suspend a repeat offender is part of what keeps you under that statute’s safe harbor. That’s all I’m going to say, and that’s going to end the legal-heavy part of this program.

Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email or Twitter DM (@jbenton).