May 13, 2016, 9 a.m. | Reporting & Production
Posted by Ricardo Bilton | Link: www.gitbook.com

For newsrooms, the first rule about SecureDrop is you don’t talk about SecureDrop — or not too much, anyway.

That’s clear from a new report from Columbia’s Tow Center for Digital Journalism, which looked at how sites such as The Intercept, Gawker, and ProPublica are making use of SecureDrop, the encrypted anonymous communication software maintained by the Freedom of the Press Foundation (FPF). As of this writing, 14 news organizations, three individual journalists, and eight nonprofit groups are using SecureDrop, according to FPF, and another 80 organizations are waiting for FPF to help them get it installed.

Here are a few takeaways from the report, which looks at how SecureDrop works, how it’s become a source for stories, and why publishers don’t want to talk too much about how they’re using it.

News organizations say SecureDrop is useful, but the definition of usefulness varies. While most news organizations adopted SecureDrop as a way to get new stories, many say it is valuable even if no stories come from it. Gawker editor John Cook, for example, said that, at the very least, running SecureDrop communicates Gawker’s commitment to protecting sources.

News organizations are reluctant to discuss exactly how they use SecureDrop. In fact, of the nine organizations profiled in the report, The Intercept is the only one that discloses when SecureDrop was used to report a story, as it did with its piece detailing how prisons were recording inmate phone calls. Charles Berret, the report’s researcher, said he knew going in that this would be a challenge. “Although [the lack of detailed information] is a limitation of the study, it’s great that the security of the process itself is taken so seriously by the practitioners,” he said.

Here’s Gawker’s John Cook explaining why news organizations are so skittish about sharing too much detail on how they use SecureDrop:

It’s kind of a Catch-22 in that one of the things I’ve always wanted to do is say, “Hey, we got this through SecureDrop.” But you don’t want to do that because you don’t want to do anything that would lead someone to go look if someone’s work laptop has Tor on it, or whatever might lead to suspicion.

Most news organizations designate just a few people to monitor their SecureDrop. Those people, usually editors, then route incoming tips to the right reporters. The arrangement reflects how cumbersome SecureDrop is to access: it can only be used from a dedicated computer in the newsroom, and the average news organization doesn’t have many people capable of using the system.
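Part of what makes access cumbersome is that SecureDrop’s journalist interface, like its submission page, is reachable only as a Tor onion service, so checking it requires a machine with a running Tor client. As a rough illustration (not drawn from the report), here is a minimal sketch of that Tor-gated access pattern in Python; the .onion address is hypothetical, and the snippet assumes Tor’s default local SOCKS port (9050) and the requests library installed with SOCKS support (pip install requests[socks]).

```python
# Minimal sketch: reaching a (hypothetical) SecureDrop onion address through
# a local Tor SOCKS proxy. Assumes a Tor client is running on the default
# port 9050 and that requests was installed with SOCKS support.
import requests

ONION_URL = "http://exampleonionaddress.onion/"  # hypothetical address

# The "socks5h" scheme tells requests to resolve the .onion name through Tor
# itself, which is required: onion addresses don't resolve in normal DNS.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

response = requests.get(ONION_URL, proxies=proxies, timeout=60)
print(response.status_code)
```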

Not even encrypted channels are immune to trolls and spam. Running a SecureDrop, like any other communications channel, means having to sift through plenty of spam, unhelpful news tips and conspiracy theories from well-meaning readers. Some submissions aren’t news at all. Over half of the early submissions to The New Yorker’s SecureDrop deployment weren’t sensitive leaks but rather fiction and poetry from people looking to get published in the magazine.

SecureDrop is good for a first point of contact, but reporters often move the conversation elsewhere. After receiving an initial tip, reporters frequently switch to other channels, such as PGP-encrypted email or secure chat. “SecureDrop is really just the tip of the iceberg for the story and reporting process,” Berret said.
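For a sense of what that follow-up can look like in practice, here is a minimal sketch (not taken from the report) of encrypting a reply with PGP using the python-gnupg wrapper around a local GnuPG install (pip install python-gnupg). The key file name and recipient address are hypothetical.

```python
# Minimal sketch: PGP-encrypting a follow-up message with python-gnupg.
# Assumes GnuPG is installed locally; key file and recipient are hypothetical.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory

# Import the source's public key (obtained out of band, e.g. via SecureDrop).
with open("source_public_key.asc") as key_file:
    gpg.import_keys(key_file.read())

# Encrypt the reply so only the holder of the matching private key can read it.
encrypted = gpg.encrypt(
    "Thanks for the documents. Can you confirm the dates on page 3?",
    recipients=["source@example.org"],  # hypothetical key ID / address
    always_trust=True,                  # skip the web-of-trust check for this sketch
)

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, ready to paste into an email
else:
    print("Encryption failed:", encrypted.status)
```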
