Nieman Foundation at Harvard
Aug. 11, 2014, 3:50 p.m.
Reporting & Production
LINK: jezebel.com   |   Posted by: Justin Ellis   |   August 11, 2014

Continuing its tradition of airing its internal discussions outside the office, the staff at Jezebel today called out the higher-ups at parent company Gawker Media over some pretty disgusting trolling at the site.

Dealing with commenters of all stripes is an issue at many media companies, and foiling trolls is a constant problem. Online harassment has become sadly commonplace for female writers, but at Jezebel things have gotten pretty egregious:

For months, an individual or individuals has been using anonymous, untraceable burner accounts to post gifs of violent pornography in the discussion section of stories on Jezebel. The images arrive in a barrage, and the only way to get rid of them from the website is if a staffer individually dismisses the comments and manually bans the commenter. But because IP addresses aren’t recorded on burner accounts, literally nothing is stopping this individual or individuals from immediately signing up for another, and posting another wave of violent images (and then bragging about it on 4chan in conversations staffers here have followed, which we’re not linking to here because fuck that garbage). This weekend, the user or users have escalated to gory images of bloody injuries emblazoned with the Jezebel logo. It’s like playing whack-a-mole with a sociopathic Hydra.

Banning and blocking is typically the last line of defense for staffers who have to deal with comments. This is where Kinja, Gawker’s publishing and discussion platform, has a strength that is also a weakness: The system is built for — and in some cases encourages — anonymity. “Burner” accounts were envisioned as the next evolution of the tip line, a way of surfacing information from readers who don’t want to leave a trace of identity.

That feature seems to be what is causing the ongoing GIF abuse on Jezebel:

During the last staff meeting, when the subject was broached, we were told that there were no plans to enable the blocking of IP addresses, no plans to record IP addresses of burner accounts. Moderation tools are supposedly in development, but change is not coming fast enough.

To say that Kinja is important to the future of Gawker would be an understatement. The publishing/discussion/tipster platform, or something like it, has been a white whale for Gawker founder Nick Denton.

Denton has said repeatedly that Kinja is a vehicle for putting readers (and their writing) on equal footing with writers. The most recent example of that is Disputations, which opens a window into the day-to-day conversations of Gawker staff.

As recently as June, Gawker staff were still bringing up issues with Kinja, and Denton reportedly said he underestimated the time and resources it would take to build out the platform. Not surprisingly, this caught the attention of Groupthink, a Kinja blog spun off by Jezebel readers, who have been trying to find workarounds for the troll campaign.

According to Business Insider, Gawker editorial director Joel Johnson acknowledged the problem, but said a solution isn’t available just yet. Johnson told the site he’s not sure the anonymity Kinja provides is the issue:

We want to make sure that all readers can submit tips anonymously; security and anonymity are important to our vision of Kinja. I don’t know that this boils down to that exactly, so much as it boils down to my as-yet inability to figure out how to filter image-based posts without at least one human seeing them. (Other sites or apps use hired proxies to sort through those submissions, which also seems suboptimal.)

Nevertheless, I agree with the Jezebel staff that I haven’t done enough to figure out a solution to this problem (a problem I don’t have to deal with on a daily basis, while they do) and I’m proud to work with people who aren’t afraid to call out my mistakes in public.
