Oct. 2, 2015, 12:34 p.m.
Audience & Social

News organizations must treat reader comments with the same level of consideration they give their own stories, New York Times community editor Bassey Etim said today, speaking on a panel at this year's Computation + Journalism Symposium at Columbia University.

“We have to treat comments as content,” Etim said. “We can’t cede the social world to large companies.”

Etim, speaking on a panel about comment moderation and community building, discussed the Times’ attitude toward commenters and shared the results of a Times survey that asked commenters why they comment:

Only 5 percent of Times commenters said they comment on stories to actually communicate with other readers, and Etim said that most readers prefer the comments Times editors choose to highlight. News organizations, he said, need to make building community around news more of a priority. (Though, of course, that's easy for an editor from the Times to say, since, unlike most news organizations, the paper has a full-time staff dedicated to moderating comments.)

The symposium, sponsored by Columbia's Brown Institute for Media Innovation, continues through Saturday. If you're not in New York, you can follow along on Twitter using #CJ2015 or watch the livestream, which we've embedded below.
