Nieman Foundation at Harvard
Jan. 16, 2019, 11:37 a.m.
Audience & Social

The New York Times politics editor is building trust by tweeting context around political stories

“I wanted to start engaging with readers about our intentions behind our stories.”

You can guess the kinds of complaints The New York Times gets about its political coverage. It’s too biased, too liberal. Too much coverage of the horserace, not enough coverage of the issues. Too much “But her emails!” in 2016 and not enough Trump/Russia. Too much “Racists: They’re just like us.”

With a new personal Twitter project, Patrick Healy — the Times’ politics editor and previously a reporter covering the 2004, 2008, and 2016 campaigns — is trying to address some of those concerns by giving people a view into the paper’s decision-making process.

Healy “wanted to start engaging with readers about our intentions behind our stories,” he told me, in the hopes that more transparency — about why stories are chosen, why they’re framed a certain way, and what kinds of conversations go on between reporters and editors behind the scenes — can shore up trust in the Times’ motives.

Healy’s first Twitter thread, written from a train, was on Saturday, January 5, for a story by Lisa Lerer and Susan Chira on whether a woman will win the 2020 U.S. presidential race. In the thread, Healy talked about how he and the reporters had worked to avoid sexist or gendered thinking in the piece, and how the Times’ 2020 coverage of female candidates won’t dwell on Hillary Clinton’s 2016 loss.

Healy is doing a thread for any major political story he thinks would benefit from context and clarity about intent, “and also when I think I can provide some behind-the-scenes insight or illumination that readers might like.” He ends each one with his direct email address, asking readers to write to him with comments and feedback. The thread that’s gotten the most engagement so far was the one about racist Iowa congressman Steve King’s influence over Trump; in it, Healy explained why the Times was giving a racist a platform.

(That Times story led directly to King losing his committee seats, a House resolution condemning white supremacy, and multiple Iowa newspapers calling for his resignation.)

In another thread, Healy asked readers to suggest policy issues that they wanted to see covered.

This is a personal project for Healy, not part of a larger initiative at the Times, though it fits in with the paper’s goals to engage with readers in authentic ways. I can’t see inside Healy’s email inbox, but a glance through the replies to his tweets so far turns up quite a bit of grandstanding and random remarks like “Ffs” and “nailed it!”, sprinkled with constructive comments of the sort that Healy is looking for. But many of the tweets in his threads are retweeted or favorited much more often than they’re replied to, suggesting that some people are getting a benefit just from reading.

“I’ve never been a big Twitter user — while I love reading others’ tweets and learning about stories there, I’m not super-clever writing tweets and I don’t troll or argue online,” he said. “But I thought Twitter was a useful first step for explaining our story intentions.”

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).