Sept. 21, 2017, 9:59 a.m.
Reporting & Production

What are the most useful ways to bring artificial intelligence into newsrooms? How can journalists use it in their reporting process? Is it going to replace newsroom jobs?

A report out this week from the Tow Center for Digital Journalism looks at how AI can be adapted to journalism. It summarizes a previously off-the-record meeting held back in June by the Tow Center and the Brown Institute for Media Innovation. (Full disclosure: Nieman Lab director Joshua Benton was part of the meeting.)

Among the report’s findings:

AI “helps reporters find and tell stories that were previously out of reach or impractical.” Three areas where AI can be particularly helpful in the newsroom: “Finding needles in haystacks” (discovering things in data that humans can’t, which humans can then fact-check); identifying trends or outliers; and serving as the subject of a story itself: “Because they are built by humans, algorithms harbor human bias — and by examining them, we can discover previously unseen bias.”
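To make the “needles in haystacks” idea concrete, here is a minimal sketch of the kind of outlier-spotting a reporter might run over a dataset. The contribution figures and the threshold are illustrative assumptions, not anything from the report:

```python
# A minimal sketch of "finding needles in haystacks": flag values that sit
# far from the rest of a dataset so a reporter can fact-check them by hand.
# The contribution figures and the 3x-MAD threshold are hypothetical.
from statistics import median

contributions = [250, 300, 275, 260, 310, 240, 9800, 290, 265, 305]

med = median(contributions)
# Median absolute deviation: a robust spread measure that a single huge
# outlier cannot inflate the way a standard deviation can.
mad = median(abs(x - med) for x in contributions)

needles = [x for x in contributions if abs(x - med) > 3 * mad]
print(needles)  # [9800] -- the one figure worth a closer human look
```

The robust spread measure matters here: a single extreme value can inflate an ordinary standard deviation enough to hide itself, while a median-based check still surfaces it for the human fact-check the report calls for.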

AI can deliver much more personalized news — for good and bad. AI could be used to monitor readers’ likes and dislikes, ultimately shaping stories to people’s individual interests. But, as one participant cautioned:

The first stage of personalization is recommending articles; the long-term impact is filter bubbles. The next step is using NLP [Natural Language Processing] to shape an article to exactly the way you want to read it. Tone, political stance, and many other things. At that point, journalism becomes marketing. We need to be very aware that too much personalization crosses the line into a different activity.

Another concern is that, if articles become too personalized, the public record is at risk: “When everyone sees a different version of a story, there is no authoritative version to cite.”
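As a concrete illustration of that first stage of personalization, here is a minimal sketch of a content-based recommender. The article keywords and the reader profile are hypothetical hand-tagged stand-ins; a production system would derive them with NLP over full text:

```python
# A minimal sketch of recommending articles from reading history.
# Articles, keywords, and the reader profile are hypothetical.
articles = {
    "City budget shortfall deepens": {"budget", "city", "tax"},
    "New stadium financing approved": {"stadium", "budget", "city"},
    "Local team wins championship": {"stadium", "sports", "team"},
}

reader_interests = {"budget", "tax", "city"}  # from stories already read

def score(keywords):
    """Jaccard similarity between an article's keywords and the reader."""
    return len(keywords & reader_interests) / len(keywords | reader_interests)

# Rank every article by similarity to the reader's history.
for title in sorted(articles, key=lambda t: score(articles[t]), reverse=True):
    print(f"{score(articles[title]):.2f}  {title}")
```

Even this toy ranker shows the drift the participant warns about: the reader’s past interests crowd out everything unfamiliar, which is the seed of a filter bubble.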

AI brings up new ethical considerations. Participants agreed that news organizations need to disclose when AI has been used in creating a story, but “that description must be translated into non-technical terms, and told in a concise manner that lets readers understand how AI was used and how choices were made.” There’s a need for best practices around disclosures.

Also, most AI tools aren’t built specifically with newsrooms (or their editorial values) in mind. One engineer said: “A lot of these questions currently seem impenetrable to us engineers because we don’t understand the editorial values at a deep level, so we can’t model them. Engineers don’t necessarily think of the systems they are building as embodying editorial values, which is an interesting problem. The way a system like this is built does not reflect this underlying goal.”
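One way reporters can probe systems whose builders never explicitly modeled editorial values is to audit the outputs directly, in the spirit of the bias-examination the report describes. Here is a minimal sketch of a disparate-impact check; the records and groups are hypothetical stand-ins for data collected from a system under investigation:

```python
# A minimal sketch of auditing a black-box decision system for group bias.
# The records are hypothetical stand-ins for outputs collected from the
# system being examined.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

# Disparate-impact ratio; the four-fifths (0.8) rule is a common heuristic
# for when a disparity deserves scrutiny.
ratio = approval_rate("B") / approval_rate("A")
print(f"A: {approval_rate('A'):.2f}  B: {approval_rate('B'):.2f}  "
      f"ratio: {ratio:.2f}")
```

The point is not the arithmetic but the method: treating a system’s outputs as data lets reporters surface embedded values that, as the engineer notes, were never consciously modeled by the people who built it.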

The full report is available from the Tow Center.

Photo of robots by Robert Heim used under a Creative Commons license.
