July 20, 2023, 1:06 p.m.

Google wants you to let its AI bot help you write news articles

Dystopian, yes — but tools don’t have to be perfect to be useful to journalists.

Every other semester, I teach a class called Media Criticism at Boston University’s College of Communication. (I took it over from the late, great David Carr after his untimely death in 2015.) One of my in-class assignments goes something like this:

You are a rookie reporter at the Daily Gazette, working the evening shift. This afternoon, there was a shooting in a rundown neighborhood south of downtown, leaving a teenager dead. You have two documents: the official police report of the incident and some notes gathered by your colleague Dave, a dayside reporter who went to the scene, did some interviews, and wrote down some observations. Using only these two documents, your job is to write up the shooting for tomorrow’s paper — aim for 300 words.

I break the class into small groups, and each is tasked with writing their own version of the story. The two documents are loaded with potential landmines meant to force journalistic decisions on deadline. The suspect’s mugshot clearly shows someone who has been physically beaten, but the police report makes no mention of any “resistance” to his arrest. Do you mention that? Does that make you more or less likely to want to run the mugshot with the story? The police stopped the suspect because he fit an eyewitness description: a black male, approximately 16-24 years of age, between 5’8″ and 6’0″ tall, slender, wearing blue jeans and a dark-colored jacket. Not the narrowest of descriptions! An eyewitness told Dayside Dave that the suspect’s older brother is already in prison on an assault charge, and gave him a good quote about how this will break their mom’s heart. Do you mention that or use the quote?

The local city councilman uses the occasion to push a more-cops proposal he has before council — do you let him do that in your story? A police officer who runs the department’s gang task force says “These are gangbangers, absolutely, no doubt,” and that they were wearing the colors of two gangs. Is that enough for you to describe this as “what police describe as a gang-related shooting”? At some point, students get a surprise third “document”: how the three local TV stations covered the shooting on the 6 p.m. news. Does their if-it-bleeds-it-leads coverage influence your story? Is your lede a spare statement of facts or a big windup about how it’s “only the latest tragedy to hit this struggling neighborhood”? Oh, and the suspect is 17 years old, legally a minor — do you name him?

Some of the calls are pretty clear-cut; some could spark good arguments for either side. But that’s the point — to have those arguments, on a tight deadline. When we go over the resulting stories, it’s instructive for each group to see the landmine they’d overlooked, or the decision that was easy for them but difficult for others.

Last December, for the first time, I also put those documents to another reporter — ChatGPT, which had been released to the public a few days before. It made its way through all those journalistic debates in just a few seconds. The story that resulted was purple-prosed in places (like calling the shooting “tragic” three times) and a little racist (like gratuitously mentioning “several black men” who were allegedly standing nearby). It felt obliged to use something from each person the dayside reporter had interviewed, even when they were repetitive or ethically questionable; it called both victim and suspect gang members without hesitation. What ChatGPT produced would have been a pretty bad crime story, but to be honest, I’ve read worse ones written by humans.

Late last night, The New York Times’ Ben Mullin and Nico Grant published a story with this alarming headline: “Google Tests A.I. Tool That Is Able to Write News Articles.”

Google is testing a product that uses artificial intelligence technology to produce news stories, pitching it to news organizations including The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp, according to three people familiar with the matter.

The tool, known internally by the working title Genesis, can take in information — details of current events, for example — and generate news content, the people said, speaking on the condition of anonymity to discuss the product.

One of the three people familiar with the product said that Google believed it could serve as a kind of personal assistant for journalists, automating some tasks to free up time for others, and that the company saw it as responsible technology that could help steer the publishing industry away from the pitfalls of generative A.I.

Some executives who saw Google’s pitch described it as unsettling, asking not to be identified discussing a confidential matter. Two people said it seemed to take for granted the effort that went into producing accurate and artful news stories.

Reaction around the news business was, shall we say, not positive.

I have not seen “Genesis” (Peter Gabriel era or Phil Collins era?) and know nothing about it beyond this Times story. And if the pitch here is “press a button and we’ll auto-publish an AI-generated story on your website,” sure, that’d be crazy in the vast majority of cases. But I don’t think that’s what the Times story is describing, and I don’t think we should be so quick to write off the idea of AI assistance in journalism.

Remember, Genesis “can take in information — details of current events, for example — and generate news content.” The “information” there is the reporting piece. In my Daily Gazette example, it’s the police report and the eyewitness notes the dayside reporter gathered, plus perhaps some other information taken from the Gazette’s digital archives. The role of Genesis seems to be distilling raw reporting into a news narrative.

Will it be able to do that well enough to publish the results? I doubt it, in most cases. Remember, we already have thousands of perfectly good stories being autopublished by AI every day — but only in narrow areas where the important source data is highly structured. If you have a baseball game’s box score, an AI can write a perfectly good game story, because the box score contains every important event in the gameplay — every hit, every pitching change, every strikeout, every ninth-inning comeback. Will AI baseball stories sometimes miss something important that isn’t in the box score — like the bench-clearing shoving match that came after one too many high-and-tight fastballs in the 4th? Sure. But it won’t get anything wrong if fed the correct data. The same is true of corporate earnings reports, which are also based on highly structured documents. But the number of news stories reliant solely on structured data is pretty small, and the vast gray area that surrounds them still requires human hands.
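To make the structured-data point concrete, here’s a toy sketch of my own — not Genesis, not any real wire-service system — of why box-score stories are so safe to automate: every fact in the output is copied straight from the data, so the generator can leave out color but can’t misstate what the record says. The teams, score, and key play below are invented for illustration.

```python
# Toy illustration: template-driven story generation from structured data.
# Every fact in the output sentence comes directly from the box score, so
# the result can omit color (the bench-clearing shoving match) but cannot
# misstate what the data records. All values here are made up.

box_score = {
    "home": {"team": "Springfield Sox", "runs": 5},
    "away": {"team": "Shelbyville Nine", "runs": 3},
    "winning_pitcher": "J. Ramirez",
    "key_play": "a two-run homer in the eighth",
}

def game_story(box: dict) -> str:
    home, away = box["home"], box["away"]
    winner, loser = (home, away) if home["runs"] > away["runs"] else (away, home)
    return (
        f"The {winner['team']} beat the {loser['team']} "
        f"{winner['runs']}-{loser['runs']}, powered by {box['key_play']}, "
        f"with {box['winning_pitcher']} picking up the win."
    )

print(game_story(box_score))
```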

But as a personal tool to get quickly to something like Anne Lamott’s “shitty first drafts”? Sure, that could be useful. I can think of lots of times I’d be interested in seeing an instant first draft of something. Let’s imagine you’re writing up a new study that’s just been released. You’ve done phone interviews with the study’s authors and a couple of outside experts. Thanks to AI tools, you’ve got complete transcripts of those interviews. I can absolutely understand the appeal of dumping the study and those transcripts into an AI and having it bang out a draft.

Would I publish that draft? No way. But could it remind me of a great quote that I’d forgotten about, or bring up an angle I hadn’t planned on emphasizing? Sure. If I asked it to look through my notes and write 10 potential ledes, could one of them spark a great idea in my head? Sure. Could it maybe sometimes be good enough to form the basis of the story I want to write? Maybe once in a while, though I doubt often.
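For what it’s worth, that workflow is already easy to wire up yourself. Here’s a hedged sketch — assuming the OpenAI Python SDK (v1 or later), an API key in the environment, and placeholder file names and model choice — whose whole point is that the model only sees your own reporting, and its output is raw material for a human, not copy to publish.

```python
# A sketch of the "shitty first draft" workflow: feed the study text and
# interview transcripts to a chat model and ask for ten candidate ledes.
# Assumes the OpenAI Python SDK (>=1.0) and OPENAI_API_KEY in the
# environment; the file names and model name are placeholders.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

study = Path("study.txt").read_text()
transcripts = "\n\n".join(
    Path(p).read_text() for p in ["author_interview.txt", "expert_interview.txt"]
)

prompt = (
    "You are a newsroom assistant. Using ONLY the source material below, "
    "write 10 possible one-sentence ledes for a news story about this study. "
    "Do not invent facts that are not in the sources.\n\n"
    f"=== STUDY ===\n{study}\n\n=== INTERVIEW TRANSCRIPTS ===\n{transcripts}"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any capable chat model
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # a starting point, not a story
```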

There are obviously stories where using such a tool would make little sense. But that’s no reason to reject a potentially helpful tool out of hand. Again, I don’t know how useful Genesis (or whatever comes of it) will be in the real world. But there are a million electronic tools journalists use all the time that make our work better while leaving final publication in human hands. When word processors arrived in the 1980s, some authors claimed they would be the ruination of writing — encouraging endless fiddling with text, incentivizing breakneck productivity, and dehumanizing the transmission of language. Spell-checkers took part of a copy editor’s job and shoved it in a CPU; Grammarly and similar tools do the same today. Digitized archives were going to degrade the researcher’s unique skill at finding the critical document on a dusty shelf. Or just imagine what it would be like to research a complicated topic today without a search engine to organize the world’s information. None of these tools has been perfect, and there were good things about each of their pre-digital analogs. But it would have done us no good to flat-out reject them.

And let’s be frank: News production is now and always will be a tiny share of overall text production, and generative AI tools are coming for that market. There are a gazillion AI startups aimed at the memo-writing, social-post-publishing, marketing-copy-producing elements of 21st-century information work. The productivity-software big dogs like Google and Microsoft are betting big on AI tools for non-news writing. Maybe it’s the future, maybe it’s not — but either way, you can’t expect the news industry to somehow opt out of what’s happening to every other paragraph-producing sector. As our own Sophie Culpepper showed recently, AI is already performing news-like tasks — and if journalists decide not to use these new tools, other people most certainly will.

There are two major sets of reasons for journalists to be skeptical of AI. The first is about quality. (AI makes mistakes, and it can offer a false sense of accuracy.) The second is about our own labor. (Every new thing an AI can do is something that human journalists soon won’t be needed for.) Both of those are real concerns — but it’s important, I think, to avoid conflating the two. That a tool doesn’t produce perfect news stories doesn’t mean it can’t be useful to a reporter.

Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email (joshua_benton@harvard.edu) or Twitter DM (@jbenton).
POSTED     July 20, 2023, 1:06 p.m.