Prediction
The (semi-) automated explainer gets good
Name
Walter Frick
Excerpt
“With the help of AI, explanatory journalists and the newsrooms they work in will compete on judgment, perspective, process, and personalization.”
Prediction ID
57616c746572-24

Though several of this year’s experiments in AI-generated stories ended poorly, next year, several major newsrooms will publish automated or semi-automated explainers — and they’ll be good.

Explanatory journalism involves, among other things, synthesis of existing material, and that’s a skill that modern chatbots like ChatGPT and Claude excel at. They can summarize reports, answer commonly held questions (with citations), and describe key areas of debate and disagreement. That makes it possible to assemble the skeleton of an explainer in seconds. The use of AI in explanatory journalism has been inevitable for a while now and was the focus of my 2016 research as a Knight-Nieman Fellow. Since then, major advances in large language models have made it easier than ever for newsrooms to do this kind of work.

Does that mean explanatory journalists are out of a job? No — at least not the good ones.

Explainers were once mocked in a long-lost tweet as “Everything I know about X that I learned by Googling for an hour.” Funny, and perhaps true of the lowest common denominator of the genre. But in reality it takes tremendous skill and judgment to write a balanced, comprehensive explainer. And that judgment will become even more valuable as seasoned explanatory journalists learn to oversee, in effect, a team of AI research assistants.

With the help of AI, explanatory journalists and the newsrooms they work in will compete on judgment, perspective, process, and personalization. The same essential-if-hard-to-define judgment needed to write a good explainer will be required to assemble and edit one with the help of AI.

As for perspective, if search engines like Google and Bing do go down the road of producing AI-generated explanations, they’ll be under tremendous pressure to offend no one and play everything down the middle. That leaves room for outlets with clear perspectives — political or otherwise — to thrive. Think about the questions and the sources that The Economist would theoretically draw on to produce an automated explainer, compared to those that The Guardian would. A publication’s distinct perspective will continue to shape the journalism it creates, even when it is partially generated by AI.

Great explanatory outlets also have a chance to turn their processes into a competitive advantage. They will design proprietary chains of prompts that elicit different information than their competitors’ do. For example, a publisher’s internal guide on how to cover academic research could become the basis for novel prompts. Ditto for fact-checking. Others will craft new processes inspired by practices like scenario planning or red teaming.
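To make the idea concrete, here is a minimal sketch of what such a prompt chain might look like, where each step’s output feeds the next step’s template. Everything here is hypothetical: the step wording, the chain structure, and `call_model`, which stands in for whatever LLM API a newsroom actually uses.

```python
# Hypothetical newsroom "prompt chain" for drafting an explainer.
# Each template is filled with the previous step's output.

EXPLAINER_CHAIN = [
    "Summarize the key findings of this report for a general reader:\n{input}",
    "List the main open questions and points of expert disagreement in:\n{input}",
    "Rewrite the material below to match our style guide on covering academic research:\n{input}",
]

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just echoes the prompt's first line."""
    return f"[model output for: {prompt.splitlines()[0]}]"

def run_chain(source_text: str, chain=EXPLAINER_CHAIN) -> str:
    """Run the chain: each step's output becomes the next step's input."""
    text = source_text
    for template in chain:
        text = call_model(template.format(input=text))
    return text

draft = run_chain("Report: interest rates and housing, 2023.")
```

The point of the sketch is that the chain itself — the ordering, the house-style instructions baked into each step — is editorial judgment encoded as process, and thus something an outlet could treat as proprietary.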

The safest place to start is using AI to assemble background material that a human then organizes and edits into a single piece of content. Eventually, though, personalization will be irresistibly useful. I want an explainer that answers my questions and speaks to my level of knowledge. Maybe someday an out-of-the-box AI sold by a tech company will be able to do that. But for now, I want that experience created and overseen by a publisher I trust.

Walter Frick is chief editor at the Atlantic Council’s GeoEconomics Center and writer of Nonrival.
