LINK: towcenter.columbia.edu | Posted by: Sarah Scire | Feb. 12, 2024, 11:39 a.m.

Uncertainty in the news industry, hype around AI, and hope for better business models and new revenue streams have all helped drive news organizations to adopt AI technology, a new report from the Tow Center for Digital Journalism at Columbia University finds.

Felix Simon, a researcher and doctoral student at the Oxford Internet Institute, interviewed more than 130 journalists and news executives from 35 outlets in the United States, United Kingdom, and Germany. His findings suggest that as AI-powered search engines gain prominence and more newsrooms embrace the technology internally, “a familiar power imbalance” is emerging between news publishers and tech companies.

Newsrooms have become more open to AI not just because the technology has improved and become more publicly accessible, but also because it’s become acceptable and widely used in other industries. (“News organizations often anxiously watch their competitors, plagued by concerns that their own innovations have historically lagged behind those of their peers,” Simon noted.) Hard times in the journalism business have also pushed some news organizations to experiment with AI.

“I think one of the truths about the media industry is that it is an industry that is under a certain obvious strain for cash, for new business models, figuring out what their future is,” one Germany-based audience editor told Simon. “Basically this ‘What’s going to save us?’ question is all out there.”

If outlets are being pushed by news industry dynamics and market pressures, they’re also being pulled by promises of increased efficiency.

As one U.S.-based news executive put it: “The strategic question is: With the limited amount of time and resources, how could we make the most use of our journalistic talent?”

AI-generated news articles with incorrect information, made-up links, and baldly bad writing have grabbed headlines. But that’s far from the most common way newsrooms use AI. Nearly all of the interviewees reported using AI to help with transcription. (Many of us here at Nieman Lab use the AI-powered Otter.) Dynamic paywalls that use data points to predict how likely it is a certain user will pay for a subscription — and alter when (and if) that user sees a paywall based on those predictions — have been around for years.
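To make the dynamic-paywall mechanic concrete, here’s a minimal sketch of how such a system might score a reader and decide when the wall goes up. Everything in it (the signals, the weights, the thresholds) is a hypothetical illustration, not anything drawn from the report or a real publisher’s model:

```python
# Hypothetical sketch of a dynamic paywall: score a reader's propensity
# to subscribe, then vary how many free articles they get before the wall.
from dataclasses import dataclass

@dataclass
class ReaderSignals:
    visits_this_month: int       # frequency signal
    newsletter_subscriber: bool  # engagement signal
    referrer_is_search: bool     # assumption: search visitors convert less often

def propensity_to_pay(r: ReaderSignals) -> float:
    """Toy linear score in [0, 1]; a real system would use a trained model."""
    score = 0.05 * min(r.visits_this_month, 20)
    score += 0.25 if r.newsletter_subscriber else 0.0
    score -= 0.10 if r.referrer_is_search else 0.0
    return max(0.0, min(1.0, score))

def free_articles_before_paywall(r: ReaderSignals) -> int:
    """High-propensity readers hit the wall sooner; casual ones browse longer."""
    p = propensity_to_pay(r)
    if p > 0.6:
        return 1    # likely subscriber: ask early
    if p > 0.3:
        return 3
    return 10       # unlikely to pay: keep serving (ad-supported) pageviews

loyal = ReaderSignals(visits_this_month=15, newsletter_subscriber=True, referrer_is_search=False)
drive_by = ReaderSignals(visits_this_month=1, newsletter_subscriber=False, referrer_is_search=True)
print(free_articles_before_paywall(loyal), free_articles_before_paywall(drive_by))  # 1 10
```

A production system would train a classifier on historical conversion data; the toy linear score here just stands in for that prediction step.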

News organizations have also been using AI to automate some types of coverage and make sense of large document sets. (AI’s ability to do “fuzzy matching” — identifying similar but not necessarily identical elements — has been a boon for corruption and tax evasion investigations, one journalist said.) But the “ability to deliver short-term efficiencies may currently be outweighed by their potential to cause longer-term reputational damage,” Simon found, echoing findings in a separate study he co-authored. Given the tendency of large language models to hallucinate or otherwise misstep, several news outlets said they’ve implemented editorial handbrakes to carefully fact-check any generative AI output.
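For a sense of what that “fuzzy matching” looks like in practice, here’s a minimal sketch using only Python’s standard library. The names and the 0.7 threshold are invented for illustration; real investigative tooling is considerably more robust:

```python
# Minimal fuzzy-matching sketch: flag names in a document set that nearly
# (but not exactly) match names on a watchlist, using only the standard library.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical after lowercasing."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def fuzzy_matches(candidates, watchlist, threshold=0.7):
    """Yield (candidate, watchlist name, score) for every pair above the threshold."""
    for c in candidates:
        for w in watchlist:
            score = similarity(c, w)
            if score >= threshold:
                yield c, w, round(score, 2)

# Invented examples: misspellings and transliteration variants
document_names = ["Jon Smyth", "Maria Gonzales", "A. Petrov"]
watchlist = ["John Smith", "María González", "Alexei Petrov"]

for match in fuzzy_matches(document_names, watchlist):
    print(match)  # each near-match with its similarity score
```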

“I tried it a lot and, well, checking sometimes takes longer than writing a summary myself,” one interviewee said. An editor in the U.K. said, rather more bluntly, “I don’t want to be BuzzFeed or CNET, just putting out sort of, you know, junk.”

It’s not only AI-generated text and news articles that require manual checking and rechecking. Many in the news industry expressed concern and frustration that prominent AI companies have refused to share enough information about the data and human labor used to build their models, or hidden rules or parameters that can affect output. (Media researchers studying podcasts recently ran into problems before realizing the OpenAI model they were using was rejecting episode descriptions that even obliquely mentioned sex or crime: “It took some time to figure out what was happening and how to work around it.”)
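The study doesn’t say how you’d catch that kind of silent rejection, but as a purely hypothetical mitigation, a processing pipeline can at least flag model responses that look like refusals instead of letting them quietly drop out of the data. In this sketch, the marker phrases and the `classify` callable are both assumptions:

```python
# Hypothetical sketch: when batch-processing text through an LLM, set aside
# responses that look like content-policy refusals for manual review instead
# of letting them silently vanish from the results.
REFUSAL_MARKERS = (
    "i can't assist", "i cannot assist", "i'm sorry,",
    "unable to help with", "against my guidelines",
)

def looks_like_refusal(response: str) -> bool:
    """Crude heuristic: does the response contain a common refusal phrase?"""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def process_descriptions(descriptions, classify):
    """`classify` stands in for whatever model call you make (an assumption)."""
    results, flagged = [], []
    for text in descriptions:
        response = classify(text)
        bucket = flagged if looks_like_refusal(response) else results
        bucket.append((text, response))
    return results, flagged
```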

Investigations from outlets like The Markup and Rest of World have already surfaced some of the biases buried in AI systems. Without transparency, the “black box” of some AI tools can slow down journalistic work considerably.

One journalist compared one AI tool he uses to “an unreliable calculator,” telling Simon: “I think my main concern is: Is the tool missing something, is it a bad tool, is it misinterpreting what I want? And I think if you keep not finding stuff that you expect to, you can do, you know, more manual tests.”
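Those “manual tests” amount to a known-item recall check: feed the tool queries whose correct answers you’ve already verified by hand, and see what it misses. A minimal sketch of the idea, with an invented document set and an assumed `tool_search` interface:

```python
# Sketch of a known-item test for an opaque search or extraction tool:
# run queries whose correct answers are already known, then report which
# expected documents the tool failed to return.
def recall_check(tool_search, expected: dict[str, set[str]]):
    """`tool_search(query)` returning a set of document IDs is an assumed interface."""
    for query, should_find in expected.items():
        found = tool_search(query)
        missed = should_find - found
        if missed:
            print(f"{query!r}: missed {sorted(missed)}")
        else:
            print(f"{query!r}: all {len(should_find)} expected documents found")

# Invented ground truth: documents previously verified by hand
expected_hits = {
    "offshore transfer": {"doc_014", "doc_152"},
    "shell company": {"doc_007"},
}
# recall_check(my_tool.search, expected_hits)  # plug in the real tool here
```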

Many interviewees expressed concern that AI provided by external companies could “undercut their autonomy” as journalists in ways that might be, worryingly, “unforeseeable” given this lack of transparency. Those risks included “limiting discretionary decision-making abilities” and “structuring their view of what is newsworthy in ways that make it hard for them to think about counterfactuals or alternatives, or by introducing bias into their output.”

“I don’t know what it was trained with and what bias it might have,” one reporter said. “And that of course has an influence on what kind of story I might tell.”

The journalists interviewed reported using AI tools provided by companies like Google, Amazon, and Microsoft to automate various tasks. Simon identified a “familiar power imbalance between platforms and publishers” in which news organizations are, again, not the ones with the upper hand. As news publishers discovered during a tumultuous relationship with Facebook, platforms don’t always have the news industry’s best interests at heart, and becoming dependent on one for revenue and traffic referrals can lead to major heartache. Now, instead of Meta funding new initiatives and making soothing noises about the importance of local journalism, it’s Microsoft announcing new journalism partnerships and OpenAI striking major licensing deals with news publishers.

When it comes to AI technology, specifically, the Tow Center report notes “the platforms not only get to determine the overall conditions of use, they also have control over more granular terms, such as the extent to which they permit publishers to customize AI applications built on top of their technology — a dynamic that could end up restricting the tools or systems publishers can build.”

Interviewees pointed to Graphiq as a cautionary tale. The infographics tool was used by the Associated Press, the LA Times, Reuters, and others but ended its service for newsrooms a month after announcing it was being acquired by Amazon in 2017. “Graphiq paid lip service to the news industry in a statement (‘We greatly enjoyed working with publishers over the last few years to help them tell the news and look forward to continuing to use our technology in other exciting areas’), but publishers who had been using it were left hanging,” Simon noted.

Smaller outlets with fewer resources are more likely to end up dependent. Bigger and better-off news organizations interviewed were more likely to have some in-house AI tools, Simon found. “My findings suggest that the high cost of custom AI development puts it out of reach for all but the best-resourced news organizations,” he wrote. “For everyone else, the most viable solutions are third-party offerings from platform companies and the like.”

Some see an existential threat for journalism. The rise of AI-powered chatbots and search engines that provide answers summarized from reporting, rather than sending users to news sites via the familiar blue links, has some worried that AI might take down not just Superb Owl content but entire news organizations.

A newsroom manager, based in Germany, reported that two-thirds of his outlet’s online audience comes from search, with 90% of that share from Google. (Taken together, that means roughly 60% of the outlet’s total online audience arrives via Google alone.) “That’s a big risk for us, if clicks to our content become optional because Google has decided to go all-in on AI-enhanced search where users just get short answers,” he told Simon.

“Why would people still come to our website and read a story if they can get something [via AI-enhanced search] that is tailored to their interests?” another journalist, based in the United States, asked. “Something that’s short and doesn’t mean they have to make another click?”

The implications of AI-powered search, Simon writes, “will depend on strategic choices made by a set of powerful actors over whom the news industry has little control, but whose decisions could have severe ramifications for publishers — both in terms of their financial position and in their ability to reach audiences.” (Comforting, right?)

Ultimately, Simon argues, AI represents more of a “retooling” of the news. “It does not impact the fundamental need to access and gather information, to process it into ‘news,’ to reach existing and new audiences, and to make money,” he writes in the report’s conclusion.

Some newsroom managers acknowledged to Simon — “despite public proclamations to the contrary” — that AI may take some jobs in journalism. Most seemed to agree, however, that original reporting and newsgathering appear beyond the technology’s reach for now.

“The job of journalism is to find stuff not on the internet already,” one editor said. “Artificial intelligence won’t be able to do that.”

You can read the full report here.
