Prediction
“AI” discourse as misinformation
Name
Jonas Kaiser
Excerpt
“The ‘AI’ that dominates our collective imagination differs significantly from the technology at our fingertips. For most of us, this imagined technology is based more on the fiction of the last decades than on reality. Journalism’s task, then, should be to bridge that gap.”
Prediction ID
4a6f6e617320-24

Talk of artificial intelligence, or “AI,” is seemingly everywhere these days. You may even be tired of seeing yet another piece on “AI.” In articles, “AI” is linked to topics as diverse as climate change, education, warfare, and journalism. Meanwhile, products are increasingly advertised as being “charged by AI,” and even the stock photos in those dubious ads at the bottom of news websites are now commonly replaced with images generated by “AI.” This sheer omnipresence of a technology in public discourse seems to fit its revolutionary character, or at least its revolutionary promise. However, the “AI” that dominates our collective imagination differs significantly from the technology at our fingertips. For most of us, this imagined technology is based more on the fiction of the last decades than on reality. Journalism’s task, then, should be to bridge that gap.

However, when journalists write about artificial intelligence, they more often than not refer simply to “AI” (see Figure 1), thus conflating artificial general intelligence (AGI) and generative AI. While both invoke “AI,” they differ significantly, and journalists all too often ignore those differences. By glossing over the complexities of “AI,” journalists are doing their readers a disservice at best and spreading misinformation at worst.

Figure 1. Coverage of AI types in U.S. media by month. Source: Media Cloud, U.S. national database. Time frame: November 2020–November 2023.

Artificial general intelligence, or AGI, broadly describes an “AI system that is at least as capable as a human at most tasks.” But even the definition of AGI is a topic of controversy. For example, OpenAI’s definition of AGI not only differs from Google’s but also has direct consequences for its bottom line. And while AGI is still only theoretical, generative AI systems like ChatGPT are a reality, and they are very different from AGI. Because they rely on statistical patterns in their training data rather than anything resembling intelligence, generative AI systems have earned a different nickname: stochastic parrots. Some dispute this characterization, but either way, there is a big difference between a “stochastic parrot” and AGI. Evoking “AI” in the context of generative AI, however, ignores that difference.
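To make the “stochastic parrot” idea concrete, here is a minimal sketch of the underlying intuition: a model that can only recombine word sequences it has seen before, sampled at random. This toy bigram model, with an invented three-sentence corpus, is a drastic simplification offered purely for illustration; it makes no claim about how ChatGPT or any real system is actually built.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": it can only echo word-to-word
# transitions observed in its (invented, illustrative) corpus.
corpus = (
    "the model predicts the next word "
    "the model repeats patterns from its training data "
    "the parrot repeats the training data"
).split()

# Record which words follow which in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """Repeatedly sample a continuation seen in training.
    No understanding is involved, only observed frequencies."""
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # dead end: word never appeared mid-corpus
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking but purely statistical, which is exactly the point of the nickname; real large language models are vastly more sophisticated, but the critique is that the principle is the same.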

You may be asking: Isn’t that always the case with new technologies? Isn’t there always a gap between what a technology is and what people think it is? To some extent, sure. But consider the difference: nobody used the term “blockchain” before the technology was invented and popularized, whereas “AI” existed in popular media for many years before the technology was widely deployed to the public. We have decades of pop culture depicting “AI,” and this of course shapes how we think about the technology. Put differently, just because we now have technology that we can classify, one way or another, as “AI,” that does not erase the imagined technology in people’s minds. It is thus perhaps not too surprising that only 18% of Americans have used large language models like ChatGPT, yet over 50% are concerned about “AI.”

Figure 1 shows that U.S. journalists typically default to the generic “AI” when covering the topic, which doesn’t reflect the nuances of our actual technological landscape, where AGI is not yet a reality. This lack of specificity not only leaves readers grappling with their preconceived notions of “AI,” which often veer toward AGI, but also enables tech companies to define “AI” to their benefit.

Journalistic coverage that refers not to generative AI or AGI specifically but simply to “AI” spreads misinformation. It risks evoking the specter of AGI for its audience while actually covering generative AI. This happens in articles that grant agency to “AI” where there is none, as well as in pieces that lump AGI and generative AI together under the umbrella term “AI” when there is, in fact, a massive difference between them. The victim of this coverage is the audience, which ends up both informed and misinformed at the same time, struggling to fit the new information into its imagined conception of “AI.” Against this background, stories like that of the lawyer who filed a brief written with ChatGPT are perhaps not much of a surprise.

The solution to this issue is simple: journalistic precision. Saying “generative AI” or “artificial general intelligence” is more unwieldy than saying “AI,” and it might not get the same clicks. But it’s the difference between talking about a chatbot that often gets standardized tests right and numbers wrong and, you know, Skynet.

Jonas Kaiser is a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University and an assistant professor at Suffolk University.
