Feb. 27, 2023, 12:45 p.m.
Reporting & Production

Meet the first-ever artificial intelligence editor at the Financial Times

“I want to make AI more understandable and accessible to our readers, so it doesn’t feel like magic but merely a tool that they can wield.”

As some newsroom roles go the way of the dinosaurs, brand new jobs are being born. This interview is part of an occasional series of Q&As with people who are the first to hold their title in their newsroom. Read through the rest here.

Madhumita Murgia describes herself as an accidental tech journalist. As a biology student, Murgia studied non-human intelligence in a gray parrot named Alex before she ever focused on intelligence of the artificial variety.

Now, as the Financial Times’ first-ever artificial intelligence editor, Murgia has been tasked with leading coverage on the rapidly evolving field and providing advice and expertise to other FT reporters as they “increasingly encounter stories about how AI is upending industries around the world.” In the newly created role, she’s being asked to sort the hype from the truly transformative in an industry characterized by both.

In recent weeks, Murgia has written about a science fiction magazine that had to stop accepting submissions after being flooded by hundreds of stories generated with the help of AI, China racing to catch up to ChatGPT, and the Vatican hosting a summit to address “the moral conundrums of AI.” (“A rabbi, imam, and the Pope walked into a room …”)

When not covering AI for the FT, Murgia is finishing her first book, Code-Dependent, out in February 2024. We caught up via email. Our back-and-forth has been lightly edited for clarity and that British proclivity for the letter “zed.”

Sarah Scire: How “first” is this position? It’s the first time that someone has held the title of “artificial intelligence editor” in your newsroom, correct? Have you seen other newsrooms create similar positions?

Murgia: It’s a first first! We haven’t had this title, or even a job devoted to AI before at the FT. I had sort of carved it into my beat alongside data and privacy over the last four or five years and focused on areas that impacted society like facial recognition, AI ethics, and cutting-edge applications in healthcare or science. Our innovation editor John Thornhill and West Coast editor Richard Waters often wrote about AI as part of their wider remits, too. But it wasn’t anyone’s primary responsibility.

In recent months, other newsrooms have appointed AI reporters/correspondents to take on this quickly evolving beat, and of course, there are many great reporters who have been writing about AI for a while, such as Karen Hao when she was at MIT Tech Review, and others. What I think is unique about this role at the FT is that it operates within a global newsroom. Correspondents collaborate closely across disciplines and countries — so I hope we can take advantage of that as we build out our coverage.

Scire: What is your job as AI editor? Can you describe, in particular, how you’re thinking about the “global remit” you mentioned in the announcement?

Murgia: The job is to break news and dive deep into how AI technologies work, how they’ll be applied across industries, and the ripple effects on business and society. I’m particularly interested in the impact of AI technologies on our daily lives, for better and worse. It’s a unique role in that I get to report and write, but also work with colleagues to shape stories in their areas of interest. Over the past six years, I’ve collaborated with reporters from the U.S., Brussels, and Berlin, to Kenya, China, and India — it’s something I love about working at the FT.

As AI technologies are adopted more broadly, in the same way that digitization or cloud computing was, correspondents in our bureaus across the world will start to encounter it in their beats. I’ve already heard from several colleagues in beats like media or education about AI-focused stories they’re interested in. With this global remit, I’m hoping we can tie together different threads and trends, and leverage our international perspective to get a sense of how AI is evolving and being adopted at scale.

Scire: What did covering AI look like in your newsroom before this role was created? (And how will that change, now that you’ve taken this title of AI editor?)

Murgia: We aren’t new to covering AI — there are a handful of journalists at the FT who have understood AI well and written about it for a few years now. We were (hopefully) rigorous in our coverage, but perhaps not singularly focused or strategic about it. For instance, I became interested in biometric technologies such as facial recognition in 2018, and spent a while digging into where and how it was being used and the backlash against its rollout — but this was purely driven by interest, and not a larger plan.

Now, we are in a moment where our readers are curious and hungry to learn more about how this set of technologies works and its impact on the workforce. We’ll approach it from this macro angle. I’ve also always taken an interest in the broader societal impacts of AI, including its ethical use and its role in advancing science and healthcare, which I hope we will focus on. We want our coverage to inform, and also to reveal the opportunities, challenges, and pitfalls of AI in the real world.

Scire: You will be covering artificial intelligence as many industries — including journalism! — are trying to learn how it’ll impact their work and business. This is a little meta, but do you foresee AI changing the way you report, write, or publish?

Murgia: It’s been interesting to me how many media organizations and insiders are concerned about this question right now. It’s exacerbated, I think, by the public examples of publishers experimenting with generative AI. So far I haven’t found that these new tools have changed the way I report or write. Good journalism, in my view, is original and reveals previously unknown or hidden truths. Language models work by predicting the most likely next word in a sequence, based on existing text they’ve been trained on. So they cannot ultimately produce or uncover anything truly new or unexpected in their current form.
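(Murgia’s description of next-word prediction can be made concrete with a toy sketch. The tiny corpus, the greedy most-common-word rule, and the continue_text helper below are invented purely for illustration; real language models use neural networks trained on vast amounts of text, but the core idea of choosing a likely continuation from what they’ve already seen is the same.)

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows which
# in a tiny "training" corpus, then always pick the most frequent continuation.
corpus = (
    "the reporter filed the story and the editor read the story "
    "and the editor published the story"
).split()

next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(prompt_word, length=5):
    """Greedily extend a prompt by repeatedly choosing the most common next word."""
    output = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break  # the model never saw this word, so it has nothing to add
        output.append(candidates.most_common(1)[0][0])
    return output

print(" ".join(continue_text("the")))
# Everything produced is recombined from the training text -- nothing genuinely new is uncovered,
# which is the limitation Murgia points to above.
```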

I can see how it might be useful in future, as it becomes more accurate, in gathering basic information quickly, outlining themes, and experimenting with summaries [and] headlines. Perhaps chatbots will be a new way to interface with audiences, to provide tailored content and engage with a reader, based on an organization’s own content. I’ll certainly be looking for creative examples of how it’s being tested out today.

Scire: How are you thinking about disclosures, if any? If the Financial Times begins to use a particular AI-powered tool, for example, do you anticipate mentioning that within your coverage?

Murgia: I don’t know of any plans to use AI tools at the FT just now, but I assume the leadership is following developments in generative AI closely, like many other media organizations will be. If we did use these tools, though, I’d expect it would be disclosed transparently to our readers, just as all human authors are credited.

Scire: What kinds of previous experience — personal, professional, educational, etc. — led you to this job, specifically?

Murgia: My educational background was in biology — where I focused on neuroscience and disease — and later in clinical immunology. One of my final pieces of work as an undergraduate was an analysis of intelligence in non-human animals, where I focused on an African gray parrot called Alex and its ability to form concepts.

I was an accidental technology journalist, but what I loved about it was breaking down and communicating complexity to a wider audience. I was drawn, in particular, to subjects at the intersection of tech, science, and society. Early on in my career, I investigated how my own personal data was used (and abused) to build digital products, which turned into a years-long rabbit hole, and travelled to Seoul to witness a human being beaten by an AI at the game of Go. I think this job is the nexus of all these fascinations over the years.

Scire: What do you see as some of the challenges and opportunities for being the first AI editor — or the first anything — at a news organization? Are there certain groups, people, or resources that you’ll look to, outside of your own newsroom, as you do this work?

Murgia: The great thing about being a first is that you have some space to figure things out and shape your own path, without having anything to contrast with. A big opportunity here is for us to own a story that intersects with all the things FT readers care about — business, the economy, and the evolution of society. And it’s also a chance for us to help our audience visualize what the future could look like.

The challenge, I think, is communicating the complicated underlying technology in a way that is accessible, but also accurate and nuanced. We don’t want to hype things unnecessarily, or play down the impacts. I’ll certainly look to the scientists, engineers, and ethicists who work in this space to help elucidate the nuances. I want particularly to find women who are experts across these areas, who I find always give me a fresh perspective. I’m keen to also speak to people who are impacted by AI — business owners, governments, ordinary citizens — to explore new angles of the story.

Scire: And what about your hopes and dreams for this new role?

Murgia: My hopes and dreams! Thank you for asking. I want to make AI more understandable and accessible to our readers, so it doesn’t feel like magic but merely a tool that they can wield. I want to report from the frontiers of AI development on how it is changing the way we work and live, and to forecast risks and challenges early on. I want to tell great stories that people will remember.

Scire: I appreciate that — trying to demystify or help readers feel it’s not just “magic.” What do you think about the criticism, from some quarters, that news coverage is anthropomorphizing AI? I feel like this is coming up, in particular, when people are writing about unsettling conversations with chatbots. Is that something that journalists covering AI should be wary of doing?

Murgia: I think it’s really difficult not to anthropomorphize — I struggle with this too — because it’s a very evocative way to explain it to audiences. But I do think we should strive to describe it as a tool, rather than as a “brain” or a companion of some kind. Otherwise, it opens up the risk that consumers interacting with these systems will have certain expectations of them, or infer things that aren’t possible for these systems to do, like understand or feel.

Separately, however, I don’t think we should dismiss the very real impact that these systems do have on our behaviors and psyche, including people projecting human emotions onto chatbots. We’ve seen this happen already. It matters that the technology can fool regular people into believing there is intelligence or sentience behind it, and we should be writing about the risks and guardrails being built in that context.

Scire: Any other advice you’d give journalists covering AI? Maybe particularly for those who might be covering it for the first time in 2023?

Murgia: I’d say take the time to speak to practitioners [and] researchers who can break down and explain concepts in artificial intelligence, as it’s essential to writing well about its applications. As I’ve said above, we should strive to treat it as a tool — an imperfect one at that — in our coverage, and question all claims that sound outlandish. Really, the same skills you’d use for all types of explanatory journalism!

Sarah Scire is deputy editor of Nieman Lab. You can reach her via email (sarah_scire@harvard.edu), Twitter DM (@SarahScire), or Signal (+1 617-299-1821).