Nieman Foundation at Harvard
March 19, 2018, 9 a.m.
Reporting & Production

How digital leaders from the BBC and Al Jazeera are planning for the ethics of AI

“We have to remember, as media, we are gatekeepers to people’s understanding of the modern world.”

If robot reporters are going to deploy from drones in war zones in the future, at what point do we have the conversation about the journalism ethics of all this?

The robots may still be a few years away, but the conversation is happening now (at least about today’s AI technology in newsrooms). At Al Jazeera’s Future of Media Leaders’ Summit earlier this month, a group of experts in areas from media to machine learning discussed how their organizations frame the ethics behind (and in front of!) artificial intelligence.

Ethical AI was one of several topics explored during the gathering in Qatar, focused on data security, the cloud, and how artificial intelligence can automate and augment journalism. (“Data has become more valuable than oil,” Mohamed Abuagla told the audience in the same presentation as the drone-reporter concept.)

AI has already been seeded into the media industry, from surfacing trends for story production to moderating comments. Robotic combat correspondents may still be a far-fetched idea. But with machine learning strengthening algorithms day by day and hour by hour, AI innovations are occurring at a breakneck pace. Machines are more efficient than humans, sure. But in a human-centric field like journalism, how are newsrooms putting AI ethics into practice?

Ali Shah, the BBC’s head of emerging technology and strategic direction, explained his approach to the moral code of AI in journalism. Yaser Bishr, Al Jazeera Media Network’s executive director of digital, also shared some of his thinking on the future of AI in journalism. Here are some of the takeaways:

Ali Shah, the BBC

In both his keynote speech and subsequent panel participation, Shah walked the audience through the business and user implications of infusing AI into parts of the BBC’s production processes. He kept returning to the question of individual agency. “Every time we’re making a judgment about when to apply [machine learning]…what we’re really doing is making a judgment about human capacity,” he said. “Was it right for me to automate that process? When I’m talking about augmenting someone’s role, what judgment values am I augmenting?”

Shah illustrated how the BBC has used AI to perfect camera angles and cuts when filming, search for quotes in recorded data more speedily, and make recommendations for further viewing when the credits are rolling on the BBC’s online player. (The BBC and Microsoft have also experimented with a voice interface AI.) But he emphasized how those AI tools are intended to automate, augment, and amplify human journalists’ work, not necessarily replace or supersede them. “Machine learning is not going to be the answer to every single problem that we face,” he said.

The BBC is proud to be one of the world’s most trusted news brands, and Shah pointed to the need for balance between trust in the organization and individual agency. “We’re going to have to strike a balance between the utility and the effectiveness and the role it plays in society and in our business,” he said. “What we need to do is constantly recognize [that] our role should be giving a little bit of control back to our audience members.”

He also spoke about the need to educate both the engineers designing the AI and the “masses” who are the intended consumers of it. “Journalists are doing a fantastic job at covering this topic,” he said, but “our job as practitioners is to…break this down to the audience so they have control about how machine learning and AI are used to impact them.” (The BBC has published explainer videos about the technology in the past.) “We have to remember, as media, we are gatekeepers to people’s understanding of the modern world.”

“It’s not about slowing down innovation but about deciding what’s at stake,” Shah said. “Choosing your pace is really important.”

Yaser Bishr, Al Jazeera Media Network

Bishr, who helped bring AJ+ to life and has since used Facebook to pull followers onto Al Jazeera’s new Jetty podcast network, also emphasized the need to tread carefully.

“The speed of evolution we are going through in AI far exceeds anything we’ve done before,” Bishr said, talking about the advancements made in the technology at large. “We’re all for innovation, but I think the discussion about regulating the policy needs to go at the same pace.”

In conversation with Shah, Rainer Kellerhais of Microsoft, and Ahmed Elmagarmid of the Qatar Computing Research Institute, Bishr reiterated the risks of AI algorithms putting people into boxes and cited Microsoft’s exiled Twitter bot as an example of input and output bias. “The risk is not only during the training of the machine, but also during the execution of the machine,” he said.

Elmagarmid countered his concern about speed: “Things are in motion but things are continuous,” he said calmly. “We have time to adapt to it. We have time to harness it. I think if we look back to the Industrial Revolution, look back to the steam engine…people are always perceiving new technology as threatening.

“At the end of the day you will have [not just] newsrooms, but much better and more efficient and smarter newsrooms,” Elmagarmid said.

“AI is not the Industrial Revolution,” Bishr said, adding to his earlier comments: “We’re not really in a hurry in using AI right now.”

Image from user Comfreak used under a Creative Commons license.
