DOHA, Qatar — If robot reporters are going to deploy from drones in war zones in the future, at what point do we have the conversation about the journalism ethics of all this?
The robots may still be a few years away, but the conversation is happening now (at least about today’s AI technology in newsrooms). At Al Jazeera’s Future of Media Leaders’ Summit earlier this month, a group of experts in areas from media to machine learning discussed how their organizations frame the ethics behind (and in front of!) artificial intelligence.
Ethical AI was one of several topics explored during the gathering in Qatar, which focused on data security, the cloud, and how artificial intelligence can automate and augment journalism. (“Data has become more valuable than oil,” Mohamed Abuagla told the audience in the same presentation as the drone-reporter concept.)

Can a #drone drop a robot into a war zone to report? Doesn’t take a meal break- but can’t empathise with a story! Relationship between man and machine- our future co workers #FMLSummit @AlJazeera pic.twitter.com/Bs2NLWcoNp
— Morwen Williams (@morwenw) March 5, 2018
AI has already been seeded into the media industry, from surfacing trends for story production to moderating comments. Robotic combat correspondents may still be a far-fetched idea, but with machine learning strengthening algorithms by the day, AI innovation is moving at a breakneck pace. Machines are more efficient than humans, sure. But in a human-centric field like journalism, how are newsrooms putting AI ethics into practice?
Ali Shah, the BBC’s head of emerging technology and strategic direction, explained his approach to the moral code of AI in journalism. Yaser Bishr, Al Jazeera Media Network’s executive director of digital, also shared some of his thinking on the future of AI in journalism. Here are some of the takeaways:
In both his keynote speech and subsequent panel participation, Shah walked the audience through the business and user implications of infusing AI into parts of the BBC’s production processes. He kept returning to the question of individual agency. “Every time we’re making a judgment about when to apply [machine learning]…what we’re really doing is making a judgment about human capacity,” he said. “Was it right for me to automate that process? When I’m talking about augmenting someone’s role, what judgment values am I augmenting?”
Shah illustrated how the BBC has used AI to perfect camera angles and cuts when filming, search for quotes in recorded data more speedily, and make recommendations for further viewing when the credits are rolling on the BBC’s online player. (The BBC and Microsoft have also experimented with a voice interface AI.) But he emphasized how those AI tools are intended to automate, augment, and amplify human journalists’ work, not necessarily replace or supersede them. “Machine learning is not going to be the answer to every single problem that we face,” he said.
The BBC is proud to be one of the world’s most trusted news brands, and Shah pointed to the need for balance between trust in the organization and individual agency. “We’re going to have to strike a balance between the utility and the effectiveness and the role it plays in society and in our business,” he said. “What we need to do is constantly recognize [that] our role should be giving a little bit of control back to our audience members.”
He also spoke about the need to educate both the engineers designing the AI and the “masses” who are the intended consumers of it. “Journalists are doing a fantastic job at covering this topic,” he said, but “our job as practitioners is to…break this down to the audience so they have control about how machine learning and AI are used to impact them.” (The BBC has published explainer videos about the technology in the past.) “We have to remember, as media, we are gatekeepers to people’s understanding of the modern world.”
“The use of AI changes our behavior. Decisions are influenced and social norms change/evolve but we must make sure we are consciously aware of the effects of machine learning.”- Ali Shah of @BBC at #FMLSummit #AI
— Al Jazeera PR (@AlJazeera) March 6, 2018
“It’s not about slowing down innovation but about deciding what’s at stake,” Shah said. “Choosing your pace is really important.”
“The speed of evolution we are going through in AI far exceeds anything we’ve done before,” Bishr said, talking about the advancements made in the technology at large. “We’re all for innovation, but I think the discussion about regulating the policy needs to go at the same pace.”
In conversation with Shah, Rainer Kellerhals of Microsoft, and Ahmed Elmagarmid of the Qatar Computing Research Institute, Bishr reiterated the risks of AI algorithms putting people into boxes and cited Tay — Microsoft’s Twitter bot, pulled offline after users trained it to post offensive content — as an example of bias in both input and output. “The risk is not only during the training of the machine, but also during the execution of the machine,” he said.
Dr Yaser Bishr of @ALJazeera: “In the context of generating news and information, there is some degree of bias because the info is gathered by a human so there is that editorial interference” #FMLSummit #journalism #AI
— Al Jazeera PR (@AlJazeera) March 6, 2018
Elmagarmid countered Bishr’s concern about speed: “Things are in motion but things are continuous,” he said calmly. “We have time to adapt to it. We have time to harness it. I think if we look back to the Industrial Revolution, look back to the steam engine…people are always perceiving new technology as threatening.
“At the end of the day you will have [not just] newsrooms, but much better and more efficient and smarter newsrooms,” Elmagarmid said.
“AI is not the Industrial Revolution,” Bishr said, adding to his earlier comments: “We’re not really in a hurry in using AI right now.”