These were some of the questions posed Tuesday at a panel discussion held by the Tow Center for Digital Journalism and the Brown Institute for Media Innovation at Columbia University, which addressed the ethics of AI-powered journalism products.
Techniques such as machine learning and natural language processing require vast amounts of data to learn to behave like a human, and Amanda Levendowski, a clinical teaching fellow at NYU's law school, listed a series of considerations journalists should weigh when trying to obtain data for these tasks.
“What does it mean for a journalist to obtain data both legally and ethically? Just because data is publicly available does not necessarily mean that it’s legally available, and it certainly doesn’t mean that it’s necessarily ethically available,” she said. “There’s a lot of different questions about what public means — especially online. Does it make a difference if you show it to a large group of people or small group of people? What does it mean when you feel comfortable disclosing personal information on a dating website versus your public Twitter account versus a LinkedIn profile? Or if you choose to make all of those private, what does it mean to disclose that information?”
#TowAI @levendowski: AI requires data about people created by people. Just because it's there, doesn't mean it's legal to use.
— Simon Galperin (@thensim0nsaid) June 13, 2017
For example, Levendowski noted that many machine learning algorithms were trained on a cache of 1.6 million Enron emails released by the federal government in the early 2000s. Companies are risk-averse, she said, and prefer publicly available datasets, such as the Enron emails or Wikipedia, but those datasets can carry biases.
#TowAI TIL from @levendowski: Many machine learning algorithms have been trained on the Enron Emails dataset: https://t.co/YcYKjsQz5B
— Jon Keegan (@jonkeegan) June 13, 2017
“But when you think about how people use language using a dataset by oil and gas guys in Houston who were convicted of fraud, there are a lot of biases that are going to be baked into that dataset that are being handed down and not just imitated by machines, but sometimes amplified because of the scale, or perpetuated, and so much so that now, even though so many machine learning algorithms have been trained or touched by this dataset, there are entire research papers dedicated to exploring the gender-race power biases that are baked into this dataset.”
@levendowski says: we don't understand the many ways people use language, so it's hard to teach bots how to use it. #towai
— Meredith Broussard (@merbroussard) June 13, 2017
The panel also featured John Keefe, the head of Quartz’s bot studio; BuzzFeed data scientist Gilad Lotan; iRobot director of data science Angela Bassa; Slack’s Jerry Talton; Columbia’s Madeleine Clare Elish; and soon-to-be Northwestern professor Nick Diakopoulos. The full video of the panel (and the rest of the day’s program) is available here and is embedded above; the panel starts about eight minutes in.