The year we get creeped out by algorithms

“Algorithmic judgment is the uncanny valley of computing.”

Computers, it turns out, have a built-in “uncanny valley” (the creepy feeling android robots give us when they look almost, but not quite, human). Just as we don’t want robots too human-shaped — we want them to know their place — we aren’t too happy when our computers go from “smart” (as in automating things and connecting us to each other or to information) to “smart” (as in “let me make that decision for you”).


Algorithms (basically computer programs, but here I’m talking about the complex subset used to calculate results of some consequence, which then shape our experience) became more visible in 2014, and it turns out we’re creeped out. The most visible, most cited, most discussed academic article of 2014 was one that exposed the fact that Facebook uses algorithms to manipulate its News Feed — something a majority of people apparently did not know. Most of the discussion was outrage: The lead author received hundreds of disgusted emails asking how he dared manipulate our social interactions on Facebook; the reality, of course, is that manipulating our social interactions algorithmically is exactly what Facebook does every day.

The Facebook experiment made visible what was always there, and raised more questions than it could answer.

2015 looks to be the year when we start grappling with the power and role of these complex algorithms — sometimes discussed as machine learning or artificial intelligence — and when a thousand more startups (and big companies, since they have a data advantage in fueling these types of algorithms) start trying to “deploy” them not just in apps and sites, but in our devices and objects as chips and sensors become more prevalent.

We’ve had computers for decades. So what’s new? Three developments, all of them significant.

One: Our devices are becoming more and more central to our social, personal, financial, and civic interactions. That feels like old news, but it’s barely a decade old, and the expansion to the next few billion people is still rapid. Digital mediation is now widespread enough to make algorithms widely consequential.

Two: Most digital mediation takes place on platforms and apps in which the true owner, the platform itself, keeps centralized control. This is a new kind of ownership/user experience. Imagine a world in which your phone constantly checked in with the central phone company to decide which of your relatives you should be allowed to call, jumbled their sentences into whatever order it deemed “better” (to keep you “engaged” and on the phone longer), and served you ads in the middle. That’s many of your platforms today. We no longer truly own our intermediaries; instead, they are guided by an invisible algorithmic layer that answers to the platform, not to us.

No wonder we’re creeped out.

Three: Algorithms are increasingly being deployed to make decisions where there is no right answer, only a judgment call. Google says it’s showing us the most relevant results, and Facebook says it aims to show us what’s most important. But what’s relevant? What’s important? Unlike forms of automation where there is a definable right answer, we’re seeing the birth of a new era, the era of judging machines: machines that don’t just sort a database quickly or perform a mathematical calculation, but decide what is “best,” “relevant,” “appropriate,” or “harmful.” It’s one thing to ask a computer for the answer to a factoring problem, or the quickest driving path from point A to point B — it’s another to have a computer decide for us who among our friends is most “relevant” to us, which piece of news matters most, or who should be hired (or fired).
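To make that distinction concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the signal names, the weights, the function names; no platform publishes its actual formula), but it shows where the judgment hides: the sorting itself is a textbook operation, while the values are smuggled in through arbitrary weights.

```python
# A computation with a definable right answer: anyone can verify the result.
def total_route_length(leg_distances):
    """The length of a route is simply the sum of its legs."""
    return sum(leg_distances)

# A judgment call dressed up as a computation. These weights are purely
# hypothetical; changing them changes what counts as "relevant," and there
# is no ground truth to check the output against.
RELEVANCE_WEIGHTS = {"likes": 1.0, "comments": 2.5, "recency": 4.0}

def relevance_score(post):
    """Score one feed item, where 'post' is a dict of engagement signals."""
    return sum(weight * post.get(signal, 0)
               for signal, weight in RELEVANCE_WEIGHTS.items())

def rank_feed(posts):
    """Order posts by descending 'relevance': a value judgment, not a fact."""
    return sorted(posts, key=relevance_score, reverse=True)
```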

Deep philosophical questions that humans have debated for millennia — and have erected complex (and far from perfect) gatekeeping, credentialing, and judging apparatuses to grapple with — are now being put to computers, and their answers, spat out through proprietary and opaque systems, are being used to shape our lives.

The spread of algorithmic judgment is much more significant than whether IBM’s Deep Blue could beat Kasparov at chess, a game that was always unwieldy for humans and well suited to machine computation. Machines have out-muscled us for centuries and out-computed us for decades. Now they are going to judge for us, instead of us, and out-judge us.

And 2015 looks to be the year this becomes visible and widespread. Will it become even creepier? And will there be a backlash? That’s probably for 2016.

Zeynep Tufekci is an assistant professor at the University of North Carolina’s School of Information and Library Science.