While the book relies on familiar notions about the perils of the echo chamber, it uses those ideas as a starting point, rather than an ending, focusing on the algorithmic implications of all the echoing. One of the most intriguing aspects of Pariser’s argument is his exploration of the automation of preference — through the increasing influence of the Like button, through Google’s desire to make its results “relevant,” through various news orgs’ recommendation engines, and, ultimately, through media companies’ economic mandate to please, rather than provoke, their audiences.
That last one isn’t new, of course; news organizations have always navigated a tension between the need to know and the want to know when it comes to the information they serve to their readers. What is new, though, is the fact that audiences’ wants now have data to back them up; they can be analyzed and tailored and otherwise manipulated with a precision that is only increasing. Audiences’ needs, on the other hand, are generally as nebulous as they’ve ever been. But they are no less urgent.
So if we’re to truly gain from what the web offers us, Pariser argues, what we need is something like the kind of thinking that guided journalism through most of the 20th century: a notion that media companies serve more than, in every sense, pure interest. A conviction that news editors (and, more broadly, the fabled gatekeepers who exert power, even on the “democratized” web, over people’s access to information) have a responsibility to give people as full and nuanced a picture of the world as they can.
As much as we need filters, Pariser says, a web experience that is based on filters alone won’t give us that wide-angle view. And now, he argues, while online media remains in its infancy, is the time to do something to change that.
To learn more about Pariser’s thinking — and especially about how that thinking applies to news producers — I spoke with him when he came to Cambridge for a recent reading at the Harvard Book Store. Below is a transcript of our talk. (And apologies for the shaky camera work in the video above, which was shot in a bookstore office; apparently, I had a case of the Austeros that day.)
To begin with, I asked Pariser about a key aspect of this argument: the notion that the filter bubble phenomenon affects not only the information we consume, but also our ability to put that information to use within a functional democracy. Here’s what he told me:
EP: What people care about politically, and what they’re motivated to do something about, is a function of what they know about and what they see in their media. We’ve known this for a while — that, for example, if you chop up television broadcast news, and show different sets of news to different groups of people, and then you poll them about what their preferences are, you get very different results. People see something about the deficit on the news, and they say, ‘Oh, the deficit is the big problem.’ If they see something about the environment, they say the environment is a big problem.
This creates a kind of feedback loop in which your media influences your preferences and your choices; your choices influence your media; and you really can go down a long and narrow path, rather than actually seeing the whole set of issues in front of us.
MG: Interesting. So what should news organizations be doing, and how should they be thinking about this problem as they build their websites and their news experiences?
EP: Well, I think, right now, it’s a little polarized. You actually have the old-school editors who say, ‘Only humans can do this.’ The New York Times, at least until recently, didn’t let even blog authors see how people were using or sharing their links; you had no sense of how you were doing online. That’s sort of one extreme. On the other extreme is this ‘if people click it, then it must be good’ mentality. And I think we need people who are smart about journalism to be thinking about how we import a lot of the implicit things that a front page does, or that a well-edited newspaper does — how do we import that into these algorithms that are going to affect how a lot of people experience the world? Whether — we might prefer that they not, but that’s sort of the way that this is going. So how do we do that? That seems like the big, exciting project right now.