July 17, 2019, 2:58 p.m.
LINK: journals.sagepub.com  ➚   |   Posted by: Joshua Benton   |   July 17, 2019

Syracuse’s Whitney Phillips — scholar of the darker corners of Internet culture, author of “The Oxygen of Amplification,” last seen here offering this dire observation/prediction last winter — has a new paper out in Social Media + Society that might make for a bracing experience for some Nieman Lab readers.

When we think of the nightmarish edge of online culture — the trolling, the disinformation, the rage, the profound information pollution — it’s easy to think of the worst offenders. 4chan denizens, for-the-lulz trolls, actual Nazis — you know the type. But, she writes, maybe the origins of those phenomena aren’t only in those dark corners of Internet culture — maybe they’re also in the good Internet culture, the kind people sometimes get nostalgic about.

I used to believe that the internet used to be fun. Obviously the internet isn’t fun now. Now, keywords in internet studies—certainly, keywords in my own internet studies—include far-right extremism, media manipulation, information pollution, deep state conspiracy theorizing, and a range of vexations stemming from the ethics of amplification.

Until fairly recently, I would sigh and say, remember when memes were funny? When the stakes weren’t so high? I wish it was like that still. I was not alone in these lamentations; when I would find myself musing such things, it was often in the company of other internet researchers, or reporters covering the technology and digital culture beat. Boy oh boy oh boy, we would say. What we wouldn’t do to go back then. It was a simpler time.

…internet/meme culture was a discursive category, one that aligned with and reproduced the norms of whiteness, maleness, middle-classness, and the various tech/geek interests stereotypically associated with middle-class white dudes. In other words: this wasn’t internet culture in the infrastructural sense, that is, anything created on or circulated through the networks of networks that constitute the thing we call The Internet. Nor was it meme culture in the broad contemporary sense, which, as articulated by An Xiao Mina, refers to processes of collaborative creation, regardless of the specific objects that are created. This was a particular culture of a particular demographic, who universalized their experiences on the internet as the internet, and their memes as what memes were.

Now, there is much to say about the degree to which “mainstream” internet culture—at least, what was described as internet culture by its mostly white participants—overlapped with trolling subculture on and around 4chan’s /b/ board, where the subcultural sense of the term “trolling” first emerged in 2003…the intertwine between 4chan and “internet culture” is so deep that you cannot, and you should not, talk about one without talking about the other. However, while trolling has—rightly—been resoundingly condemned for the better part of a decade, the discursive category known as internet culture has, for just as long, been fawned over by advertisers and other entertainment media. The more jagged, trollish edges of “internet culture” may have been sanded off for family-friendly consumption, but the overall category and its distinctive esthetic—one that hinges on irony, remix, and absurd juxtaposition—has in many ways fused with mainstream popular culture.

Specifically, it was the sheer breadth of content mixed together in that earlier web culture that opened the door to what we’ve since seen:

The fact that so many identity-based antagonisms, so many normative race and gender assumptions, and generally so much ugliness was nestled alongside all those harmless and fun and funny images drills right to the root of the problem with internet culture nostalgia. A lot of “internet culture” was harmless and fun and funny. But it came with a very high price of entry. To enjoy the fun and funny memes, you had to be willing—you had to be able—to deal with all the ugly ones. When faced with this bargain, many people simply laughed at both. It was hard to take Nazi memes all that seriously when they were sandwiched between sassy cats and golf course enforcement bears, and so, fun and ugly, ugly and fun, all were flattened into morally equivalent images in a flipbook. Others selectively ignored the most upsetting images, or at least found ways to cordon them off as being “just” a joke, or more frequently, “just” trolling, on “just” the internet.

Of course, only certain kinds of people, with certain kinds of experiences, would be able and willing to affect such indiscriminate mirth. Similarly, only certain kinds of people, with certain kinds of experiences, would be able and willing to say, “ok, yes, I know that image is hateful and dehumanizing, so I will blink and not engage with it, or you know, maybe chuckle a little to myself, but I won’t save it, and I won’t post anything in response, and instead will wait patiently until something that’s ok for me to laugh at shows up.”

Phillips calls that response the “ability to disconnect from consequence, from specificity, from anything but one’s own desire to remain amused forever.” And — apologies for all the blockquoting, but it’s good! — she ties that back to some of the journalists who covered this space when its public impact turned more serious down the road.

Very quickly, I realized that many of the young reporters who initially helped amplify the white nationalist “alt right” by pointing and laughing at them, had all come up in and around internet culture-type circles. They may not have been trolls themselves, but their familiarity with trolling subculture, and experience with precisely the kind of discordant swirl featured in the aforementioned early-2000s image dump, perfectly prepped them for pro-Trump shitposting. They knew what this was. This was just trolls being trolls. This was just 4chan being 4chan. This was just the internet. Those Swastikas didn’t mean anything. They recognized the clothes the wolf was wearing, I argued, and so they didn’t recognize the wolf.

This was how the wolf operated: by exploiting the fact that so many (white) people have been trained not to take the things that happen on the internet very seriously. They operated by laundering hate into the mainstream through “ironically” racist memes, then using all that laughter as a radicalization and recruitment tool. They operated by drawing from the media manipulation strategies of the subcultural trolls who came before, back when these behaviors were, to some anyway, still pretty funny.

Go read the whole thing, but here’s the lesson to take from it:

Most foundationally, shaking your head disapprovingly at the obvious villains—the obvious manipulators, the obvious abusers, the obvious fascists—isn’t enough. Abusers, manipulators, and fascists on the internet (or anywhere) certainly warrant disapproving head shakes, and worse. But so does a whole lot else. Pressingly, the things that were—and that for some people, still are—fun and funny and apparently harmless need more careful unpacking. Fun and funny and apparently harmless things have a way of obscuring weapons that privileged people cannot see, because they do not have to see them.
