Nieman Foundation at Harvard
Aug. 16, 2018, 1:29 p.m.
Audience & Social
Link: www.washingtonpost.com | Posted by: Shan Wang | August 16, 2018

All unhappy social media networks are unhappy in their own ways.

Twitter has capped off a weird week of equivocating over the presence of Alex Jones and InfoWars on its platform, as other platforms like Facebook and YouTube finally decided to boot InfoWars content. (Jones is currently facing a seven-day mini-ban that puts his account in “read-only mode”: he can still read tweets and send DMs, but he can’t tweet, like, or RT anything himself.)

Now, Twitter CEO Jack Dorsey says the platform will be experimenting with “features that would promote alternative viewpoints in Twitter’s timeline to address misinformation,” according to an interview Dorsey gave to the Washington Post. Twitter would also consider adding “context” around false tweets, a practice YouTube is also testing through partnerships with Wikipedia and Encyclopædia Britannica. From the Post:

Dorsey said Twitter hasn’t changed its incentives, which were originally designed to nudge people to interact and keep them engaged, in the 12 years since Twitter was founded. “We often turn to policy to fix a lot of these issues, but I think that is only treating surface-level symptoms that we are seeing,” Dorsey said.

With more limited resources than Facebook or Google, though, Twitter has to be selective about its investments in safety. “Choosing to do one of them comes at a cost of not doing something else because of the number of resources we have,” Dorsey said.

One solution Twitter is exploring is to surround false tweets with factual context, Dorsey said. Earlier this week, a tweet from an account that parodied Peter Strzok, an FBI agent fired for his anti-Trump text messages, called the president a “madman” and garnered more than 56,000 retweets. More context about a tweet, including “tweets that call it out as obviously fake,” could help people “make judgments for themselves,” Dorsey said.

Dorsey also told Post reporters that the platform would think about labeling automated accounts as bots. It would also consider major design tweaks, such as changing the “heart” button and how the platform displays user follower counts.

I like the sound of labeling automated accounts posing as humans. And as someone who uses “likes” solely as a bookmarking function, I say, please do bring back faves. But adding “alternative viewpoints” into someone’s timeline, in order to break “echo chambers,” sounds like a misplaced band-aid for a problem of Twitter’s own creation, or at least a problem that’s the natural outcome of years of uneven enforcement of its policies. (Dorsey has highlighted Twitter’s efforts at consistent enforcement in interviews, and criticized other platforms’ inconsistency.) And, as those on the misinformation and media literacy beat have pointed out, giving people “context” as a corrective still makes it seem like it’s up to the individual user to decide whether a fact is really a fact.

As my colleague Laura Hazard Owen pointed out, if you don’t like Twitter, then quit it. Or don’t. It’s up to you and how much you can put up with, and how much you gain from using the social network:

Ultimately, what it probably comes down to is: These platforms are terrible, are they ultimately enough of a net positive for me individually that I stay on them? Sure, you can wrap your decision in more high-minded language, but ultimately the decision of whether to stay on social media or leave it feels incredibly minor compared to the other moral dilemmas we’re faced with now, and ultimately I think most people’s decisions to leave or stay will be based on whether the platform is enough fun for them anymore….

We can parse these executives’ language all we want. We can discuss how, even if the Alex Jones case seemed clear-cut, it’s a slippery slope and other cases will be tougher. But in the end all we really have to go by is the companies’ actions. The mistake right now is to expect them to be leaders in any way.

Or, go back to Tumblr.
