Bots don’t actually write Olive Garden commercials, folks — at least not yet.
These "I forced a bot to watch X" posts are almost certainly 100% human-written with no bot involved. Here's how you can tell. 1/12 https://t.co/4wVxfraqZS
— Janelle Shane (@JanelleCShane) June 14, 2018
But they can get trapped in an infinite loop of screaming and self-care.
Two auto-replying bots have now been stuck in a loop with each other for several hours and the resulting thread reads like most of my internal monologue: pic.twitter.com/QK5NThBTAN
— Marie Le Conte (@youngvulgarian) June 7, 2018
These two Twitter-famous bot moments, just a week apart earlier this year, show how gullible humans can be about what bots are and how they’re used. (But they did show some pretty strong feelings about Olive Garden.)
Two-thirds of Americans have heard of social media bots. (Good!) Eighty percent of those say bots are mostly used with bad intentions, compared to 17 percent who say they’re used for good, according to a Pew Research Center survey out today. (Meh.) The survey was conducted among 4,581 respondents in late July and August, after those bot tweets blew up.
But there’s obviously a more serious side to this as well: when foreign (or domestic!) actors use bots to spray dis- and misinformation into the U.S.’s (or any country’s) social sphere, people who can’t recognize a bot are at greater risk of blindly believing whatever one puts out.
Forty-seven percent of those who have heard about bots say they’re at least somewhat confident they can recognize bots on social media, but only seven percent say they’re “very confident.” On the other side, 15 percent say they’re not at all equipped to distinguish bots. Pew points out that in an earlier study, 84 percent of Americans said they were confident in their ability to recognize fabricated news stories.
Unsurprisingly, this breaks down by age: around 60 percent of Americans ages 18 to 29 who have heard of bots are at least somewhat confident in their ability to recognize them, compared with roughly 30 percent or less for older adults.
Many Americans think that bots are involved in the news they get on social media:
And, just as Americans are concerned about bots generally, many in the public perceive bots’ involvement in the news to be negative, at least when it comes to how well-informed the public is about the news. About two-thirds of those who have heard about social media bots (66%) say that these accounts have a mostly negative effect on how well-informed Americans are about events and issues in the news. In contrast, only 11% believe bots have a mostly positive effect, and about two-in-ten (21%) say they do not have much of an effect.
What’s more, those who think bots are responsible for a sizable portion of the news on social media are also more likely to think bots have a negative impact on keeping the public informed. Among those who say at least a fair amount of news on social media comes from bots, about seven-in-ten (72%) say that bots negatively impact how well-informed Americans are about the news, compared with 11% who say bots have a positive impact and 17% who say they have no impact.
There are some uses of bots a majority finds acceptable, though: the government deploying bots for emergency updates, businesses promoting products via bots, and a company answering customers’ questions with bots. Respondents are split 50-49 on news organizations using bots to post headlines and stories.
(I’m glad at least 92 percent of us are on the same page about not using a bot for sharing made-up and/or false information.)
Read the full findings here.