Nov. 3, 2017, 8:30 a.m.
Audience & Social

Next up in the world of “information disorder”: Messaging apps and doctored audio and video

Plus: Facebook, Google, and Twitter face Congress, and new research into the spread of misinformation on WeChat.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

“States or counties experiencing more fake news visitations also tended to vote for Donald Trump.” A Microsoft Research team analyzed traffic to fake news websites in the run-up to the 2016 U.S. presidential election and found that, at least among users of the desktop browsers Microsoft Internet Explorer 11 and Edge (together about 16 percent of desktop browser usage), “states or counties experiencing more fake news visitations also tended to vote for Donald Trump.”

“It remains to be shown if similar trends occur for other browsers and in mobile scenarios — 51.7 percent of Facebook’s worldwide active monthly users access the site exclusively from mobile devices,” they write.
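The comparison at the heart of the finding is straightforward: county-level fake-news visit rates set against county-level vote share. A minimal sketch of that calculation is below; the numbers and column names are entirely invented for illustration (the researchers worked from real IE11/Edge telemetry, not data like this):

    # Hypothetical illustration of the study's core comparison: correlating
    # county-level fake-news visit rates with Trump vote share. The data and
    # column names are invented; only the shape of the analysis is real.
    import pandas as pd

    counties = pd.DataFrame({
        "county": ["A", "B", "C", "D"],
        "fake_news_visits_per_user": [0.8, 2.1, 0.3, 1.7],  # hypothetical
        "trump_vote_share": [0.44, 0.61, 0.38, 0.57],       # hypothetical
    })

    # A positive correlation is the pattern the paper reports.
    r = counties["fake_news_visits_per_user"].corr(counties["trump_vote_share"])
    print(f"Pearson r = {r:.2f}")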

Facebook, Google, and Twitter face Congress. My colleague Shan Wang wrote up Wednesday’s hearings. (“What is a bot versus a troll?”)

Future trends in “information disorder.” A report released this week by the Council of Europe, with support from the Shorenstein Center and First Draft, looks at “information disorder”:

While the historical impact of rumors and fabricated content has been well documented, we argue that contemporary social technology means that we are witnessing something new: information pollution at a global scale; a complex web of motivations for creating, disseminating and consuming these “polluted” messages; a myriad of content types and techniques for amplifying content; innumerable platforms hosting and reproducing this content; and breakneck speeds of communication between trusted peers.

The report by Claire Wardle and Hossein Derakhshan is a long, good overview of the space. I was particularly interested in the fourth section on future trends: “Much of the focus has been on the Facebook News Feed. But even a cursory glance outside of the U.S. demonstrates that the next frontier for mis- and dis-information is closed-messaging apps.” And, they write, “our biggest challenge will be the speed at which technology is refining the creation of fabricated video and audio.” For instance:

Audio can be manipulated even more easily than video. Adobe has created Project VoCo, which has been nicknamed ‘Photoshop for audio.’ The product allows users to feed a 10-to-20-minute clip of someone’s voice into the application and then dictate words in that person’s exact voice. Another company called Lyrebird is working on voice generation. On its site, it claims to “need as little as one minute of audio recording of a speaker to compute a unique key defining her/his voice. This key will then generate anything from its corresponding voice.” It also plans to create an API whereby other platforms could easily use those voices.
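The workflow Lyrebird describes (a short sample yields a “voice key,” which then drives synthesis of arbitrary speech) maps onto a very simple API shape. The sketch below is purely hypothetical: the endpoint, field names, and functions are invented for illustration and do not represent Lyrebird’s or Adobe’s actual interfaces.

    # A purely hypothetical sketch of the kind of voice-cloning API the
    # report describes. Endpoint, field names, and workflow are invented
    # for illustration only.
    import requests

    API_BASE = "https://api.voice.example.com/v1"  # hypothetical endpoint

    def compute_voice_key(sample_path: str) -> str:
        """Upload ~1 minute of a speaker's audio; get back a 'voice key'."""
        with open(sample_path, "rb") as f:
            resp = requests.post(f"{API_BASE}/voices", files={"sample": f})
        resp.raise_for_status()
        return resp.json()["voice_key"]

    def synthesize(voice_key: str, text: str, out_path: str) -> None:
        """Generate speech in the cloned voice from arbitrary text."""
        resp = requests.post(
            f"{API_BASE}/synthesize",
            json={"voice_key": voice_key, "text": text},
        )
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            f.write(resp.content)

    if __name__ == "__main__":
        key = compute_voice_key("one_minute_sample.wav")
        synthesize(key, "Words the speaker never actually said.", "fake_clip.wav")

The point of the sketch is how low the barrier would be: two calls, one short recording, and any text becomes audio in the target’s voice.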

“A dazzling array of content generators and the potential for polarization.” Chi Zhang writes about how her team at the University of Southern California is collaborating with The Alhambra Source and Asian Americans Advancing Justice, with support from the Tow Center for Digital Journalism, to “assess the nature of bias and misinformation in WeChat and ethnic Chinese media and to explore strategies for intervention.” During the 2016 U.S. presidential election, “anti-Hillary memes and conspiracy theories about sharia law found their way onto the mobile messaging platform, which serves the growing number of Chinese immigrants in the United States.”

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).