Nieman Foundation at Harvard
LINK: news.utexas.edu ➚   |   Posted by: Laura Hazard Owen   |   Nov. 8, 2019, 11:06 a.m.

Researchers attached EEG electrodes to the heads of 83 undergraduate students and tracked their brain activity as the students judged whether news headlines, including some flagged as false, were fake. While the students showed “reactions of discomfort…when headlines supported their beliefs but were flagged as false,” that dissonance didn’t stop them from going with what they already believed:

This dissonance was not enough to make participants change their minds. They overwhelmingly said that headlines conforming with their preexisting beliefs were true, regardless of whether they were flagged as potentially fake. The flag did not change their initial response to the headline, even if it did make them pause a moment longer and study it a bit more carefully.

It didn’t matter whether the subjects identified as Republicans or Democrats: Political affiliation “didn’t influence their ability to detect fake news,” lead author Patricia Moravec said, “and it didn’t determine how skeptical they were about what’s news and what’s not.” The students assessed only 44 percent of the stories accurately.

The study was published this week in Management Information Systems Quarterly.
