
Crowds of regular people are as good at moderating fake news on Facebook as professional fact-checkers

“Crowdsourcing is a promising approach for helping to identify misinformation at scale.”

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Facebook’s fact-checking program relies on third-party fact-checking and news organizations to help weed out fake and misleading news on the platform. That limits how far the program can scale, since only so many organizations do this kind of work.

But what if Facebook could use regular people — people whose day job is not fact-checking — to fact-check articles instead? A new working paper from researchers at MIT suggests that such an approach could be surprisingly effective, even if those people only read headlines. “Crowdsourcing is a promising approach for helping to identify misinformation at scale,” the paper’s authors write.

The team used a set of 207 news articles that had been flagged for fact-checking by Facebook’s internal algorithm. (The study was done in collaboration with Facebook’s Community Review team, with funding from the Hewlett Foundation.) Two groups were then asked to check the accuracy of the articles: three professional fact-checkers, who researched the full articles to make their verdicts, and 1,128 Americans recruited on Mechanical Turk, who judged accuracy based only on the articles’ headlines and lede sentences. The result:

We find that the average rating of a politically balanced crowd of 10 laypeople is as correlated with the average fact-checker rating as the fact-checkers’ ratings are correlated with each other. Furthermore, the layperson ratings can predict whether the majority of fact-checkers rated a headline as “true” with high accuracy, particularly for headlines where all three fact-checkers agree. We also find that layperson cognitive reflection, political knowledge, and Democratic Party preference are positively related to agreement with fact-checker ratings; and that informing laypeople of each headline’s publisher leads to a small increase in agreement with fact-checkers.
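To get a feel for the comparison the researchers describe, here is a minimal, hypothetical sketch (simulated numbers, not the paper’s data or code) of how you might check whether the average rating of a crowd of ten laypeople tracks fact-checker ratings about as well as the fact-checkers track each other:

```python
import numpy as np
from itertools import combinations

# Hypothetical setup mirroring the study's shape: 207 articles,
# 3 professional fact-checkers, and a balanced crowd of 10 laypeople.
# All ratings below are simulated for illustration only.
rng = np.random.default_rng(0)
n_articles = 207
latent = rng.uniform(1, 7, n_articles)  # unobserved "true" accuracy score

# Fact-checkers: relatively low-noise ratings around the latent score.
fact_checkers = np.clip(latent[:, None] + rng.normal(0, 1.0, (n_articles, 3)), 1, 7)

# Laypeople: noisier individually, but we average across the crowd.
crowd = np.clip(latent[:, None] + rng.normal(0, 2.0, (n_articles, 10)), 1, 7)

# How well do the fact-checkers agree with each other, on average?
fc_pair_rs = [np.corrcoef(fact_checkers[:, i], fact_checkers[:, j])[0, 1]
              for i, j in combinations(range(3), 2)]
print("mean fact-checker vs. fact-checker r:", round(float(np.mean(fc_pair_rs)), 3))

# How well does the crowd's average rating track the fact-checkers' average?
crowd_vs_fc = np.corrcoef(crowd.mean(axis=1), fact_checkers.mean(axis=1))[0, 1]
print("crowd average vs. fact-checker average r:", round(float(crowd_vs_fc), 3))
```

The paper’s headline finding is that, with real ratings, the second number comes out roughly as large as the first.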

This is significant partly because Facebook’s official fact-checking partners really don’t get through that much content: In January, The Hill reported that “Facebook’s six [U.S.] partners have 26 full-time staff and fact-checked roughly 200 pieces of content per month.” Not only is this a drop in the bucket, but “it also has the potential to increase belief in, and sharing of, misinformation that fails to get checked via the ‘implied truth effect,’” the researchers write. “People may infer that lack of warning implies that a claim has been verified. Furthermore, even when fact-check warnings are successfully applied to misinformation, their impact may be reduced by lack of trust.” (In 2019, Pew found that 70% of Republicans, and 48% of Americans overall, think fact-checkers are biased.)

In this study, the professional fact-checkers usually — but not always — agreed with each other. “At least two out of three fact-checkers’ categorical ratings agreed for over 90% of the articles,” the authors write. Still, “this level of variation in the fact-checker ratings has important implications for fact-checking programs, such as emphasizing the importance of not relying on ratings from just a single fact-checker for certifying veracity, and highlighting that ‘truth’ is often not a simple black and white classification problem.”

With that in mind, they turned to the MTurkers.

“A relatively small number of laypeople can produce an aggregate judgment, given only the headline and lede of an article, that approximates the individual judgments of professional fact-checkers,” the authors write. It doesn’t mean that professionals are no longer necessary: “We see crowdsourcing as just one component of a misinformation detection system that incorporates machine learning, layperson ratings, and expert judgments.”

Art by Daniel Spacek on Behance.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).
POSTED     Oct. 9, 2020, 10:47 a.m.
SEE MORE ON Audience & Social
PART OF A SERIES     Real News About Fake News