Dec. 4, 2020, 12:45 p.m.
Audience & Social

Facebook will spend less time policing “Men are trash” content, more time taking down “Worst of the Worst”

Plus: Cut the CRAAP, and Facebook’s Oversight Board announces the first cases it will take on.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This regular roundup offers the highlights of what you might have missed.

“Men Are Trash” quadrant. Facebook is changing the way its algorithms handle hate speech. It will spend more time policing actually vile content about underrepresented groups, and less time on comments “against whites, men, and Americans,” The Washington Post’s Elizabeth Dwoskin, Nitasha Tiku, and Heather Kelly reported Thursday. It’s an acknowledgment that “race-blind” content moderation still ends up favoring dominant groups: “Because describing experiences of discrimination can involve critiquing white people, Facebook’s algorithms often automatically removed that content, demonstrating the ways in which even advanced artificial intelligence can be overzealous in tackling nuanced topics.”

The overhaul, which is known as the WoW Project and is in its early stages, involves re-engineering Facebook’s automated moderation systems to get better at detecting and automatically deleting hateful language that is considered “the worst of the worst,” according to internal documents describing the project obtained by The Washington Post. The “worst of the worst” includes slurs directed at Blacks, Muslims, people of more than one race, the LGBTQ community and Jews, according to the documents.

As one way to assess severity, Facebook assigned different types of attacks numerical scores weighted based on their perceived harm. For example, the company’s systems would now place a higher priority on automatically removing statements such as “Gay people are disgusting” than “Men are pigs.”
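
Facebook's actual weights and categories aren't public, but severity-weighted triage of this kind might look something like the minimal Python sketch below. The scores and attack types here are hypothetical illustrations, not values from the leaked documents.

```python
import heapq

# Hypothetical severity weights per attack type (higher = more harmful).
# Illustrative values only; the documents' actual scores weren't published.
SEVERITY = {
    "dehumanizing_slur": 4,    # e.g., "Gay people are disgusting"
    "generalized_insult": 1,   # e.g., "Men are pigs"
}

def triage(flagged_posts):
    """Yield flagged posts in highest-severity-first order."""
    # heapq is a min-heap, so negate the score for highest-first ordering.
    queue = [(-SEVERITY.get(attack, 0), post_id)
             for post_id, attack in flagged_posts]
    heapq.heapify(queue)
    while queue:
        neg_score, post_id = heapq.heappop(queue)
        yield post_id, -neg_score

posts = [("p1", "generalized_insult"), ("p2", "dehumanizing_slur")]
for post_id, score in triage(posts):
    print(post_id, score)  # p2 (score 4) is handled before p1 (score 1)
```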

A little more Facebook news this week: The company said Thursday that it will remove — not downrank or flag but completely take down — false information about the Covid-19 vaccine.

And the Facebook Oversight Board, formed back in May (here’s a good Verge story about it from October), announced the first six cases it will take on.

“Alarmingly, students’ approach was consistent with guidelines.” Commonly accepted media literacy techniques “make students susceptible to scammers, rogues, bad actors, and hate mongers,” Sam Wineburg, Joel Breakstone, Nadav Ziv, and Mark Smith of the Stanford History Education Group write in a recent working paper. (I’ve written about Wineburg’s research in the past.) Here’s how the study worked:

We surveyed 263 college sophomores, juniors, and seniors at a large state university on the East Coast. On one task, students evaluated the trustworthiness of a “news story” that came from a satirical website. On a second task, students evaluated the website of a group claiming to sponsor “nonpartisan research.” In fact the site was created by a Washington, D.C., public relations firm run by a former corporate lobbyist. For both tasks, students had a live internet connection and were instructed to “use any online resources” to make their evaluations.

The students did not do a good job: Over two-thirds never recognized that the first website, “The Seattle Tribune,” was satirical, even though the site says on its About page that it is “a news and entertainment satire web publication.” Ninety-five percent never figured out that the second site, MinimumWage.com, was created by a PR firm that lobbies to keep the minimum wage low. (If you Google “Employment Policies Institute,” the site’s parent organization, you’ll see what it is on the first page of results.)

One big problem, the authors write, is that students rely on the techniques “recommended by college and university websites” to ascertain sources’ validity.

The most ubiquitous tool for teaching web credibility at the college level is known as the CRAAP test, a set of guidelines corresponding to the categories of Currency, Relevance, Authority, Accuracy, and Purpose (hence, CRAAP). A Google search brings up more than 100,000 results for the CRAAP test, which can be found on the websites of elite research universities, regional colleges, and scores of institutions in between. The CRAAP test prompts students to ask questions (sometimes as many as 30) to assess a site’s credibility. While the kinds and number of questions vary, most versions of the CRAAP test direct students’ attention to a site’s top-level domain, the information on its About page, the authority of its links, the presence or absence of banner ads, the listing of contact information, and the currency and frequency of updates. The basic assumptions of the CRAAP test are rooted in an analog age: Websites are like print texts. The best way to evaluate them is to read them carefully. But websites are not variations of print documents. The internet operates by wholly different rules.
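
To make the checklist concrete, here is a toy rendering of that kind of rubric in Python. The five categories come from the passage above; the sample questions are illustrative stand-ins, since real versions of the test ask as many as 30.

```python
# A toy CRAAP-style rubric. Categories are from the paper; the questions
# are illustrative stand-ins for the dozens that real versions ask.
CRAAP_RUBRIC = {
    "Currency":  "Has the site been updated recently?",
    "Relevance": "Does the content relate to your topic?",
    "Authority": "Does the About page list an author and contact info?",
    "Accuracy":  "Do its links point to reputable sources?",
    "Purpose":   "Is the page free of banner ads and obvious bias?",
}

def vertical_reading(on_page_signals: dict) -> bool:
    # The approach the authors criticize: judge a site solely by its own
    # on-page signals, without ever leaving it.
    return all(on_page_signals.get(category, False)
               for category in CRAAP_RUBRIC)

# A polished site run by a PR firm can pass every on-page check --
# which is exactly the paper's point.
print(vertical_reading({c: True for c in CRAAP_RUBRIC}))  # True
```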

Students focused, for instance, on the .com vs. .org domains, believing that .com domains were automatically more suspicious than .org domains. (As the authors note, “Practically every bona fide news source, from The New York Times to The Wall Street Journal to The Washington Post, is registered as a dot-com…If dot-coms ignited students’ suspicion, what boosted their confidence? A website that ended in dot-org.” Anybody can get a dot-org domain.) They were also likely to believe organizations’ About pages, and “failed to realize that for many groups the About page could just as easily be called the spin page.” They were overly trusting of outbound links to reputable news sources; as the authors put it, “when a site like minimumwage.com links to The New York Times or the Columbia Journalism Review, the hope is that the reputation of the link will carry the day — just as it did for these college students.” And they judged sites by their looks; because minimumwage.com looks reputable, they believed that it was.

Overall, the researchers found a lot of misplaced effort: students pored over the guidelines provided by higher-learning institutions when simply clicking away from the site for a minute to read its Wikipedia entry would have told them it wasn’t reputable.
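
That quick “click away” check, what Wineburg’s group calls lateral reading, is simple enough to script. Below is a minimal Python sketch that pulls the opening summary of a topic’s Wikipedia entry via Wikipedia’s public REST API; the requests library and the example query are assumptions for illustration.

```python
import requests

def wikipedia_summary(topic: str) -> str:
    """Fetch the lead summary of a topic's Wikipedia entry."""
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + topic.replace(" ", "_"))
    resp = requests.get(url, headers={"User-Agent": "lateral-reading-demo"},
                        timeout=10)
    resp.raise_for_status()
    # The REST summary endpoint returns the article's lead as "extract".
    return resp.json().get("extract", "")

# One outside lookup surfaces context a site's own About page omits.
print(wikipedia_summary("Employment Policies Institute"))
```

The point isn’t the specific API; it’s that a single lookup somewhere other than the site itself surfaces the context the site is designed to hide.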

Common sense seems to dictate that we should examine a website carefully in order to judge its credibility. That’s the advice many colleges recommend: consider a site’s domain, examine its About page, search for the telltale signs that something’s off (like flashing banner ads or clickbait), check to see that the site provides contact information, and verify that its links are in working order. This approach, however, does more than mislead. Spending precious minutes scouring a site before first determining whether it’s worth the effort is a colossal waste of time.

You can read the full report, with recommendations, here.


Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura_owen@harvard.edu) or Twitter DM (@laurahazardowen).