The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This regular roundup offers the highlights of what you might have missed.
“Men Are Trash” quadrant. Facebook is changing the way its algorithms handle hate speech. It will spend more time policing the most vile content aimed at underrepresented groups, and less time on comments “against whites, men, and Americans,” The Washington Post’s Elizabeth Dwoskin, Nitasha Tiku, and Heather Kelly reported Thursday. It’s an acknowledgment that “race-blind” content moderation still ends up favoring dominant groups: “Because describing experiences of discrimination can involve critiquing white people, Facebook’s algorithms often automatically removed that content, demonstrating the ways in which even advanced artificial intelligence can be overzealous in tackling nuanced topics.”
The overhaul, which is known as the WoW Project and is in its early stages, involves re-engineering Facebook’s automated moderation systems to get better at detecting and automatically deleting hateful language that is considered “the worst of the worst,” according to internal documents describing the project obtained by The Washington Post. The “worst of the worst” includes slurs directed at Blacks, Muslims, people of more than one race, the LGBTQ community and Jews, according to the documents.
As one way to assess severity, Facebook assigned different types of attacks numerical scores weighted based on their perceived harm. For example, the company’s systems would now place a higher priority on automatically removing statements such as “Gay people are disgusting” than “Men are pigs.”
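The Post’s description of numerical severity scores that determine what gets removed first maps onto a familiar pattern: a weighted priority queue. Here is a minimal Python sketch of that general pattern; the categories, weights, and names below are invented for illustration and are not Facebook’s actual system, whose scoring details are not public.

```python
# Hypothetical illustration of severity-weighted moderation triage.
# The categories and weights are invented for this sketch; the WoW
# Project's actual scoring is not public.
from dataclasses import dataclass, field
import heapq

# Invented severity weights: attacks deemed "the worst of the worst"
# score higher, so they are removed first.
SEVERITY_WEIGHTS = {
    "dehumanizing_slur": 10,
    "statement_of_disgust": 7,  # e.g., "Gay people are disgusting"
    "generic_insult": 2,        # e.g., "Men are pigs"
}

@dataclass(order=True)
class FlaggedPost:
    priority: int
    text: str = field(compare=False)

def triage(posts):
    """Yield flagged posts in order of severity, highest first."""
    heap = []
    for text, category in posts:
        score = SEVERITY_WEIGHTS.get(category, 1)
        # heapq is a min-heap, so negate the score for max-first order.
        heapq.heappush(heap, FlaggedPost(-score, text))
    while heap:
        yield heapq.heappop(heap).text

if __name__ == "__main__":
    flagged = [
        ("Men are pigs", "generic_insult"),
        ("Gay people are disgusting", "statement_of_disgust"),
    ]
    for post in triage(flagged):
        print(post)  # the higher-severity statement prints first
```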
disappointed this story was too late for “Men are Trash Quadrant” to have a shot at phrase of the year. speaking personally, it has dislodged everything else I’ve ever heard or learned.
— Craig Silverman (@CraigSilverman) December 3, 2020
Curious what this’ll look like in practice.
And of course now that it’s public, will FB bow to inevitable pressure from bad faith actors who will claim reverse discrimination. https://t.co/9TtrZsZsJc
— Ian Sherr (@iansherr) December 3, 2020
A little more Facebook news this week: The company said Thursday that it will remove — not downrank or flag but completely take down — false information about the Covid-19 vaccine.
New: Facebook says it will start removing false claims about Covid-19 vaccines that have been debunked by public health experts on Facebook and Instagram. pic.twitter.com/IaCpCvEmBL
— Brandy Zadrozny (@BrandyZadrozny) December 3, 2020
And the Facebook Oversight Board formed back in May (here’s a good Verge story about it from October) announced the first six cases it will take on.
The Facebook @OversightBoard has announced its first six "cases". Five of the six have something in common- they were referred by users, and they all deal with images posted with an intent other than what the image itself contains, per the users. https://t.co/AQ9sSp4wwn
— Justin Hendrix (@justinhendrix) December 1, 2020
the one case referred by Facebook (rather than users) to the Oversight Board for adjudication in its first batch of cases has to do with COVID-19 misinformation https://t.co/mcp9O6zO3J pic.twitter.com/dBZDpGPuBE
— Alexios (@Mantzarlis) December 1, 2020
I wrote about the first Facebook Oversight Board cases, nipples, and the limits of a one-size-fits-all set of community standards for the world pic.twitter.com/n0xrVR5lsp
— Casey Newton (@CaseyNewton) December 2, 2020
“Alarmingly, students’ approach was consistent with guidelines.” Commonly accepted media literacy techniques “make students susceptible to scammers, rogues, bad actors, and hate mongers,” Sam Wineburg, Joel Breakstone, Nadav Ziv, and Mark Smith of the Stanford History Education Group write in a recent working paper. (I’ve written about Wineburg’s research in the past.) Here’s how the study worked:
We surveyed 263 college sophomores, juniors, and seniors at a large state university on the East Coast. On one task, students evaluated the trustworthiness of a “news story” that came from a satirical website. On a second task, students evaluated the website of a group claiming to sponsor “nonpartisan research.” In fact the site was created by a Washington, D.C., public relations firm run by a former corporate lobbyist. For both tasks, students had a live internet connection and were instructed to “use any online resources” to make their evaluations.
The students did not do a good job: Over two-thirds never recognized that the first website, “The Seattle Tribune,” was satirical, even though the site says on its About page that it is “a news and entertainment satire web publication.” 95% never figured out that the second site, MinimumWage.com, was created by a PR firm that lobbies to keep the minimum wage low. (If you Google “Employment Policies Institute,” the site’s parent organization, you’ll see what it is on the first page of results.)
One big problem, the authors write, is that students rely on the techniques “recommended by college and university websites” to ascertain sources’ validity.
The most ubiquitous tool for teaching web credibility at the college level is known as the CRAAP test, a set of guidelines corresponding to the categories of Currency, Relevance, Authority, Accuracy, and Purpose (hence, CRAAP). A Google search brings up more than 100,000 results for the CRAAP test, which can be found on the websites of elite research universities, regional colleges, and scores of institutions in between. The CRAAP test prompts students to ask questions (sometimes as many as 30) to assess a site’s credibility. While the kinds and number of questions vary, most versions of the CRAAP test direct students’ attention to a site’s top-level domain, the information on its About page, the authority of its links, the presence or absence of banner ads, the listing of contact information, and the currency and frequency of updates. The basic assumptions of the CRAAP test are rooted in an analog age: Websites are like print texts. The best way to evaluate them is to read them carefully. But websites are not variations of print documents. The internet operates by wholly different rules.
Students focused, for instance, on .com vs. .org domains, believing that .com domains were automatically more suspicious than .org domains. (As the authors note, “Practically every bona fide news source, from The New York Times to The Wall Street Journal to The Washington Post, is registered as a dot-com…If dot-coms ignited students’ suspicion, what boosted their confidence? A website that ended in dot-org.” Anybody can get a dot-org domain.) They were also likely to believe organizations’ About pages, and “failed to realize that for many groups the About page could just as easily be called the spin page.” They were overly trusting of links out to reputable news sources, though “when a site like minimumwage.com links to The New York Times or the Columbia Journalism Review, the hope is that the reputation of the link will carry the day — just as it did for these college students.” And they judged sites by their looks: because minimumwage.com looks reputable, students believed that it was.
Overall, the researchers found a lot of misplaced effort: students pored over the checklists provided by higher-learning institutions when simply clicking away from the site for a minute to read its Wikipedia entry would have told them it wasn’t reputable.
Common sense seems to dictate that we should examine a website carefully in order to judge its credibility. That’s the advice many colleges recommend: consider a site’s domain, examine its About page, search for the telltale signs that something’s off (like flashing banner ads or clickbait), check to see that the site provides contact information, and verify that its links are in working order. This approach, however, does more than mislead. Spending precious minutes scouring a site before first determining whether it’s worth the effort is a colossal waste of time.
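What the researchers recommend instead is lateral reading: leave the site and check what the rest of the web, starting with Wikipedia, says about it. Purely as an illustration of that move (this is not from the study), here is a small Python sketch that does the programmatic equivalent, pulling a source’s Wikipedia summary via Wikipedia’s public REST endpoint; the example topic comes from the MinimumWage.com case above.

```python
# Rough sketch of "lateral reading" in code: instead of scrutinizing a
# site itself, look up its parent organization on Wikipedia. Uses
# Wikipedia's public REST summary endpoint.
import requests

WIKI_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/summary/{}"

def lateral_read(topic: str) -> str:
    """Fetch the lead summary of a topic's Wikipedia entry."""
    url = WIKI_SUMMARY.format(topic.replace(" ", "_"))
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "No summary found.")

if __name__ == "__main__":
    # The study's second task involved a site run by this group.
    print(lateral_read("Employment Policies Institute"))
```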
You can read the full report, with recommendations, here.