LINK: hapgood.us   |   Posted by: Joshua Benton   |   April 16, 2019, 11:38 a.m.

Yesterday, I wrote about YouTube’s algorithmic screwup, which somehow associated images of Notre-Dame Cathedral burning with the 9/11 attacks and embedded information about those attacks under news organizations’ live streams from Paris. And in that piece, I noted a number of other times YouTube’s algorithm — which is meant to put reliable information under conspiracy videos on topics like the moon landing, the Sandy Hook shootings, and yes, 9/11 — had screwed up.

(At various points, YouTube has put 9/11 information under videos of 1970s New York at Christmastime, a rocket launch, a “Chill Music” streaming radio station, and a random San Francisco fire. It’s also done things like label a professor’s retirement video with a Star of David and the label “Jew.”)

Anyway, amid these specific complaints, I thought it was worth highlighting a criticism of YouTube’s system made last year by Mike Caulfield, who runs the Digital Polarization Initiative. Caulfield worried that embedding external information from Encyclopedia Britannica or other trusted sources might end up adding credibility to conspiracy videos rather than reducing it.

I was putting together materials for my online media literacy class and I was about to pull this video, which has half a million views and proposes that AIDS is the “greatest lie of the 21st century.” According to the video, HIV doesn’t cause AIDS, retrovirals do (I think that was the point, I honestly began to tune out).

But then I noticed one of the touches that Google has added recently: a link to a respectable article on the subject of AIDS. This is a technique that has some merit: don’t censor, but show clear links to more authoritative sources that provide better information.

At least that’s what I thought before I saw it in practice. Now I’m not sure. Take a look at what this looks like:

[Screenshot: the AIDS-denial video with YouTube’s Encyclopedia Britannica information panel displayed beneath it.]

I’m trying to imagine my students parsing this page, and I can’t help but think without a flag to indicate this video is dangerously wrong that students will see the encyclopedic annotation and assume (without reading it of course) that it makes this video more trustworthy. It’s clean looking, it’s got a link to Encyclopedia Britannica, and what my own work with students and what Sam Wineburg’s research has shown is that these features may contribute to a “page gestalt” that causes the students to read this as more authoritative, not less — even if the text at the link directly contradicts the video. It’s quite possible that the easiness on the eyes and the presence of an authoritative link calms the mind, and opens it to the stream of bullshit coming from this guy’s mouth.

Sam Wineburg is the Stanford professor who’s done a number of research projects aimed at unpacking how, exactly, people determine the credibility of a given piece of media. (Research subjects — even Ph.D. historians! — “often fell victim to easily manipulated features of websites, such as official-looking logos and domain names. They read vertically, staying within a website to evaluate its reliability. In contrast, fact checkers read laterally, leaving a site after a quick scan and opening up new browser tabs in order to judge the credibility of the original site. Compared to the other groups, fact checkers arrived at more warranted conclusions in a fraction of the time.”)

Caulfield also noted a Holocaust-denial video to which YouTube had added a Holocaust infobox. “What a person probably needs to know here is not this summary of what the Holocaust was,” he wrote. “The context card here functions, on a brief scan, like a label, and the relevant context of this video is not really the Holocaust, but Holocaust denialism, who promotes it, and why.”

(A commenter on his post notes that some viewers might also come away from these tagged videos thinking that Encyclopedia Britannica had produced the video as well as the infobox information.)

I should note that, searching on Twitter yesterday, I found a lot of conspiracy theorists — flat-earthers, chemtrail believers, QAnon fans, Holocaust deniers, Illuminati obsessives — who were very upset about YouTube’s addenda to their videos, viewing it as some mix of censorship and schoolmarmishness. So they, at least, don’t seem to think legitimacy is being added.

Finally, predictably — and just as there would be without YouTube’s 9/11 error — there are bad actors across social media today doing their best to blame Muslims for the fire (there’s zero evidence for that).

NBC News’ Ben Collins tweeted about one such fake video at 9:33 p.m. last night. Right now, about 14 hours later, that video is still up on YouTube — and it’s up to 38,000 views. (Don’t ask what the comments are like.)
