Sept. 25, 2013, 2:35 p.m. | LINK: civic.mit.edu | Posted by: Caroline O'Donovan

MIT’s Center for Civic Media has published a new research project inspired by a desire to find out how similar or different video-watching behavior is in countries around the globe. Using data from YouTube, Ed Platt, Rahul Bhargava, and Ethan Zuckerman built What We Watch, a tool for viewing popular videos by country and across countries.
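
The post doesn't detail how the site gathers its data, but the general idea is easy to sketch. The snippet below is a hypothetical illustration, not What We Watch's actual pipeline: it pulls one country's most-popular chart from the YouTube Data API, and the API key placeholder and helper name are assumptions.

# Illustrative sketch only: fetch a country's "most popular" chart from the
# YouTube Data API v3. What We Watch's real data collection may differ.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = "https://www.googleapis.com/youtube/v3/videos"

def trending_videos(region_code, max_results=10):
    """Return (video_id, title) pairs for one country's most-popular chart."""
    params = {
        "part": "snippet",
        "chart": "mostPopular",
        "regionCode": region_code,   # ISO 3166-1 alpha-2, e.g. "BR", "NG"
        "maxResults": max_results,
        "key": API_KEY,
    }
    resp = requests.get(URL, params=params)
    resp.raise_for_status()
    return [(item["id"], item["snippet"]["title"]) for item in resp.json()["items"]]

if __name__ == "__main__":
    for vid, title in trending_videos("BR"):
        print(vid, title)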

The music video for “Roar” by Katy Perry offers evidence that some videos find truly global audiences: it has trended from Peru to the Philippines and was one of the top videos in Turkey and Saudi Arabia. Other videos find regional, but not global, audiences. Take P-Square’s “Personally,” which was in the top 10 in Nigeria for 17% of the dates we tracked and is popular in Ghana, Uganda, Kenya, and Senegal, but nowhere outside of sub-Saharan Africa. And some videos never leave home: Brazil’s top trending video, a humorous ad for a phone company that requires no translation, doesn’t show up on the top charts for any other country.
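
As a toy illustration of that global-versus-regional split (not the project's methodology; the country codes and video labels below are made up), you can count how many countries' charts each video shows up in:

# Illustrative sketch: given per-country top-chart snapshots, count how many
# countries each video appears in. The data below is invented for the example.
from collections import Counter

# {country_code: set of video IDs seen on that country's chart}
charts = {
    "PE": {"roar", "local_ad"},
    "PH": {"roar"},
    "NG": {"personally", "roar"},
    "GH": {"personally"},
    "BR": {"phone_ad"},
}

reach = Counter(vid for videos in charts.values() for vid in videos)

for vid, n_countries in reach.most_common():
    print(f"{vid}: charted in {n_countries} of {len(charts)} countries")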

The next step, writes Zuckerman, is to track how the videos spread to get a better understanding of international digital media networks. The code’s on GitHub.
