Nieman Foundation at Harvard
March 31, 2021, 9 a.m.
Audience & Social

We need to know more about political ads. But can transparency be a trap?

Social platforms know transparency matters when it comes to political advertising, but they’re also able to control the terms of that transparency.

As misinformation researchers, we spend a lot of time thinking about online advertising. We dig through ad libraries, monitor platforms’ announcements, and publish investigations into how disinformation agents are bending the rules.

We rely on social media platforms to give us information to do this. But the experience of working within platforms’ parameters has left us with a question: Can transparency be a trap?

In 2017, Facebook announced it was building a searchable archive of U.S. federal election–related ads that would include some spending and targeting data. Various iterations culminated in the Ad Library, which set the standard for ad transparency. Later, Google also began sharing some information about political ads with researchers. Snapchat did the same, and Twitter eventually opted to get rid of political advertising altogether.

By setting policy on it, social platforms have demonstrated they know transparency matters when it comes to political advertising. But they’re also able to control the terms of that transparency. Here are eight big questions that arose when we began scrutinizing the current landscape for advertising transparency.

1. What is obscured by the platforms’ definitions?

What counts as “political” and how is that decided? Election and media law in the U.S. generally defines political ads as those purchased by or on behalf of a candidate for public office, or those relating to a matter of national importance; most major social media platforms use a similar definition.

Facebook calls these “social issue” ads, defined as ads about anything “heavily debated, [that] may influence the outcome of an election or result in/relate to existing or proposed legislation.” But who determines what is “heavily debated,” or which messaging has the power to influence an election? Advertisements promoting ultrasound services may appear apolitical to most, but if they’re paid for by an anti-abortion organization, they may warrant further scrutiny. On Twitter, political issue ads are banned in the United States, including those from climate advocacy groups; oil companies such as ExxonMobil, on the other hand, have been allowed to run ads on the platform. Given the room for interpretation as to what is and isn’t “political,” is the distinction really useful? Should issue-related ads, such as ads about climate change, count as “political”? And who makes that determination?

As part of a stated effort to protect the U.S. election’s integrity, Facebook did not allow new political ads to run on its platform from October 27, 2020 to March 4, 2021 (with a brief exception made for political ads targeting Georgia’s Senate runoff election in January). But ads about vaccines, ads about election fraud, and ads from politically motivated groups including Prager U, the self-described “leading conservative nonprofit,” all ran during this time. Because of the norms established by the platforms, ads deemed non-political are not held to the same transparency standards, so they remain visible to the public, with less scrutiny from researchers. When platforms aren’t thoughtful with their definitions, powerful issue lobbies are able to exploit loopholes to promote their message.

2. Who gets to access and interpret the transparency data?

There are barriers to entry for every mechanism of transparency the platforms have provided. A researcher looking to explore Snapchat’s political ads archive must be able to download and interpret a .csv file. Facebook provides more data to researchers with the advanced skills to access its API. There is also no standardization across the platforms’ databases, making meaningful cross-platform comparisons difficult. So while platforms are increasingly giving researchers access to data, should only trained researchers be able to scrutinize how social media is used to target communities? How could we open this up to all interested people?
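Even the “simple” CSV route assumes a researcher can write or run a small script. A minimal sketch of that barrier, assuming hypothetical column names (real archive exports vary by platform and version):

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample rows mimicking a platform's political-ads CSV export.
# Real column names and formats differ across platforms and archive releases.
SAMPLE_CSV = """PayingAdvertiserName,Spend,Impressions
Advocacy Group A,1200,450000
Advocacy Group A,800,210000
Advocacy Group B,300,90000
"""

def spend_by_advertiser(csv_text):
    """Aggregate reported ad spend per paying advertiser."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["PayingAdvertiserName"]] += int(row["Spend"])
    return dict(totals)

print(spend_by_advertiser(SAMPLE_CSV))
# → {'Advocacy Group A': 2000, 'Advocacy Group B': 300}
```

Trivial for a programmer, but a real hurdle for a journalist, activist, or voter without coding skills, which is exactly the access gap in question.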

The platforms also fully control what data they make public, and how, and it’s not always particularly useful. For example, Facebook provides impression data for political ads, but it is given in ranges. So an ad could be listed as having garnered <1000 impressions, but there’s no way to know if this means 998 impressions or none. Many advocacy organizations have called for more granular data, which platforms could conceivably provide in a standardized format that allows comparison, or in a user-friendly public interface.
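The information loss from ranged reporting can be made concrete. In this sketch the bucket labels are invented for illustration; the point is that bucketed counts only bound an aggregate, never determine it:

```python
# Impression counts reported only as ranges (hypothetical bucket labels)
# can only bound a total, never pin it down.
RANGES = {"<1000": (0, 999), "1K-5K": (1000, 4999), "5K-10K": (5000, 9999)}

def total_bounds(ad_buckets):
    """Return the (min, max) total impressions consistent with the buckets."""
    lo = sum(RANGES[b][0] for b in ad_buckets)
    hi = sum(RANGES[b][1] for b in ad_buckets)
    return lo, hi

# Three ads, each reported as "<1000": the true total could be
# anywhere from 0 to 2,997 impressions.
print(total_bounds(["<1000", "<1000", "<1000"]))  # → (0, 2997)
```

A campaign’s reach could thus be effectively invisible or substantial, and the published data cannot distinguish the two cases.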

3. Can we be confident that pro-transparency measures are effective?

It is crucial to verify whether nominal pro-transparency measures are having a positive effect. For example, many platforms provide some kind of label that indicates who paid for a political ad. This is an effort to increase transparency, but do the labels being used accomplish that? Facebook has been criticized for lax advertiser verification requirements that allow advertisers to hide their identities behind shell pages. In one example, Students for Life, an anti-abortion advocacy group, ran ads through a page innocuously called “standingwithyou.org.”

4. Will these measures be enforced?

Are the tools built by the platforms suitable to deliver on their stated transparency goals? Researchers at the Online Political Transparency Project were surprised to find that ads containing Joe Biden’s name and image were not being flagged as “political” by Facebook’s AI. They were only able to discover this by building their own browser extension, Ad Observer. How can we know that the tools offered by platforms are working as they are meant to? Platforms could provide more transparency around the methodology used to create these tools, so researchers could audit them for potential issues or errors.

5. Will they be evenly enforced?

A January 2021 study from Privacy International suggested that heightened transparency standards are unevenly applied around the world; its authors dubbed this the “transparency divide.” The 2020 U.S. presidential election saw unprecedented measures from the platforms that far exceeded their efforts elsewhere. Facebook, for example, publicized what it described as its largest effort to date to protect an election’s integrity. At the same time, a critical election for the state legislature in India’s Bihar state, home to around 104 million people, garnered no blog posts or announcements from Facebook about protecting its integrity. Facebook and Twitter also treated the rampant misinformation during these two elections differently, labeling more misleading posts in the U.S. than in India. Transparency measures must meet equal standards globally and be subject to the same levels of enforcement.

6. Is the data reliable?

Researchers have consistently reported errors in the data provided as part of transparency efforts. For example, during the 2019 election in the U.K., thousands of ads went missing from the Facebook ad archive because of an error. Similar complaints were made about Google’s ad archive in the U.S. in 2019. What mechanisms are in place to ensure the data we’re getting is reliable?

There is good reason to be skeptical. In 2019, Facebook agreed to pay $40 million to settle a lawsuit alleging that it had concealed inaccuracies in its video view metrics that led to a massive and misguided industry shift. Media outlets laid off print staffers in favor of investing in video content based on incorrect information. Why should we take Facebook’s data at face value now? Without independent oversight, there is no reason researchers should consider the data from platforms to be reliable.

7. How does transparency direct our attention?

A new tool for transparency auditing is an exciting thing for researchers, and so it is only right that it should become the subject of academic and journalistic research. But what is being missed when we focus on a particular type of information because of the transparency measures behind it?

Take, for example, how the increased access to information around ads marked by social media platforms as “political” has meant that less attention is paid to non-political or commercial advertising. Facebook has given researchers unprecedented access to advertising data around the 2020 U.S. election, possibly the most scrutinized campaign to date. What about elections where that level of oversight was not in place? This concept is neatly captured as a “feature bias” by our colleague Tommy Shane. The features to which we already have access influence our perspective and, therefore, what we study.

8. What’s transparency for?

Kate Dommett, a lecturer at the University of Sheffield who studies digital campaigning, wrote in Policy & Internet about calls for more transparency in U.K. digital campaigning. She found that “despite using common terminology, calls for transparency focus on the disclosure of very different types of information.” Some organizations were calling for financial transparency, others for transparency around targeting data, and only some considered the specifics of how this information would be presented.

Dommett’s research illustrates the pitfalls of demanding transparency for its own sake. When researchers and advocates aren’t specific enough about the outcomes desired, platforms are able to provide an incomplete form of “transparency” as a fig leaf that blunts the political will for positive change. Take, for example, calls for transparency in political spending. If the desired outcome is to monitor the spread of particular messages, and social media companies only offer ad spending data, and not information about impressions and engagement, there are gaps we must seek to fill. Transparency is a tool, not an end in itself; we must reflect carefully on what we want to achieve when we call for it. If we don’t, we’ll keep falling into the trap of false transparency.

Madelyn Webb is an investigative researcher at First Draft. Bethan John is a social media journalist at First Draft. This story originally ran on First Draft’s Footnotes, “a space for new ideas, preliminary research findings, and innovative methodologies.”

Photo by Michael W. May used under a Creative Commons license.
