Oct. 31, 2023, 9:30 a.m.

Social media algorithms can be redesigned to bridge divides — here’s how

“It falls to both the tech companies that built these systems and an engaged public to create technologies designed for social cohesion.”

Social media platforms have been implicated in conflicts of all scales, from urban gun violence to the storming of the U.S. Capitol building on January 6 and civil war in South Sudan. Scientifically, it is difficult to tell how much social media can be blamed for one-off incidents.

But in much the way that climate change increases the risk of extreme weather, evidence suggests that current algorithms (which mostly optimize for engagement) raise the political “temperature” by disproportionately surfacing inflammatory content. This may make people angrier, increasing the risk that social differences escalate to violence.

But what if we redesigned social media to bridge divides? “Bridging-based ranking” is an alternative kind of algorithm for ranking content in social media feeds that explicitly aims to build mutual understanding and trust across differing perspectives.

The core logic of bridging-based ranking has already been used on Facebook and X (formerly known as Twitter), albeit not in the main feed. It is also used in Polis, an online platform for collecting public input that several governments have used to inform policymaking on polarized topics.

There are many open questions, but evidence from existing uses of bridging-based ranking suggests that changes to algorithms may reduce partisan animosity and improve the quality and inclusiveness of online interactions.

People are increasingly looking for alternative algorithms. Regulators in the EU and newer platforms such as Bluesky are giving users a choice of which algorithm determines what they see, and recent large-scale experiments on Facebook have tested different options.

If we care about social cohesion, then during this period of “shopping around” we need to seriously consider alternatives such as bridging.

How it works

Current engagement-based algorithms predict which posts are most likely to generate clicks, likes, shares, or views, and use these predictions to rank the most engaging content at the top of your feed. This tends to amplify the most polarizing voices, because divisive content reliably drives engagement.
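To make the contrast concrete, here is a minimal sketch of ranking by predicted engagement. All field names and weights are illustrative assumptions, not any platform’s actual model; real systems estimate these signals with large machine-learning models.

```python
# A minimal, illustrative sketch of engagement-based ranking.
# The fields and weights are hypothetical, not any platform's values.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_click: float  # predicted probability the viewer clicks
    p_like: float   # predicted probability the viewer likes
    p_share: float  # predicted probability the viewer shares

def engagement_score(post: Post) -> float:
    # Weighted sum of predicted engagement signals.
    # The weights (1, 2, 4) are purely illustrative.
    return 1.0 * post.p_click + 2.0 * post.p_like + 4.0 * post.p_share

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most engaging content rises to the top of the feed,
    # including divisive content, if division is what people react to.
    return sorted(posts, key=engagement_score, reverse=True)
```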

Bridging-based ranking uses a different set of signals to determine which content gets ranked highly. One approach is to increase the rank of content that receives positive feedback from people who normally disagree. This creates an incentive for content producers to be mindful of how their content will land with “the other side.”
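One simple way to operationalize this idea, sketched below under the hypothetical assumption that users have already been grouped into two perspective clusters (say, by clustering their past reactions), is to score a post by its lowest approval rate across the clusters:

```python
# An illustrative sketch of "diverse positive feedback" scoring.
# Assumes users are pre-assigned to two perspective clusters;
# real systems are subtler than this toy version.

def bridging_score(likes_a: int, views_a: int,
                   likes_b: int, views_b: int) -> float:
    """Score a post by the *lower* of its approval rates in two
    opposing clusters, so it must land well with both sides."""
    rate_a = likes_a / views_a if views_a else 0.0
    rate_b = likes_b / views_b if views_b else 0.0
    return min(rate_a, rate_b)

# A post liked by 40% of cluster A but only 5% of cluster B scores
# 0.05, below a post liked by 20% of both clusters (0.20).
assert bridging_score(40, 100, 5, 100) < bridging_score(20, 100, 20, 100)
```

Taking the minimum means a post cheered by one side and ignored or disliked by the other scores poorly, while content that lands well with both sides rises.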

[Figure: Two stylized social media feeds ranked differently depending on how reactions (likes, angry reactions, etc.) are weighted. Engagement-based algorithms (left) elevate posts that prompt divisive reactions; bridging (right) elevates posts that diverse groups agree on.]

Among the internal Facebook documents leaked by whistleblower Frances Haugen in 2021, there is evidence that Facebook tested this approach for ranking comments.

Comments with positive engagement from diverse audiences were found to be of higher quality, and “much less likely” to be reported for bullying, hate, or inciting violence. A similar strategy is used in Community Notes, a crowd-sourced fact-checking feature on X, to identify notes that are helpful to people across the political spectrum.

This pattern of “diverse positive feedback” is the most widely implemented approach to bridging. Others include down-ranking content that promotes partisan violence and using surveys to tune algorithms so that they up-rank content according to how it makes users feel in the long term rather than the short term.
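For a concrete instance of diverse positive feedback at scale, X has open-sourced the Community Notes ranking algorithm, which fits a matrix-factorization model to helpfulness ratings. The toy sketch below captures only the core idea; the real system’s priors, regularization schedule, and scoring thresholds are omitted, and all parameter values here are illustrative.

```python
# A toy sketch of the matrix-factorization idea behind Community Notes.
# The production algorithm (open-sourced by X) adds priors,
# regularization details, and thresholds this illustration omits.

import numpy as np

def fit_note_intercepts(ratings, n_users, n_notes,
                        lr=0.05, reg=0.1, epochs=500):
    """ratings: list of (user_idx, note_idx, value), value in {0.0, 1.0}.
    Fits: rating ~ mu + user_intercept + note_intercept + fu * fn.
    The 1-D factors (fu, fn) absorb viewpoint alignment, so a note's
    intercept reflects helpfulness *across* perspectives."""
    rng = np.random.default_rng(0)
    mu = 0.0
    user_int = np.zeros(n_users)
    note_int = np.zeros(n_notes)
    user_fac = rng.normal(0, 0.1, n_users)
    note_fac = rng.normal(0, 0.1, n_notes)
    for _ in range(epochs):
        for u, n, r in ratings:  # stochastic gradient descent
            pred = mu + user_int[u] + note_int[n] + user_fac[u] * note_fac[n]
            err = r - pred
            mu += lr * err
            user_int[u] += lr * (err - reg * user_int[u])
            note_int[n] += lr * (err - reg * note_int[n])
            fu, fn = user_fac[u], note_fac[n]
            user_fac[u] += lr * (err * fn - reg * fu)
            note_fac[n] += lr * (err * fu - reg * fn)
    return note_int  # higher intercept = helpful across the divide
```

Because agreement explained by shared viewpoint is soaked up by the factor term, a note can only earn a high intercept by being rated helpful by raters who otherwise disagree.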

Conflict is an important part of society, and in many cases, a key driver of political and social change. The goal of bridging is not to eliminate conflict or disagreement, but to promote constructive forms of conflict.

This is known as conflict transformation. Professional mediators, facilitators, and “peacebuilders” who work with opposing groups have a detailed understanding of how conflicts escalate. They also know how to structure communication between opposing groups in ways that build mutual understanding and trust.

Research on bridging-based ranking can draw on this, taking insights from conflict management in the physical world and translating them into digital systems.

For example, facilitating contact between people from rival groups in “opt-in,” non-threatening settings can reduce prejudice, and we can design social platforms to create these conditions online.

Why should big tech adopt this?

Firms such as Meta have built their fortunes on the “attention economy,” prioritizing content that promotes short-term engagement, and hence revenue.

We simply don’t yet know the extent to which the goals of bridging and engagement are in tension. People who work at social media platforms will tell you that when well-intended changes to the algorithm are tested, user engagement sometimes drops at first, then slowly rebounds and ultimately ends up higher than before.

The problem is that platforms normally get cold feet and cancel experiments before such long-term benefits can be observed. The evidence we do have, from the leaked Facebook documents, suggests that incorporating bridging improves the user experience.

Bridging-based ranking might also have benefits beyond engagement. By reducing toxicity and content that violates community guidelines, it would likely reduce the need for costly content moderation.

Demonstrating a willingness to make their algorithms less divisive would also build goodwill among regulators, reducing the risk of reputational and legal damage. For example, Facebook has been heavily criticized for allegedly facilitating incitements to violence in Myanmar, Sri Lanka, and Ethiopia.

It has subsequently faced lawsuits from victims and communities seeking up to £150 billion in damages.

Questions and challenges

Important questions about bridging-based ranking remain, and we set out many of them in a recent paper published with the Knight First Amendment Institute, which publishes original scholarship and policy papers relating to the defense of freedoms of speech and the press in the digital age.

Which divides should be bridged? Are there unintended consequences — for example, amplifying mainstream views at the expense of minority viewpoints? How can decisions about the design of mass communication technologies be made democratically?

Bridging is not a panacea. There is only so much that algorithmic changes can do to address societal conflict, which results from complex factors such as inequality. But because digital platforms are reshaping society, we have an obligation to guide that process in an ethical, humanistic direction that brings out the best in us.

It falls to both the tech companies that built these systems and an engaged public to create technologies designed for social cohesion. With care, wisdom and democratic oversight, we can foster online communities that reflect our better sides. But we have to make that choice.

Luke Thorburn is a PhD candidate in Safe and Trusted AI at King’s College London. Aviv Ovadya is an affiliate at the Berkman Klein Center for Internet & Society at Harvard University. This article is republished from The Conversation under a Creative Commons license.
