Peace-builder and Ashoka Fellow Helena Puig Larrauri co-founded Build Up to transform conflict in the digital age, in places from the U.S. to Iraq. With the exponential growth of viral polarizing content on social media, a key systemic question emerged for her: What if we made platforms pay for the harms they produce? What if we imagined a tax on polarization, akin to a carbon tax? A conversation about the root causes of online polarization, and why platforms should be held liable for the negative externalities they cause.
Ashoka Fellow Helena Puig Larrauri co-founded Build Up to transform conflict in the digital age.
Konstanze Frischen: Helena, does technology help or harm democracy?
Helena Puig Larrauri: It depends. There is great potential for digital technologies to include more people in peace processes and democratic processes. We work on conflict transformation in many regions across the globe, and technology can really help include more people. In Yemen, for example, it can be very difficult to bring women's viewpoints into the peace process. So we worked with the UN to use WhatsApp, a very simple technology, to reach out to women and have their voices heard, avoiding security and logistical challenges. That's one example of the potential. On the flip side, digital technologies bring about immense challenges, from surveillance to manipulation. And here, our work is to understand how digital technologies are impacting conflict escalation, and what can be done to mitigate that.
Frischen: You have staff working in countries like Yemen, Kenya, Germany and the US. How does it show up when digital media escalates conflict?
Puig Larrauri: Here is an example: We worked with partners in northeast Iraq, analyzing how conversations unfold on Facebook, and it quickly became clear that what people said and how they positioned themselves had to do with how they spoke about their sectarian identity, whether they said they were Arab or Kurdish. But what was happening at a deeper level is that users began to associate a person's opinion with their identity, which means that ultimately, what matters isn't so much what is being said, but who is saying it: your own people, or other people. And it meant that the conversations on Facebook were extremely polarized. And not in a healthy way, but by identity. We all need to be able to disagree on issues in a democratic process, in a peace process. But when identities or groups start opposing one another, that is what we call affective polarization. And what that means is that no matter what you actually say, I will disagree with you because of the group you belong to. Or, on the flip side, no matter what you say, I will agree with you because of the group you belong to. When a debate is in that state, you are in a situation where conflict is very likely to be destructive. And to escalate to violence.
Frischen: Are you saying social media makes your work harder because it drives affective polarization?
Puig Larrauri: Yes, it really feels like the odds are stacked against our work. Offline, there may be space, but online, it often feels like there is no way we can start a peaceful conversation. I remember a conversation with the leader of our work in Africa, Caleb. He said to me during the recent election cycle in Kenya: "When I walk the streets, I feel like this is going to be a peaceful election. But when I read social media, it's a war zone." I remember this because even for us, who are professionals in this space, it's unsettling.
Frischen: The standard way for platforms to react to hate speech is content moderation: detecting it, labeling it, and, depending on the jurisdiction, perhaps removing it. You say that's not enough. Why?
Puig Larrauri: Content moderation helps in very specific situations. It helps with hate speech, which is in some ways the tip of the iceberg. But affective polarization is often expressed in other ways, for instance through fear. Fear speech is not the same as hate speech. It can't be so easily identified. It probably won't violate the terms of service. Yet we know that fear speech can be used to incite violence. But it doesn't fall foul of the content moderation guidelines of platforms. That's just one example; the point is that content moderation will only ever catch a small part of the content that is amplifying divisions. Maria Ressa, the Nobel Prize winner and Filipino journalist, put this so well recently. She said something along the lines of: the problem with content moderation is that it's like fetching a cup of water from a polluted river, cleaning the water, and then pouring it back into the river. So I say we need to build a water filtration plant.
Frischen: Let's talk about that, the root cause. What does the underlying architecture of social media platforms have to do with the proliferation of polarization?
Puig Larrauri: There are actually two reasons why polarization thrives on social media. One is that it invites people to manipulate others and to deploy harassment en masse. Troll armies, Cambridge Analytica, we've all heard these stories; let's put that aside for a moment. The other aspect, which I think deserves a lot more attention, is the way social media algorithms are built: they try to serve you content that is engaging. And we know that affectively polarizing content, content that positions groups against one another, is very emotive and very engaging. As a result, the algorithms serve it up more. So what that means is that social media platforms provide incentives to produce content that is polarizing, because it will be more engaging, which incentivizes people to produce more content like that, which makes it more engaging, and so on. It's a vicious circle.
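To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python. It is not any platform's actual ranking code; the weights and the "divisiveness" score are illustrative assumptions only.

```python
# Hypothetical sketch of the feedback loop described above.
# Not any platform's real code; weights and fields are assumptions.

posts = [
    {"id": 1, "divisiveness": 0.9, "base_quality": 0.4},  # emotive, group-vs-group framing
    {"id": 2, "divisiveness": 0.1, "base_quality": 0.7},  # calm, substantive post
]

def predicted_engagement(post):
    # Assumption: divisive, emotive framing lifts clicks/comments/shares,
    # so a purely engagement-optimizing ranker rewards it.
    return 0.3 * post["base_quality"] + 0.7 * post["divisiveness"]

def rank_feed(posts):
    # Optimize for a single metric: predicted engagement.
    return sorted(posts, key=predicted_engagement, reverse=True)

# The vicious circle: divisive posts get ranked (and seen) more, creators
# learn what gets reach, and the next round of content skews further.
for round_ in range(3):
    top = rank_feed(posts)[0]
    print(f"round {round_}: top post {top['id']} (divisiveness={top['divisiveness']:.2f})")
    for post in posts:
        # Creators imitate whatever reached the top of the feed.
        post["divisiveness"] = min(1.0, post["divisiveness"] + 0.05 * top["divisiveness"])
```

The point of the sketch is only that nothing in such a loop needs to "want" polarization; optimizing a single engagement metric is enough to amplify it.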
Frischen: So the spread of divisive content is almost a side effect of this business model that makes money off engaging content.
Puig Larrauri: Yes, that is the way social media platforms are designed at the moment: to engage people with content, any kind of content. We don't care what that content is, unless it's hate speech or something else that violates a narrow policy, in which case we will take it down; but in general, what we want is more engagement on anything. And that's built into their business model. More engagement allows them to sell more ads, it allows them to collect more data. They want people to spend more time on the platform. So engagement is the key metric. It isn't the only metric, but it's the key metric that algorithms are optimizing for.
Frischen: What framework could force social media companies to change this model?
Puig Larrauri: Great question. To understand what I'm about to propose, let me say first that the key thing to grasp is that social media is changing the way we understand ourselves and other groups. It is creating divisions in society, and amplifying existing political divisions. That is the difference between focusing on hate speech and focusing on this concept of polarization. Hate speech and harassment are about the individual experience of being on social media, which is very important. But when we think about polarization, we're talking about the impact social media is having on society as a whole, regardless of whether I'm being personally harassed. I'm still being impacted by the fact that I'm living in a more polarized society. It's a societal negative externality. It's something that affects all of us, regardless of whether we are individually targeted.
Frischen: Negative externality is an economics term that, I'm simplifying, describes how in a production or consumption process a cost is generated, a negative impact, which is not captured by market mechanisms and which harms someone else.
Puig Larrauri: Yes, and the key here is that that cost isn't included in the production costs. Let's take air pollution. Traditionally, in industrial capitalism, people were producing things like cars and machines, and in the process they also produced environmental pollution. But at first, no one had to pay for the pollution. It was as if that cost didn't exist, even though it was a real cost to society; it just wasn't being priced by the market. Something very similar is happening with social media platforms right now. Their profit model isn't to create polarization; they simply have an incentive to surface content that is engaging, regardless of whether it's polarizing or not. But polarization happens as a by-product, and there is no incentive to clean it up, just as there was no incentive to clean up pollution. And that is why polarization is a negative externality of this platform business model.
Frischen: And what are you proposing we do about that?
Puig Larrauri: Make social media companies pay for it, by bringing the societal pollution they cause into the market mechanism. That's in effect what we did with environmental pollution: we said it should be taxed, that there should be carbon taxes or another mechanism like cap and trade that makes companies pay for the negative externality they create. And for that to happen, we had to measure things like CO2 output, or carbon footprints. So my question is: Could we do something similar with polarization? Could we say that social media platforms, or perhaps any platform driven by an algorithm, should be taxed for their polarization footprint?
Frischen: Taxing polarization is such a creative, novel way to think about forcing platforms to change their business model. I want to acknowledge there are others out there; in the U.S., there's a discussion about reforming Section 230, which currently shields social media platforms from liability, and….
Puig Larrauri: Yes, and there is also a very big debate, which I'm very supportive of and part of, about how to design social media platforms differently by making algorithms optimize for something other than engagement, something that might be less polluting and produce less polarization. That is an incredibly important debate. The question I have, however, is how do we incentivize companies to actually take that on? How do we incentivize them to say: Yes, I'll make those changes, I'm not going to use this simple engagement metric anymore, I'll take on these design changes in the underlying architecture? And I think the way to do that is to provide a real financial disincentive for not doing it, which is why I'm so excited by this idea of a tax.
Frischen: How would you ensure taxing content isn't seen as undermining protections of free speech? A big argument, especially in the U.S., where you can spread disinformation and hate speech under this umbrella.
Puig Larrauri: I don't think that a polarization footprint necessarily needs to look at speech. It can look at metrics that have to do with the design of the platform. It can look at, for example, the relationship between belonging to a group and only seeing certain kinds of content. So it doesn't have to get into issues of hate speech or free speech and the debate around censorship that comes with that. It can look simply at design decisions around engagement. As I said before, I really don't think that content moderation and censorship are what is going to work particularly well to address polarization on platforms. What we now have to do is set to work on measuring this polarization footprint, and find the right metrics that can be applied across platforms.
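As an illustration of what such a metric might look like, here is a small, hypothetical sketch. The interview does not specify a formula, so the exposure data and the "exposure skew" measure below are assumptions for illustration, not an established standard.

```python
# Illustrative only: one way "belonging to a group and only seeing certain
# kinds of content" could be turned into a number. Data and names are hypothetical.
from collections import defaultdict

# Hypothetical exposure log: (user_group, content_group) per impression.
impressions = [
    ("A", "A"), ("A", "A"), ("A", "B"),
    ("B", "B"), ("B", "B"), ("B", "B"), ("B", "A"),
]

def exposure_skew(impressions):
    """Average share of impressions in which users see in-group content.

    Roughly 0.5 means balanced exposure across two groups; 1.0 means fully
    segregated feeds.
    """
    seen = defaultdict(lambda: [0, 0])  # group -> [in-group impressions, total]
    for user_group, content_group in impressions:
        seen[user_group][0] += int(user_group == content_group)
        seen[user_group][1] += 1
    shares = [in_group / total for in_group, total in seen.values()]
    return sum(shares) / len(shares)

print(f"exposure skew: {exposure_skew(impressions):.2f}")  # 0.71 for this sample data
```

A design-level measure like this looks only at who is shown what, not at what anyone said, which is why it can sidestep the free-speech debate entirely.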
For more, follow Helena Puig Larrauri and Build Up.