People keep pissing in the pool.
There’s a deep hypocrisy at the heart of social media. Companies built the platforms. They outsourced making those platforms worthwhile to us, the users. But they didn’t provide us with the tools to moderate those platforms and they’ve walked away from the responsibility of doing it themselves.
Facebook, Twitter, Instagram, and the like built great swimming pools. They invited everyone over and told people to, somehow, make their own water and their own fun. And people did. They made their own water and built their own games and they had fun. Then people started pissing in the pool. They pissed at ever greater volume and viscosity.
And the people who made the pools didn’t do anything about it.1
No tweeter is an island
Here’s an under-appreciated reality of social platforms: every user is dependent on every other user. Twitter, Facebook, and the like may treat you as the end user — they ask you what you’re doing, your newsfeed is yours and yours alone, populated by a myriad of things from elsewhere for your consumption and, if you deign it so, a like or a comment. But, without everyone else, you’d have nothing.
My enjoyment of Twitter depends exclusively on the people I’ve chosen to follow. Their thoughts, their tastes define my experience. That extends to who the people I follow choose to follow, too — I see all their retweets and quote tweets.
Every social media platform, like society as a whole, is a cascade of interdependent layers. But this isn’t necessarily reflected in the design of these platforms (despite presenting you with a million faces, most isolate you into a narrow feed) or, more importantly, their rules and approaches to moderation, which focus on governing the behaviour of individuals and how individuals treat each other.
Social media platforms are an ecosystem. Each individual person matters. You don’t feel that when these companies talk about moderation, though. When someone abuses someone else, for example, there’s no sense of what that abuse means in the broader context of the ecosystem — it’s about one person acting on another. By focusing on that, you’re missing the full picture.
Here’s how Judith Butler describes the effects of violence in society in her book The force of nonviolence (which I’ll be quoting throughout this article):
It is not simply that an individual abrogates his or her conscience or deeply held principles in acting violently, but that certain “ties” required for social life, that is, the life of a social creature, are imperiled by violence.
A social media platform’s value depends on people. Every time one user attacks another, they’re not only attacking the victim. They’re attacking the point and vibrancy of the site itself.
Soylent Tweet is people
Here’s the problem social media platforms need to solve: they need to convince people that every person, at their core, has equal value. They need to promote, both implicitly and explicitly, equality.
Abuse on social has a few goals but it usually boils down to silencing someone. You’re trying to control the discussion and the person; you want them to stop talking or to leave permanently.
Every platform worth its salt already prohibits violence, either in the form of incitement or direct abuse. But those rules only protect people who are thought to have value to the platform.
Look at it this way: the ultimate end point of abuse is driving someone away from the site. You’ve made it untenable for them to stay or you’ve convinced them so thoroughly that they don’t belong. A rule saying “Don’t abuse people” will only stop you if you think the people being abused belong. You’ll only stop abusing someone if they’ll be missed when they leave.
Butler frames this as grievability: who will be grieved if they die. Those deemed to be grievable are protected from violence. Those who aren’t grievable? Not so much. They’re already as good as dead so no harm done if they die.
A life has to be grievable — that is, its loss has to be conceptualisable as a loss — for an interdiction against violence and destruction to include life among those living beings to be safeguarded from violence.
This helps us understand who social companies value. They tell you.
Donald Trump is the obvious example. He flouts Twitter’s rules on a semi-regular basis. He creates an environment of violence by targeting different social groups and legitimises the abuse of those groups.
And he hasn’t been banned. Not only has he not been banned, Twitter reinforced their rules around telling people to die because he was on the receiving end.
The distinction between populations that are worth violently defending and those that are not implies that some lives are simply considered more valuable than others.
Donald Trump is worth more than other people to Twitter. So he’s protected.
This approach — of some people being worth grieving and others not — trickles down throughout social. Think of all the people whose abuse isn’t worth removing or the abusers who haven’t done enough to warrant recrimination. It’s clear whose absence would be missed most.
This distinction was built into social from day one:
After all, if a life, from the start, is regarded as grievable, then every precaution will be taken to preserve and to safeguard that life against harm and destruction.
Social platforms like Facebook, Twitter, and Instagram are stupendously big. It would take a huge amount of effort and resourcing to moderate them effectively. But that’s only true because they were built with a laissez-faire rule set from the start. That replicates the status quo of wider society — a place where some people are more grievable than others.
Social platforms won’t be able to grapple with abuse and violence unless they rebuild from the ground up under the assumption that everyone is grievable. That’s the challenge.
Dehumanisation is a feature, not a bug
Now, to be fair to social companies, they’re swimming against the tide here.2 They’re operating in a society that has spent a lot of time and money dehumanising whole groups of people.
How many times have media outlets or leaders implied (or just straight up said) that immigrants are monsters on their way to destroy “our way of life”? Take any marginalised group anywhere and they’ve been called less-than by a power that wants to control, subvert, or destroy them.
It’s how people justify violence. It’s woven into the very fabric of society and how people debate.
That’s the environment in which social platforms exist. And the way they present people, you know, real people, doesn’t help. Every person on, say, Facebook is presented as a piece of content. They arrive to you as a small profile picture and a mix of words and images. They’re pixels. All of their depth and humanness are collapsed into a thing presented and served to you to consume. They’re dehumanised by design. Who cares if you attack them? They’re not even a person. They’re just a piece of content.
There are a lot of factors at play here. There’s the wider world where dehumanisation is an everyday rhetorical play. There’s the layout and design of social platforms. There’s a complete lack of awareness of how dependent we are on every other user of social.
This all combines to make it unclear what’s at stake when abuse and violence run rife.
Without an understanding of the conditions of life and livability, and their relative difference, we can know neither what violence destroys nor why we should care.
A victim of optimism
We can spend all day diagnosing the root of the problems we see on social media. But one of the causes is how optimistic the companies building these platforms were about the value of “connection” or “connectedness”. That is, they assumed that the more connected the world is, the better.
The problem: they haven’t reckoned with the fact that a core part of connection is the possibility of negativity. Yes, we’re all dependent on one another but that dependency is defined by the potential for hostility:
That relationality is, of course, defined in part by negativity, that is, by conflict, anger, and aggression. The destructive potential of human relations does not deny all relationality, and relational perspectives cannot evade the persistence of this potential or actual destruction of social ties. As a result, relationality is not by itself a good thing, a sign of connectedness, an ethical norm to be posited over and against destruction: rather, relationality is a vexed and ambivalent field in which the question of ethical obligation has to be worked out in light of a persistent and constitutive destructive potential.
That potential is never going away. The challenge is to build a system where it’s acknowledged, understood, and channeled:
Indeed, when the world presents as a force field of violence, the task of nonviolence is to find ways of living and acting in that world such that violence is checked or ameliorated, or its direction turned, precisely at moments when it seems to saturate that world and offer no way out.
Let’s add another layer of complexity. We’re all dependent on each other. As such, we’re dependent on the structures that bring us together. A structure that doesn’t acknowledge how it facilitates violence on a basic level, or even react well to the violence it facilitates, will make us feel uneasy.
We’ll feel vulnerable.
We are never simply vulnerable, but always vulnerable to a situation, a person, a social structure, something upon which we rely and in relation to which we are exposed. Perhaps we can say that we are vulnerable to those environmental and social structures that make our lives possible, and that when they falter, so do we. To be dependent implies vulnerability: one is vulnerable to the social structure upon which one depends, so if the structure fails, one is exposed to a precarious condition.
This precariousness can lead to a whole lot of violence, especially in a time and space where people are being systematically dehumanised and thus deemed okay to attack. Leaders have, time and time again, harnessed a sense of vulnerability to direct a population against supposed enemies.
People feel exposed. Leaders direct that feeling against different groups of people to gain power. Violence and abuse follow.
Social platforms were built with the assumption that more connection is a good thing. They didn’t reckon with the realities of connection and, as such, they didn’t build systems robust enough to manage connections. That weakness left people feeling exposed, which itself can encourage yet more violence.
To top it off, their moderation approaches explicitly and implicitly tell us who they value more and who’s worth protecting from violence. And it’s rarely those on the receiving end.
Time for a rebuild
I don’t have a solution here. Not a concrete one, anyway. Social platforms need to be rebuilt if they want to do away with, or even minimise, abuse and violence. They’re incapable of dealing with it as is. (You could say the same about society as a whole, if you want.)
The fix depends on what we want: do we want social media platforms that are the same as they are now but more welcoming to vulnerable groups, more open to discussion, and less dehumanising? Or do we want platforms that radically re-imagine what a world without abuse or violence could be?
If we want the latter, it’s not enough to say “Just ban the abusers” (assuming that we accept that forcibly removing someone from a platform is a form of violence3). Bans make sense in our current social platforms but no amount of violence, no matter how morally just you can make it seem, can create a world without violence:
When any of us commit acts of violence, we are, in and through those acts, building a more violent world… Quite apart from assiduous efforts to restrict the use of violence as means rather than an end, the actualisation of violence as a means can inadvertently become its own end, producing new violence, producing violence anew, reiterating the license, and licensing further violence.
No amount of driving people off of a social media platform for “just” reasons will stop people from driving people away from those platforms through abuse. The former just validates the latter as the ultimate way of controlling the platform.4
Social platforms aren’t special. They’re a reflection of our own societies and a reflection of their own assumptions and tools. If they want to build spaces without abuse and violence, they can’t use abuse or violence to get there. They need a commitment to equality.
Most forms of violence are committed to inequality, whether or not that commitment is explicitly thematised.
It’s hard to imagine what that would look like. There’s so much violence in the world that it’s hard to see how we’d get to a world without violence without using violence to get there. That’s the trick, really. That’s what keeps us stuck.
How do you stop abuse on social, the goal of which is removing people from the platform, without forcibly removing the abusers? Maybe my wanting to find an answer to that question is my own misguided optimism.
Social companies built swimming pools. They invited people over and told them to make their own water. Somehow, against all logic, they did. They were optimistic. But then people started pissing in the pool and social companies didn’t do anything about it.
It’s hard to get piss out of water once it’s in there. Maybe it’s time to just build a new pool.
Credit to my lovely partner for the pool analogy. ↩
Not least of all because they’re tech companies — not moral philosophy or ethics companies. ↩
There’s a difference between driving someone off, say, Twitter through abuse and banning someone for breaking rules. But the latter is the equivalent of state-sanctioned violence. ↩
A counterpoint: there are some amazing and lovely online communities that have been forged through moderators with hair-trigger bans. They generally, in my experience, pop up in the comment sections of websites with a niche, or at least narrow, focus. Draconian moderation policies can lead to vibrant and robust communities. The question becomes one of scale: it doesn’t scale. And it mightn’t work when you add layer upon layer of wildly divergent viewpoints. It does open up a secondary question, though: are super-massive social media platforms practical? Or even desirable? Probably not, no. ↩