
Who Should Decide What Gets Censored?

Social media companies have taken heat for as long as they’ve been around. They’re accused of invading privacy and selling our data. They’re implicated in artificially boosting and suppressing certain points of view. And recently, changes to how Twitter moderates its users have put the spotlight on another core problem: how to decide what gets censored.

To an extent, social media companies are victims of their own wild success. When they launched in the early 2000s, a small support team could adequately serve thousands of people. A billion is different: maintaining a platform for that many people is a whole new ballgame. Even if Facebook employed a thousand full-time customer support workers (it doesn’t), each worker would have to support about three million people given Facebook’s current active user count. [1]

Another complication is that even if they could support everyone, there’s no consensus on how to manage such a large group of voices. Huge swaths of people on social media disagree on what should and shouldn’t be censored, so whatever call moderators make, some large contingent will see it as the wrong one. The “Twitter Files” leak of internal Twitter executive emails illustrates how this censorship happens in practice. In a tweet thread, journalist Matt Taibbi (@mtaibbi) recounts Twitter’s descent into over-moderation: “Some of the first tools for controlling speech were designed to combat the likes of spam and financial fraudsters. / Slowly, over time, Twitter staff and executives began to find more and more uses for these tools. Outsiders began petitioning the company to manipulate speech as well: first a little, then more often, then constantly.” [2]

Should the leaders of multi-billion-user empires be manipulating speech? On the one hand, it’s their company, and we’re free to log out and never come back. On the other, hosting billions of users gives these companies something close to a monopoly on public discourse. Single tweets have set off protests around the world. And when a company decides to bury a story or promote an agenda, it’s acting in its own best interest, not in ours.

Meta and Twitter are two notorious examples of companies that host content and try (often unsuccessfully) to moderate and verify what gets posted. This moderation extends to the algorithms that decide what appears in our feeds and what doesn’t. It doesn’t have to be this way; there is another option: separate the hosting of content from the moderation and filtering of it, with the sole exception of illegal content. Even then, whether something is illegal isn’t always clear-cut. Twitter censored posts about the infamous Hunter Biden laptop in 2020, citing supposed law enforcement claims of illegal hacking. Taibbi addresses this later in the same thread: “Although several [Twitter] sources recalled hearing about a ‘general’ warning from federal law enforcement that summer about possible foreign hacks, there’s no evidence – that I’ve seen – of any government involvement in the laptop story.” [3] In the end, Twitter censored a story on the grounds that it was illegal, even though it wasn’t.

The first step toward separating the hosting of content from its moderation is to make social media companies accountable for correctly identifying and removing illegal content, and only illegal content. Eliminating every other kind of moderation opens the floodgates, of course: our feeds would inevitably overflow with spam, fake news, conspiracy theories, and other filth. The next step fixes that.

To quell the cacophonous cesspool of social media posts, we will all have to use moderation filters. The difference is that we choose our filter; it isn’t decided for us. Independent filters would be developed that operate at the behest of the user, not the social media company. People would select from a marketplace of filters, with multiple organizations developing filters for all types of preferences. There would be filters that block spammy posts (and with a marketplace, companies would compete to build the best spam filter), filters for left-leaning preferences and others for right-leaning ones, filters for children that block inappropriate content, even filters that block everything except cat pictures (likely one of the best filters). No one sees exactly the same feed, because filters are tuned to the individual. It would also be possible to combine multiple filters, giving people the ability to precisely tune into the public feed on their own terms, as the sketch below illustrates.
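To make the idea concrete, here is a minimal sketch of what user-chosen, composable filters could look like. Everything in it is hypothetical: the Post shape, the example filters, and the composition rule (a post appears only if every selected filter passes it) are illustrative assumptions, not any platform’s real API.

```typescript
// Hypothetical shape a host might expose to third-party filters.
interface Post {
  author: string;
  text: string;
  tags: string[];
}

// A filter is just a predicate: true means "show this post".
type Filter = (post: Post) => boolean;

// Example filters a marketplace vendor might offer (toy heuristics only).
const blockSpam: Filter = (post) =>
  !/(buy now|free crypto|limited offer)/i.test(post.text);

const catsOnly: Filter = (post) => post.tags.includes("cats");

// Combining filters: a post survives only if every chosen filter approves it.
function compose(...filters: Filter[]): Filter {
  return (post) => filters.every((f) => f(post));
}

// Each user tunes their own feed; the host serves one public stream to everyone.
const myFilter = compose(blockSpam, catsOnly);

const publicFeed: Post[] = [
  { author: "alice", text: "Look at this kitten!", tags: ["cats"] },
  { author: "spambot", text: "Buy now! Free crypto!", tags: ["cats"] },
];

console.log(publicFeed.filter(myFilter)); // only alice's post gets through
```

The design point is that the filtering runs on the user’s side of the fence, in the client or in a service the user selects, so the host stores and serves content but never decides what any individual sees.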

For content creators looking to grow their audience, it would be in their interest to produce content that abides by the rules of most filters. But being banned by one filter doesn’t mean being censored by the others. Everyone gets to make their own call about what they do and don’t want to see, and if a filter proves overly restrictive, people will simply migrate to one that better suits their needs.

Host/moderator separation doesn’t perfect social media by any means, but it does empower us to see what we choose to see (and only what we choose to see). There’s no denying that many of social media’s problems originate with the people who use it. Those problems are exacerbated by the select few who manipulate what gets shown in a deliberate effort to craft the public narrative. What this model does is chip away at the immense influence that platforms like Meta and Twitter hold. And given their track record, perhaps it’s best if we keep that power out of their hands.

Jared Leshin is an author, CMO at METADEV, and founder of hypersuade.co — a Web3-immersed creative marketing agency. Read his book, Advertising and the Nature of Reality, available on Amazon.

[1] https://datareportal.com/essential-facebook-stats

[2] https://twitter.com/mtaibbi/status/1598824834334687236 (Thread 6, 7)

[3] https://twitter.com/mtaibbi/status/1598824834334687236 (Thread 22)
