This article was published on April 22, 2022

An Elon Musk-owned Twitter could restrict free speech rather than promote it

Twitter doesn't need another billionaire — it needs better moderation



In mid-April, Elon Musk made public his desire to acquire Twitter, make it a private company, and overhaul its moderation policies. Citing ideals of free speech, Musk claimed that “Twitter has become kind of the de facto town square, so it’s just really important that people have the, both the reality and the perception that they are able to speak freely within the bounds of the law.”

While making Twitter free for all “within the bounds of the law” may seem to ensure free speech in theory, in practice it would suppress the speech of Twitter’s most vulnerable users.


My team’s research into online harassment shows that when platforms fail to moderate effectively, the most marginalized people may withdraw from posting to social media as a way to keep themselves safe.

Withdrawal responses

In various research projects since 2018, we have interviewed scholars who have experienced online harassment, surveyed academics about their experiences with harassment, conducted in-depth reviews of literature detailing how knowledge workers experience online harassment, and reached out to institutions that employ knowledge workers who experience online harassment.

Overwhelmingly, throughout our various projects, we’ve noticed some common themes:

  • Individuals are targeted for online harassment on platforms like Twitter simply because they are women or members of a minority group (racialized, gender non-conforming, disabled or otherwise marginalized). The topics people post about matter less than their identities in predicting the intensity of online harassment people are subjected to.
  • Men who experience online harassment often experience a different type of harassment than women or marginalized people do. Women, for example, tend to experience more sexualized harassment, such as rape threats.
  • When people experience harassment, they seek support from their organizations, social media platforms and law enforcement, but often find the support they receive is insufficient.
  • When people do not receive adequate support from their organizations, social media platforms and law enforcement, they adopt strategies to protect themselves, including withdrawing from social media.

This last point is important, because our data shows that there is a very real risk of losing ideas in the unmoderated Twitter space that Musk says he wants to build in the name of free speech.

In other words, what Musk is proposing would likely make speech on Twitter less free than it is now: people who cannot rely on social media platforms to protect them from online harassment tend to leave when the consequences become psychologically or socially destructive.

Arenas for debate

Political economist John Stuart Mill famously wrote about the marketplace of ideas, suggesting that in an environment where ideas can be debated freely, the best ones will rise to the top. This is often used to argue that social media platforms like Twitter should do away with moderation in order to encourage constructive debate.

The implication is that a sort of invisible hand will take care of bad ideas: people will only share and engage with the best content on Twitter, and toxic content will be a small price to pay for a thriving online public sphere.

The assumption that good ideas will edge out bad ones runs counter both to Mill’s original writing and to the actual lived experience of people in minority groups who post to social media.

Mill advocated that minority ideas be given artificial preference in order to encourage constructive debate on a wide range of topics in the public interest. Importantly, this means that moderation of online harassment is key to a functioning marketplace of ideas.

Regulation of harassment

The idea that we need some form of online regulation of harassing speech is borne out by our research. Participants repeatedly told us that the consequences of online harassment were extremely damaging, ranging from burnout and an inability to complete their work to emotional and psychological trauma and even social isolation.

When targets of harassment experienced these outcomes, they often also faced economic impacts, such as stalled career progression after being unable to complete their work. Many of our participants tried reporting the harassment to social media platforms. When the support they received from the platform was dismissive or unhelpful, they were less likely to engage in the future.

When people disengage from Twitter due to widespread harassment, we lose those voices from the very online public sphere that Musk says he wants to foster. In practice, this means that women and marginalized groups are most likely to be the people who are excluded from Musk’s free speech playground.

Given that our research participants have told us they already feel Twitter’s approach to online harassment is limited at best, I would suggest that if we really want a marketplace of ideas on Twitter, we need more moderation, not less. For this reason, I’m happy that the Twitter Board of Directors is attempting to resist Musk’s hostile takeover.

This article by Jaigris Hodson, Associate Professor of Interdisciplinary Studies, Royal Roads University, is republished from The Conversation under a Creative Commons license. Read the original article.
