The root cause of online hate is the business model of Big Tech
Social media can be a toxic place. Pinning the problem on anonymity, our Prime Minister went on the record saying social media was ‘filled with cowards who go anonymously onto social media and vilify people and harass them and bully them’.
Zooming out beyond the individual, however, we can see that the business model of Big Tech is at the root of this problem.
Put simply, social media companies promote, amplify and profit from hate. Algorithms are designed to amplify the most sensational and extreme content, because that is what keeps us glued to our phones and scrolling through platforms so that more value can be extracted from us.
As Frances Haugen will remind Australian lawmakers today, Facebook knows that content that elicits an extreme reaction is more likely to get a click, a comment or a reshare, and is therefore more profitable for the company.
Facebook and Instagram routinely promote content containing misogynistic and racist abuse. And yet Facebook continues to use “engagement-based ranking” algorithms that steer users towards this more extreme content and away from nuanced discussion, despite knowing how dangerous this is. Facebook’s decision to, in Haugen’s words, put “astronomical profits before people” has devastating consequences for our society.
Calls to end anonymity unwittingly expand the gaze of Big Tech into our lives by requiring identification and verification on platforms. Thanks to voracious data-harvesting practices, that gaze is already all-pervasive.
Facebook already knows your phone’s unique ID number, the IP addresses most associated with your logins, and potentially your precise GPS location as well. And as Digital Rights Watch argue, identification systems disproportionately harm marginalised groups: real-name policies can lead to real-world harms.
Women, people of colour, and people with disability are all afforded valuable and legitimately needed protection when they can participate online anonymously. Take Twitter, for example. Amnesty International found women routinely faced gendered violence and abuse on the platform.
We need to pursue evidence-based policies. Research in fact suggests that people are more aggressive online when using their real names than when they are not.
We can see this from a simple scan of the verified Twitter accounts that have been suspended, or a glance at some of the choicest comments on Nextdoor (a hyperlocal app that verifies your address and connects you with your actual neighbours). People are prepared to be hateful online even when their identity is known.
If we want to address the root of the problem, we need to regulate Big Tech. Downstream interventions, such as ending anonymity, won’t fix the systemic problem. But there are a number of useful upstream interventions the government could make to begin bringing Big Tech into line.
For starters, Australia urgently needs to update its 30-year-old data protection laws and to develop a strong data code specifically to protect children. We shouldn’t leave it up to tech companies to decide what they can and can’t do with our data; we need ground rules so they’re compelled to prioritise privacy and children’s rights.
If we’re serious about tackling misinformation and disinformation, then we should replace the failing voluntary industry code with proper laws, and resource our regulators to adequately enforce them, along with the laws we already have.
Ultimately, we need to compel social media platforms to operate in line with public expectations. To do this we must hold them accountable for the harm they cause, not the anonymous users who take advantage of the unregulated space.
By Rys Farthing and Dhakshayini Sooriyakumaran