Algorithms as a weapon against women: How YouTube lures boys and young men into the ‘Manosphere’

Executive Summary

This research documents how YouTube’s algorithms contribute to promoting misogynistic, anti-feminist and other extremist content to Australian boys and young men. Using experimental accounts, this research tracks the content that YouTube and its new ‘YouTube Shorts’ feature routinely recommend to boys and young men.

This short-term, qualitative study involved analysing the algorithmic recommendations and trajectories provided to 10 experimental accounts (see the illustrative sketch after this list):

  • 4 boys under 18, who followed content at different points along the ideological spectrum, from more mainstream to extreme sources and influencers
  • 4 young men over 18, who followed content at different points along the ideological spectrum, from more mainstream to extreme sources and influencers
  • 2 blank control accounts that did not deliberately seek out or engage with any particular content, but instead followed the videos offered by YouTube’s recommendations.
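
To make the design concrete, the account setup can be represented schematically. The Python sketch below is purely illustrative and assumes hypothetical labels for accounts, ideological positions and logged fields; it is not the study’s actual tooling and does not interact with YouTube itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

# Illustrative only: account labels, spectrum categories and field names are
# assumptions made for this sketch, not the study's actual tooling or data.

SPECTRUM = ["mainstream", "moderate", "fringe", "extreme"]  # illustrative points only

@dataclass
class ExperimentalAccount:
    account_id: str
    age_group: str     # "under_18", "over_18", or "unspecified" (control)
    seed_content: str  # point on the ideological spectrum, or "none" for controls

@dataclass
class RecommendationLog:
    account_id: str
    surface: str       # e.g. "home_feed" or "shorts"
    video_id: str
    channel: str
    observed_at: datetime

# The ten accounts described above: 4 under-18, 4 over-18, 2 blank controls.
ACCOUNTS = (
    [ExperimentalAccount(f"boy_{i + 1}", "under_18", leaning)
     for i, leaning in enumerate(SPECTRUM)]
    + [ExperimentalAccount(f"man_{i + 1}", "over_18", leaning)
       for i, leaning in enumerate(SPECTRUM)]
    + [ExperimentalAccount(f"control_{i}", "unspecified", "none") for i in (1, 2)]
)

def log_recommendation(log: List[RecommendationLog], account: ExperimentalAccount,
                       surface: str, video_id: str, channel: str) -> None:
    """Record one recommended video observed for an account on a given surface."""
    log.append(RecommendationLog(account.account_id, surface, video_id,
                                 channel, datetime.now(timezone.utc)))
```

A structure like this simply captures, per account, which videos were recommended, on which surface (main feed versus Shorts), and when.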

As the study progressed, each account was recommended videos with messages antagonistic towards women and feminism. Following these recommendations, and viewing and liking the suggested videos, resulted in more overtly misogynistic ‘Manosphere’ and ‘incel’ content being recommended.

The study found that while the main YouTube interface recommended content broadly similar to the topics the accounts had originally engaged with, the newer short-form video feature, YouTube Shorts, appears to operate quite differently. Shorts seems to optimise more aggressively in response to user behaviour and to surface more extreme videos within a relatively brief timeframe. On Shorts, all accounts were shown strikingly similar, and sometimes identical, content from right-wing and self-described ‘alt-right’ content creators. The algorithm made no distinction between the underage and adult accounts in the content it served.

This type of content promotes warped views of masculinity, and encourages hateful, misogynistic and dehumanising attitudes toward women. Such content can also serve as a gateway into more extreme ideologies and online communities, and in some cases has led to violent attacks. There are growing calls in a number of countries, including Australia, to categorise so-called ‘incel’ attacks, which are motivated by extremist misogyny, as a form of terrorism.

As Australia seeks to confront the challenges of violence against women and pursue gender equality, it is concerning that a major social media platform appears to be actively promoting such content to its young male viewers without any deliberate prompting.

Ahead of the May 2022 Federal election, both major parties have made women’s safety and wellbeing a key component of their policy platforms. The April 2022 Federal Budget, which was passed by both parties, also made significant commitments to supporting women’s safety, equality and wellbeing. In particular, the Budget noted the need to address the drivers of violence against women.

The drivers of violence are complex, but dehumanising and disrespectful attitudes towards women clearly play a central role. The barrage of social media content which promotes misogynistic views and unhealthy perceptions of dating and relationships risks undermining the government’s efforts to prevent violence and to educate the community, particularly young men.

Based on these findings, we believe Australia needs to comprehensively reconsider the regulatory framework governing digital platforms to ensure that systemic, community-level risks, such as those posed by YouTube’s algorithms, are adequately addressed. This includes:

  1. Focusing on community and societal risks, not only individual risks: Expand the definition of ‘online harms’ to address gender-based violence and violence against women, girls, trans and gender diverse people. The definitions of online safety that currently underpin the Online Safety Act and other digital regulation focus on individual harms and fail to recognise societal or community threats.
  2. Regulation of systems and processes, not only content moderation: Content-based approaches to regulation, such as those used in Australia, have had limited impact on the proliferation of harmful content online in other contexts. In particular, a major problem with these approaches is that although they can help remove specific pieces of harmful content, they do not address the algorithmic amplification of extremist content. Mitigating systems and process risks requires introducing duties of care across the digital regulatory landscape, including the Online Safety Act and the Privacy Act.
  3. Platform accountability and transparency: To address these risks systemically, we recommend that regulation be designed to require transparency from online platforms and to compel them to demonstrate that their policies, processes and systems are designed and implemented with respect to online harms. This might include requirements for algorithmic auditing, or data access for researchers and regulators to assess the effects of platform systems on harmful content and outcomes.

  4. Strong regulators and enforced regulation: Ensure regulation is strong and enforced by moving away from self- and co-regulation, and by resourcing and joining up regulators. Given the limited impact that self-regulation by social media platforms has had on this type of content, it is increasingly evident that government regulation of platforms such as YouTube is necessary.