Submission on extremist movements and radicalism in Australia


To: Parliamentary Joint Committee on Intelligence and Security (PJCIS)
From: Reset Australia

Reset Australia would like to thank the PJCIS for the opportunity to provide input to its inquiry into extremist movements and radicalism in Australia.

Reset Australia is an independent, non-partisan organisation committed to driving public policy advocacy, research, and civic engagement agendas to strengthen our democracy. We are the Australian affiliate of Reset, the global initiative working to counter digital threats to democracy. As the Australian partner in Reset’s international network, we bring a diversity of new ideas home and provide Australian thought-leaders access to a global stage.

Executive Summary

Our submission details the increasing role that digital platforms play in facilitating extremist rhetoric, radicalisation and violence, and provides tangible policy recommendations to arrest this trend. We recognise that many factors contribute to radicalisation and the development of extremist ideology; however, the scope of this submission is limited to the role of the digital platforms.

The digital platforms have fundamentally shifted the information landscape, powered by the relentless collection of personal data. This data is used to actively manipulate users into remaining on the platforms’ products, all with the aim of serving more ads. This system has contributed to the rise of serious negative externalities - hate speech, disinformation, echo chambers and radicalisation. To approach this issue, we must first understand the system. As such, our organisation’s recommendations include:

  • Mandatory investigative and audit powers to understand how the platforms’ algorithms perpetuate harm
  • A transparent, independent regulator with a clearly defined scope
  • An enforceable disinformation code that focuses on disrupting monetisation features that promote false information
  • Comprehensive digital literacy programs

Introduction

The rise in populism, ideological fanaticism and extremist movements exists in a complex system. As such, to provide meaningful input to this inquiry, our organisation’s submission must clearly define a scope that is within our area of expertise.

This submission will illustrate how an ‘attention economy’ propped up by the digital platforms has been a contributing factor in facilitating the general rise in extremism we see in the early 21st century. We will provide a set of recommendations that focus on this aspect of radicalisation, and in doing so will address points 3 (d) and (f) of this Inquiry.

However, 3 (f) states:

The role of social media, encrypted communications platforms and the dark web in allowing extremists to communicate and organise

Through this submission, we hope to challenge the framing of this point and illustrate that the role of social media is not limited to communication or organisation: these platforms also act as a gateway to further harm, a role their design has (unintentionally) engineered them to play.

Context

Internet and social media usage has become ubiquitous in Australian life. With over 85% of Australians using social media ‘most days’, the role that digital platforms such as Facebook (Facebook, Instagram, WhatsApp), Twitter, TikTok and Google (Google Search and YouTube) play in our society has become fundamental to how we live, work and entertain ourselves. Whilst there is existing literature on how social media platforms play a role in radicalisation as a formal recruitment tool and as a space for extremist communities to interact, in this submission we go further and posit that the fundamental design of these platforms, and the ‘attention economy’ they have engendered, is a key facilitator for extremist ideologies to develop.

The Attention Economy

What all the digital platforms have in common is the commoditisation of user attention as their fundamental resource. These platforms have successfully and efficiently monetised our collective attention to fuel multi-billion dollar profits.

The business models of the digital platforms have a single objective - to capture and maintain user attention in order to maximise advertisements served and profits generated. As such, the algorithms which dictate the content and information we consume are optimised to fulfil this objective, resulting in an attention economy, i.e. the effective commoditisation of attention. To feed this machine, the platforms have built a system of unfettered and limitless personal data collection, building comprehensive profiles of their users that encapsulate their interests, vices, political leanings, triggers and vulnerabilities. This data is then used to predict our engagement behaviour, constantly calculating what content has the greatest potential for keeping us engaged. This content has been shown to lean towards the extreme and sensational, as it is more likely to generate higher engagement.
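
To make this mechanism concrete, the toy sketch below shows what an engagement-optimised ranking step can look like in principle. It is purely illustrative: the field names, scores and weighting are our own hypothetical simplifications, not a description of any platform’s actual system, whose details are not public.

```python
# Illustrative sketch only - a toy engagement-optimised ranker.
# All names and weights here are hypothetical; real platform systems are far
# more complex and their details are not publicly available.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic_match: float    # how closely the post matches the user's profiled interests (0-1)
    outrage_score: float  # how sensational or inflammatory the content is (0-1)

def predicted_engagement(post: Post) -> float:
    """Toy model: predicted engagement rises with profiled interest and with sensationalism."""
    return 0.5 * post.topic_match + 0.5 * post.outrage_score

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order the feed purely by predicted engagement - nothing in this objective
    penalises extreme or divisive content, so it tends to float to the top."""
    return sorted(candidates, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("measured-news", topic_match=0.8, outrage_score=0.1),
    Post("conspiracy-clip", topic_match=0.7, outrage_score=0.9),
])
print([p.post_id for p in feed])  # the more sensational item ranks first
```

The point of the sketch is that when the only objective is predicted engagement, sensational content needs no deliberate promotion to dominate a feed.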

Whilst not the intended design, this system has had wide-ranging impacts on our society. From the breakdown of public discourse due to targeted ‘filter bubble’ polarisation, to the manipulation of this online architecture by malicious actors, the myriad issues can be collectively characterised by the erosion of our ‘public square’ - fracturing social cohesion, decreasing trust in government and halting productive civic debate.

Whilst more research in this area is needed, the digital platform algorithms which push users to ever more ‘engaging’ content can be linked to radicalisation pathways (one example being YouTube’s recommender system). From the US Capitol Insurrection to the Christchurch Massacre, the role of the internet and social media platforms is becoming intrinsically linked with how extremist, radical and increasingly violent movements manifest. Even within ASIO’s Annual Threat Assessment, the Director-General specifically identified the rise of right-wing extremism as a concern.

These are all data points that show a problem is festering. What we need now is the ability to understand how these harms occur. This action must enshrine stringent adherence to individual privacy, and focus on how the platforms themselves are actors in this ecosystem.

How the ‘Attention Economy’ facilitates harm

The ‘attention economy’ has two key features which constitute components in a potential radicalisation pathway. To reiterate, research in this space is still evolving; these examples are not comprehensive and the full scope of these harms is still coming to light, but they begin to illustrate the emerging scale of this issue.

Proliferation of Hate Speech and Misinformation

Extremist movements, particularly right-wing extremist movements, operate heavily on an ‘anti-other’ narrative, and this is largely driven by the content consumed online. A recent report by the Centre for Resilient and Inclusive Societies (CRIS) found that key themes in online right-wing extremist discussions include anti-minority, anti-BLM and anti-Muslim rhetoric. While vilification of such groups existed before the digital age, the attention economy promotes the outrageous content that fuels such views.

This argument is supported by the Australian Muslim Advocacy Network (AMAN) report on extremist movements and radicalism, which found that Facebook’s and Twitter’s auto-detection and content review systems cannot reliably detect violations of their own policies, leading to questions about the reliability of their processes.

In addition to the direct targeting of minority communities in Australia, the experiences and identities of these communities have also been exploited to stoke division for both political and financial motives (the latter by pushing divisive content that redirects to ad-heavy sites controlled by the profiting entity). Examples include:

  • A collection of 21 Facebook groups (including one from Australia) with over 1 million followers disseminating targeted Islamophobic content, for apparent financial gain
  • A network of Facebook pages run out of the Balkans profited from the manipulation of Australian public sentiment. Posts were designed to provoke outrage on hot button issues such as Islam, refugees and political correctness, driving clicks to stolen articles in order to earn revenue from Facebook’s ad network

The way these platforms have been designed leaves them not just vulnerable and open to bad actors, but actively incentivises this inflammatory and divisive content because of the engagement it generates.

Polarisation and Echo Chambers

A related but distinct phenomenon is how the digital platforms accelerate polarisation, creating ‘filter bubbles’ and ‘echo chambers’ for discourse that are the antithesis of Habermas’ concept of the ‘public sphere’. These can be characterised as environments in which people are exposed only to facts, ideas, people and news that are consistent with their own political or sociological ideology - i.e. an information diet fuelled by confirmation bias.

Research has shown that in information-rich ecosystems, we have significant psychological limitations on our ability to process this information. From tendencies that make us seek out beliefs similar to our own (polarisation), to the visibility of other people’s choices, which leads us down ‘group-think’ paths and reduces our desire to seek out new information - our inherent cognitive heuristics take on a completely different implication in the information-laden world of the internet.

Indeed, where people do not otherwise have strong ideological convictions, social information can lead to herding and undermine collective wisdom - a plausible mechanism for the piecemeal radicalisation we are seeing.

These psychological quirks are taken advantage of and exploited by the attention economy. Algorithmic curation systems drive users towards whatever content is most engaging, regardless of their cognitive biases - pushing users down ideological rabbit holes. Whilst this has been clearly demonstrated on Twitter (due to its more public nature) as early as the 2010 US midterm elections, and across various geographies, research into this on more private channels (such as Facebook groups, messaging forums and YouTube) is regularly stalled as these companies restrict access for researchers and public officials.

The consequences of this were clearly seen in the US Capitol riots of January 6: after months of stoking narratives of a stolen election, Donald Trump incited a group of people to storm the Capitol Building. Evidence on the drivers, mechanics and implications of this event, and in particular the role of social media, is still being researched; however, it is clear that social media was not just a communication tool but a platform for radicalisation. This is especially concerning as users migrate to alternative platforms with more relaxed community guidelines and vastly different patterns of content and engagement.

The prospect of users being exposed only to their own view of the world, wherein their engagement further reinforces their existing perspectives, is deeply concerning.

Policy Approach

Too often, policy approaches to ‘online safety’ issues have focussed on content moderation. Whilst the takedown of material that is clearly false, misleading or intended to divide and misinform is important, these policy approaches will always leave us playing catch-up. The speed with which content can be distributed and amplified to Australian users (especially the types of content used to target, manipulate and exploit diverse and diaspora communities) means that these approaches do not have the adaptivity required to respond.

Reset believes effective policy to counter these harms is rooted in transparency, privacy and data rights, and public oversight. Online algorithms are an unregulated black box, and regulators should no longer stand at the other end waiting to play catch-up. Digital platforms have become deeply embedded in modern society, and these platforms themselves should be the focus of change. We must also begin to pull policy levers that look upstream. Rather than only pulling down extremist content (which remains important), we must begin to unpack these algorithmic curation systems structurally and systematically.

Current Focus: Content takedown/ moderation

  • The problem: is seen to be caused by malicious actors, whether they be terrorists, cyberbullies or perpetrators of hate speech
  • The scope: is content which is illegal (black & white)
  • The solution: is seen to be policies which require platforms to deploy more robust content moderation practices (takedown)

Future Focus: The attention economy

  • The problem: is seen to be the exploitation of user data & algorithms to maintain user attention, resulting in the amplification of extremist and sensational content
  • The scope: becomes design, practices and models that cause societal harm and division
  • The solution: is policies that promote transparency, regulate algorithmic amplification, and protect data rights and privacy

Recommendations

Transparency and Public Oversight

The current model of self-regulation and self-reporting is insufficient and disproportionate to the potential harm to the public.

Information on these harms is held solely by the digital platforms, who do not make it available for transparent, independent review under any circumstances. It seems extraordinary that the digital platform companies have all the data and tools needed to track, measure and evaluate these harms - indeed, these tools are a core part of their business - yet they make nothing available for public oversight, even as they avoid all but the most basic interventions to protect the public from harm.

An algorithmic audit is a process by which the outputs of algorithmic systems (in this case, the curation systems of the digital platforms which might radicalise users and promote disinformation) can be assessed for unfavourable, unwanted and/or harmful results.
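
As a purely illustrative sketch of what one such audit check could look like, the example below computes how often a sample of logged recommendations pointed users to independently flagged content. The log format, field names and labels are hypothetical assumptions for illustration; an actual audit regime would be defined by the regulator.

```python
# Illustrative sketch only: one simple check an algorithmic audit could run,
# assuming the regulator had access to a sample log of recommendations.
# All field names and labels are hypothetical.
from collections import Counter

FLAGGED_LABELS = frozenset({"extremist", "disinformation"})

def amplification_rate(recommendation_log):
    """Share of sampled recommendations that pointed users to independently flagged content."""
    counts = Counter(rec["content_label"] for rec in recommendation_log)
    flagged = sum(counts[label] for label in FLAGGED_LABELS)
    total = sum(counts.values())
    return flagged / total if total else 0.0

sample_log = [
    {"content_label": "news"},
    {"content_label": "extremist"},
    {"content_label": "entertainment"},
    {"content_label": "disinformation"},
]
print(f"{amplification_rate(sample_log):.0%} of sampled recommendations were flagged content")
```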

As such, an independent regulator (such as the ACMA or eSafety Commission) must be empowered with:

  • Compulsory audit and inspection powers
  • Enforced information-gathering powers that extend beyond just training data, but include evidence on policy, processes and outcomes
  • Powers to access and engage third-party expertise both within and outside government

Recommendation: Institute an audit authority under an independent regulator empowered to conduct mandatory investigations and audits on the impact of algorithmic amplification on Australian society.

These powers must operate under a system of transparency, legitimacy and due process, and should include checks and balances such as:

  • Mandatory transparency reporting
  • Avenues for recourse, objection and appeal for the digital platforms
  • Specific guidelines and scope for the audits

Additionally, there is a key gap in knowledge on this issue at the moment. Actors which serve the public interest - researchers, civil society, regulators - are operating in the dark when it comes to understanding the platforms’ algorithmic systems, with only selective, opaque and unclear access for research. This empowered regulator must work with industry to open up this access (with appropriate privacy and trade-secret disclosure arrangements) so that research can be conducted.

This must be underpinned by enforcement mechanisms, as highlighted by the second assessment of the EU’s Disinformation Code, which found that the goals under Pillar 5 (research cooperation) had largely not been achieved, with a ‘shared opinion amongst European researchers that the provision of data and search tools required to detect and analyse disinformation cases is still episodic and arbitrary, and does not respond to the full range of research needs’. Specifically, the discretionary approach the platforms take of entering into bilateral relationships with specific members of the academic and fact-checking community runs counter to the open and non-discriminatory approach needed for the levels of research, analysis and accountability required.

Recommendation: Commit to developing data sharing arrangements that empower academic researchers, civil sector actors and think tanks to undertake the requisite research on the role of social media in disinformation and radicalisation. These arrangements must preserve user privacy, but must also make good faith attempts to increase transparency on data that is vital for our understanding of disinformation (e.g. demographic data of user engagement, content engagement).
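
To illustrate how such an arrangement could preserve privacy while still being useful, the hypothetical sketch below shows engagement data aggregated into coarse demographic buckets, with small cells suppressed before release. The field names and threshold are assumptions for illustration only, not a proposed standard.

```python
# Illustrative sketch only: the kind of privacy-preserving aggregate a data-sharing
# arrangement might cover - engagement counts bucketed by coarse demographics,
# with small cells suppressed so no individual user can be singled out.
# All field names and the threshold are hypothetical.
MIN_CELL_SIZE = 100  # suppress any bucket with fewer engagements than this

raw_buckets = [
    {"age_band": "18-24", "state": "NSW", "content_label": "disinformation", "engagements": 4120},
    {"age_band": "65+",   "state": "TAS", "content_label": "disinformation", "engagements": 37},
]

shareable = [bucket for bucket in raw_buckets if bucket["engagements"] >= MIN_CELL_SIZE]
print(shareable)  # only aggregates large enough to protect individual privacy are released
```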

An example of a proposal for such an agreement can be found in a policy memo we developed called the Data Access Mandate for a Better COVID-19 Response in Australia. Whilst this memo focuses specifically on COVID-19 disinformation, this transparent data access proposal can and should be extended to other areas of disinformation research that impact our community.

The first step in crafting a solution is understanding the problem. As such, until there is greater transparency over the digital platform’s black box, we will never see true progress on this (and other) issues.

Disrupt Disinformation - an enforceable Code and disrupting monetisation incentives

To disrupt disinformation, policy must address the issues within the fundamental profit models of these digital platforms that have allowed for its propagation - that is, the underlying financial drivers that are used to propagate false information.

As highlighted within the EU’s Second Assessment of its Code of Practice on Disinformation, inconsistent implementation of measures intended to address the placement of advertisements on platforms’ own services limited progress against this commitment. Additional challenges were seen regarding the implementation of measures intended to limit ad placements on third-party websites that spread disinformation. Furthermore, the Assessment goes on to state that ‘the Code does not have a high enough public profile to put sufficient pressure for change on the platforms in this area’. These limitations were largely attributed to ineffective participation and collaboration by relevant stakeholders, including the advertising sector, fact-checking organisations and the platforms themselves.

The financial drivers which propagate disinformation represent the key opportunity for initial action, and this Objective is a valuable first step in recognising the economic incentives responsible. Whilst these measures have been referred to in the development of the ACMA’s Disinformation Code, a voluntary code will be wholly insufficient to fulfil these aims.

Recommendation: Work towards developing defined and enforceable guidance, practices and collaboration pathways that will effectively disrupt the economic drivers of disinformation, moving beyond broad commitments and self-regulatory approaches. This should include:

  • Developing a common risk assessment and escalation framework for ad accounts that propagate disinformation
  • Developing an application-approval system for actors intending to use advertising, based on agreed-upon trustworthiness indicators
  • Defining concrete ways in which transparency can be embedded in on-platform advertisements to users, as well as wider transparency measures to the public and relevant stakeholders to ensure accountability.
  • Defining pathways for greater collaboration with other relevant stakeholders, in particular the advertising sector.

Digital literacy education

The information landscape is changing, and increasingly so every day. The near-infinite ability for people to obtain and share information is a social experiment that is actively unfolding. Governments (Federal and State), whether within the curriculum or through civil sector service providers, must resource and develop education materials that assist the next generation to navigate this world.

Additionally, the root of many of these problems is the unregulated use and exploitation of personal data. As such, children must have the highest safeguards put in place so that their data cannot be used to fuel the attention economy.

Recommendation: Develop a comprehensive policy which provides young people with both the educational tools to navigate online threats and rights which protect them from certain behaviours. This should include:

  • Educational resources on how to identify fake news and take appropriate action
  • Maximum privacy protections for children - similar to the Age Appropriate Design Code in the UK
  • A commitment to building ongoing literacy in regard to personal data and privacy rights

Thank you for the opportunity to engage with this inquiry. This submission was prepared by: Matt Nguyen and Amal Wehbe.

If you require any further information, please do not hesitate to get in contact with our organisation.