Report: Functioning or Failing? An evaluation of the efficacy of the Australian Code of Practice on Disinformation and Misinformation


This report summarises extensive experimental research and advocacy conducted over 2023 and 2024. It finds that both digital platforms’ systems and Australia’s voluntary regulatory framework are not ‘fit for purpose’ when it comes to mitigating the spread of misinformation and disinformation.

Specifically, it documents systemic failings in:

  1. Platforms’ systems and processes regarding misinformation and disinformation. Notably, platforms’ content moderation systems and advertising approval systems failed to mitigate risks of spreading misinformation.
  2. The oversight and transparency measures in place under the Australian Code of Practice on Disinformation and Misinformation (the Code). There were significant discrepancies between platforms’ statements in transparency reports and the results of evidentiary testing, and the complaints process was unable to adequately resolve issues.

Taken together, these findings document a complete failure of the current approach to mitigating misinformation and disinformation in Australia.


This report recommends a more active role for regulation, and documents empirically tested models for doing so. Specifically, it recommends a ‘five pillar’ framework:

1. Placing clear responsibilities on platforms to reduce the risks posed by misinformation and disinformation. These need to come from law and regulation, not industry. For example:

  • Empowering the ACMA to intervene and replace the Code with a regulatory standard before a ‘total failure’ of the Code occurs. Where substantial deficiencies are evident, as they are currently, the ACMA should be able to act.
  • Replacing the industry-drafted and industry-supervised Australian Code of Practice on Disinformation and Misinformation with a regulator-drafted, regulator-supervised Code, developed in extensive consultation with independent researchers and civil society.
  • Considering a duty of care on platforms to protect end users from misinformation and disinformation.

2. Requiring proactive risk assessments for larger platforms. To reduce regulatory burden, these could be Australian versions of the risk assessments already produced under the EU’s Digital Services Act. Platforms would need to complete a template produced by the regulator, specifying the types of information and the level of detail required, rather than leaving the content and format for platforms to decide themselves.

3. Requiring platforms to take fair and reasonable steps to mitigate the risks identified in their risk assessments.

4. An effective transparency regime. This includes, for example, requiring:

  • Large platforms to routinely publish transparency data, in prescribed ways, without the ACMA needing to make requests. This would improve both public trust and transparency, and reduce the burden on the ACMA. Effective transparency reporting requires clear direction and clear prescriptions for what is reported.2
  • Researcher access to public interest data, enabling independent researchers to request relevant data from platforms. These requirements could mirror those established under the EU’s Digital Services Act, meaning large platforms would not have to build new systems to comply.

5. Effective accountability, including enabling regulators to take meaningful action against platforms.