Response to Meta: Safeguarding the integrity of our Federal election


On 2 May 2022, in collaboration with leading academics with expertise in mis- and disinformation, cyber abuse, and other online harms, Reset Australia sent Meta an open letter with 24 questions about safeguarding the integrity of our Federal election. On 6 May 2022, Josh Machin, Head of Public Policy for Meta in Australia, published answers to each of the questions in a public blog post titled ‘Update on Meta’s work to support the 2022 Australian election’.

The document below is Reset Australia’s response. In some instances, we have asked further questions or sought clarifications, as many of the answers are inadequate or incomplete.


We welcome the opportunity for factual clarification of the public record, and for this transparent and robust public exchange regarding an issue that is fundamental to the functioning of our democracy: Meta’s impact on the integrity of our Federal election. The public deserves much more detail and depth than was provided in your March 2022 blog post (titled ‘How Meta is preparing for the 2022 Australian election’). In the absence of mandatory transparency measures through binding regulation, the behaviour of untrustworthy companies such as Meta must be closely scrutinised, particularly during crucial moments such as the final week of the election campaign.

Our intention in writing to you was to enable a broader set of stakeholders to learn more about the details of your election plans, including the general public, civil society organisations that do not have contractual arrangements with you (as is the case with ASPI and First Draft), and a wider group of academics and experts (including the signatories to the original letter).

As stated in our original letter to you, “adequate regulatory frameworks are not yet in place” to holistically and systematically address the spread of mis- and disinformation and hate speech on digital platforms.

Your response fails to acknowledge that the Australian Code of Practice on Disinformation and Misinformation, drafted and administered by industry group DIGI (of which you are a founding member), has been critiqued on multiple grounds. The March 2022 report by the House Select Committee on Social Media and Online Safety and the June 2021 report by the Australian Communications and Media Authority (ACMA), which assessed the Code, both point to the need for stronger, more systematic regulation of our digital information ecosystem.

Given that the original ACMA position paper for the Australian Code drew heavily on the EU Code of Practice on Disinformation, it is significant that, in April 2022, the landmark EU legislation the Digital Services Act (DSA), companion to the Digital Markets Act (DMA), was approved. The DSA represents a paradigm shift in tech regulation: it sets rules and standards for algorithmic systems in digital media markets, requiring greater transparency about platforms’ data and algorithms, including audits and fines of up to six percent of annual global turnover for repeated infringements. Post-election, Australia has an opportunity to reduce the widening gap between its regulatory framework and the EU’s, and to consider which elements of the DSA could be applicable in our context (refer to the Reset Australia policy brief titled ‘The future of digital regulation in Australia: Five policy principles for a safer digital world’ for further detail regarding aspects of the DSA that are relevant to Australia).

Given the inadequacies of the current Code, Meta’s reporting against it in its recent transparency report does not assist us with the task of evaluating the efficacy of Meta’s election safeguards. In its June 2021 report, ACMA stated that signatories “lacked systematic data, metrics or key performance indicators (KPIs) that establish a baseline and enable the tracking of platform and industry performance against code outcomes over time”. An example is Meta’s statistic that it removed 14 million pieces of COVID-19 misinformation content between March 2020 and December 2020. This figure lacks any baseline or comparator and hence gives no indication of the effectiveness of Meta’s content moderation systems.

Further to this, as former Facebook executive and whistleblower Frances Haugen stated in testimony before the House Select Committee on Social Media and Online Safety, social media giants have kept the online safety discourse focused on content moderation systems that deal with harmful and illegal content downstream, rather than directing attention upstream to the algorithms that amplify this content in the first instance. These algorithms “have so much sway over our democratic outcomes”, she warned. Meta’s recent transparency report makes no mention of how harmful and misleading content is amplified by platform algorithms.