Report: Misinformation in paid-for advertising
There are significant gaps in Australia’s regulatory framework when it comes to electoral misinformation and disinformation served through paid-for advertising. These gaps create notable vulnerabilities in Australia’s online information architecture and compound the growing threat to electoral integrity.
This small research piece demonstrates weaknesses both in platform responses to electoral misinformation served through paid-for advertising and in the transparency reports platforms submit under the Australian Code of Practice on Disinformation and Misinformation (the Code).
We submitted a range of paid-for ads containing explicit electoral misinformation for approval to run on Facebook, TikTok and X (Twitter). We found the following:
- TikTok’s system appeared to catch some political advertising and misinformation, but missed most of it. We submitted ten paid-for ads containing misinformation to test TikTok’s ad approval system, and 70% were approved. TikTok approved seven ads, rejected one, and did not review the final two after detecting the violating ad.
- Facebook’s system appeared entirely dependent on advertisers’ self-declarations about the nature of their advertising, which evidently offers insufficient protection against bad actors. We submitted twenty paid-for ads containing misinformation to test Meta’s ad approval system, and 95% were approved. Meta approved all nineteen ads that we did not self-identify as ‘political ads’, rejecting only the one ad we had voluntarily identified as political.
- X’s (Twitter’s) system neither requested self-identification for political ads nor detected or rejected the misinformation. We submitted fifteen paid-for posts containing misinformation to test X’s ad approval system, and 100% were approved and scheduled to run.
For ethical reasons, none of these ads ran: we cancelled each one after it was approved. To be clear, no misinformation was published as a result of this experiment.
Each platform reports annually on its handling of political advertising and misinformation in its transparency report under the voluntary Code. However, none of these reports adequately addressed the issues identified here.
This experiment suggests that the self-reporting mechanisms under the Code are weak and require more effective scrutiny. It is simply too easy to propagate electoral misinformation via paid-for ads: either platform policies are inadequate, or the ad approval systems platforms deploy cannot accurately detect misinformation.
Legislators and regulators must consider a risk-based, independently assessable and more comprehensive approach to social media regulation to address these vulnerabilities.