Report: Electoral process misinformation


This rapid investigation set out to explore whether platforms remove electoral process misinformation once they are made aware of it through user reporting. We found, reported and monitored a small number of posts on TikTok (25), Facebook (24) and X, formerly Twitter (50), that contained clear electoral process misinformation. This content largely centred on claims that Australian elections had been rigged, that ballots had been or would be stolen, or that the Voice referendum vote was invalid or illegal.

Electoral process misinformation stands to harm both the Yes and No campaigns.

According to each platform’s community guidelines, this type of content, once detected, should be:

  • TikTok: Removed.
  • Facebook: Demoted in prevalence.
  • X: Either removed or labelled.

However, we found that none of the platforms is effectively enforcing its community guidelines, nor are they implementing meaningful responses in line with their commitments under the Australian Code of Practice on Disinformation and Misinformation. Specifically:

1. Platforms appear to have few effective ‘organic’ content moderation processes to detect and respond to electoral process misinformation and disinformation.

2. Reporting electoral process misinformation appears to make little difference on Facebook and X, while it makes a moderate difference on TikTok.

3. Electoral process misinformation continues to grow in reach even after it is reported, suggesting that it is not being adequately de-amplified. On TikTok, growth slows modestly after reporting; on Facebook, it decelerates significantly.

4. The content that becomes unavailable or is labelled does not appear to differ substantively from the content that remains, suggesting that content moderation operates as 'whack-a-mole' rather than as a systematic process.