Report: Is political content over- or under-moderated?


Content moderation on social media can result in the removal, demotion or labelling of content that platforms deem to have violated their rules. Moderation is an important tool for mitigating systemic risks on platforms, especially risks arising from misinformation and disinformation. However, when moderation goes wrong, content can be either over-moderated (too much is inappropriately taken down) or under-moderated (too little content that violates a platform’s rules is taken down). Moderation can also introduce political bias if platforms over- or under-moderate content in particular ways.

This research set out to determine whether the content moderation systems of three major platforms (TikTok, Facebook and X) over- or under-moderated content relating to the Voice referendum in Australia, and whether they displayed political bias in doing so. We tested for differing levels of ‘over-moderation’: instances where platforms had inappropriately removed, demoted or labelled Yes-aligned or No-aligned content. We also examined differing levels of ‘under-moderation’: instances where platforms had failed to remove, demote or label misleading Yes-aligned or No-aligned content that violated their guidelines (a simple sketch of how such rates can be computed follows the findings below). We found the following:

  • Over-moderation: we found limited evidence of over-moderation by the platforms. The techniques used in this research tend to produce overestimates, yet even these overestimates ranged from 0.25% on Facebook to 2% on X. There is limited evidence of bias; however, X may over-moderate #VoteNo content, and Facebook appears to favour #VoteNo content in its video recommender algorithm by a factor of five.
  • Under-moderation: our findings suggest misinformation was substantially under-moderated across all three platforms. Misleading content regarding electoral processes that violated each platform’s community guidelines was not removed even after the platforms became aware of it. Between 75% and 100% of the misinformation was under-moderated, depending on the platform and the substance of the content. No political bias was detected in these processes.
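
To make these measures concrete, the sketch below shows one way over- and under-moderation rates of this kind could be computed from a hand-labelled sample of posts, split by campaign alignment to surface potential bias. The Post structure, field names and sample figures are illustrative assumptions only; they are not the study’s actual data or analysis pipeline.

```python
# Hypothetical sketch (not the study's actual pipeline): given a hand-labelled
# sample of posts, compute over- and under-moderation rates per alignment.
from dataclasses import dataclass

@dataclass
class Post:
    platform: str         # e.g. "TikTok", "Facebook", "X"
    alignment: str        # "Yes" or "No"
    violates_rules: bool  # did the post breach the platform's guidelines?
    was_actioned: bool    # was it removed, demoted or labelled?

def moderation_rates(posts):
    """Return (over_moderation_rate, under_moderation_rate) for a sample."""
    compliant = [p for p in posts if not p.violates_rules]
    violating = [p for p in posts if p.violates_rules]
    # Over-moderation: share of rule-compliant posts that were actioned anyway.
    over = sum(p.was_actioned for p in compliant) / len(compliant) if compliant else 0.0
    # Under-moderation: share of violating posts that were never actioned.
    under = sum(not p.was_actioned for p in violating) / len(violating) if violating else 0.0
    return over, under

def rates_by_alignment(posts):
    """Split the sample by campaign alignment to check for asymmetries."""
    return {
        side: moderation_rates([p for p in posts if p.alignment == side])
        for side in ("Yes", "No")
    }

if __name__ == "__main__":
    # Illustrative sample only; real labels would come from manual coding.
    sample = [
        Post("X", "No", violates_rules=False, was_actioned=True),
        Post("X", "No", violates_rules=True, was_actioned=False),
        Post("X", "Yes", violates_rules=False, was_actioned=False),
        Post("X", "Yes", violates_rules=True, was_actioned=False),
    ]
    for side, (over, under) in rates_by_alignment(sample).items():
        print(f"{side}: over-moderation {over:.0%}, under-moderation {under:.0%}")
```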

These findings suggest that the platforms’ content moderation systems were not significantly biased in moderating Yes- or No-aligned content. Consistent with our earlier research, however, there remains a substantial and potentially systemic issue of under-moderation of misinformation.

Furthermore, this research suggests that the measures from the Australian Code of Practice on Disinformation and Misinformation might not be effectively preventing the under-moderation of content. It is also evident that the signatories’ transparency reports have not identified the issues highlighted by this research.