Who are DIGI Transparency Reports for?


In November of 2023, we wrote to Meta’s Australian office with our concerns about a statement in their last Transparency Report. We asserted that Meta made a potentially misleading statement about their efforts to reduce misinformation via a process known as ‘labelling’ (more on how this works below). We requested that Meta provide a clarification on their labelling system in their next Transparency Report, as well as issue a public correction to their last Transparency Report. We also requested that Meta include Australia-specific data on labelling in their next report, including metadata on the content moderation process, such as how many posts are labelled after user flagging and details on labelling disputes.

Meta indicated they would provide some clarifying language in their next report, but would not issue a public correction and could not provide any further data. We then advised DIGI of our intention to trigger the complaints process. DIGI’s independent complaints sub-committee are currently deciding whether the complaint is eligible to be heard.

What are Transparency Reports?

Transparency Reports are the sole public transparency mechanism under the Australian Code of Practice on Disinformation and Misinformation (‘The Code’). Signatories to the Code prepare retrospective reports on their annual efforts to address misinformation and disinformation in Australia. The reports are reviewed by an independent consultant and then released to the public in May of each year.

What did Meta say?

Meta claimed in their last Transparency Report that they “appl[y] a warning label to content found to be false by third-party factchecking organisations”. Meta went on to claim they applied labels to “over 9 million distinct pieces of content on Facebook in Australia (including reshares)”.

What’s wrong with Meta’s statement?

Meta does not label all content that contains fact-checked falsehoods. We know this because Meta have explained as much, and because we have tested it. We provided Meta with 152 examples where they had failed to label a piece of content that repeated a claim fact-checked as false by AAP. Each piece of content sent to Meta had been unanimously assessed by a panel of four legal and subject-matter experts as matching a fact-check finding, and had been reported to the platform. However, only 8% were then labelled.

Meta’s explanation is that their technology only captures identical or near-identical versions of the original post referred to fact-checkers. This may seem academic, so here is an example of what happens in practice:

Claim disproved by AAP fact-check: Russia and Australia are the only two countries still considered sovereign

Language from the original post referred to fact-checkers: “Did you know ONLY 2 countries are still considered Sovereign? Russia & Australia”

Another post: “There’s only two countries that are still sovereign in the world. Australia and Russia”

This statement (that Meta “appl[y] a warning label to content found to be false by third-party factchecking organisations”) is potentially misleading. We polled over 1,000 Australians to gauge their interpretation of it. Without identifying Meta, we asked whether they thought the statement meant “all content containing facts found to be false by fact-checkers has a warning label applied” or “only individual posts found to be false by fact-checkers has a warning label applied”. In the poll, 44% of respondents chose “all”, 35% chose “only individual posts”, and a further 17% did not know.

After an exchange spanning December 2023 to March 2024, Meta agreed to amend the language in their next Transparency Report but stopped short of issuing a correction to previous reports. We are escalating to the complaints process in the hope that Meta can be compelled to issue a public correction.

Why does this matter?

Public understanding of fact-checking and labelling processes is already perilously low, and Transparency Reports are in effect the only way the Australian public can access information on how platforms’ systems work to address misinformation and disinformation. Meta’s characterisation of their labelling approach implies that all fact-checked misinformation is labelled, when in practice only the specific posts reviewed by fact-checkers, and near-identical copies of them, are labelled. The polling demonstrated that this is misleading.

We are raising this complaint because there is a broader question at stake about “how transparent do Transparency Reports need to be”. Are Transparency Reports meant to explain to the public how platforms manage misinformation and disinformation on their platforms, or are they just meant to meet DIGI’s requirements?

The Code has a complaints facility, but its Terms of Reference state that a complaint is only eligible if information in a Transparency Report is “materially false”. In our view there is no meaningful distinction between a misleading statement and a false statement, and Australian corporate law is very clear that misleading and deceptive conduct is prohibited. We appreciate that Meta see a distinction between “misleading” and “materially false”, and that DIGI’s independent complaints sub-committee may agree with their view. However, that would set an alarming precedent: if the Code’s Terms of Reference are interpreted to effectively shelter platforms from complaints about potentially misleading conduct, the Code would turn a blind eye to conduct that Australian law elsewhere prohibits as misleading and deceptive.

Australia lags behind the European Union, the United Kingdom, and possibly soon Canada, lacking a comprehensive transparency and accountability framework for digital platforms. Where other jurisdictions have regulatory ‘sticks’ to compel far more detailed platform disclosure, the Australian public has only these industry-regulated and industry-supervised Transparency Reports. They are all we have in Australia to help us understand how platforms shape free speech. Given this, we would hope these reports accurately inform the public about how platforms’ processes work, rather than offering a set of broad-brush statements that are effectively misleading. We can only wait and see what the independent complaints sub-committee understand the purpose of Transparency Reports to be.