Report: Not Just Algorithms


Many of the systems and elements that platforms build into their products create safety risks for end-users, yet only a small selection of these has been identified for regulatory scrutiny. As the government reviews the Basic Online Safety Expectations and the Online Safety Act, the role of all systems and elements in creating risks needs to be comprehensively addressed.

This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders. We ran experiments across a range of platforms (TikTok, Instagram, Facebook, X and Google, depending on the experiment) and found that:

1. Content recommender systems can create risks. We created and primed ‘fake’ accounts registered as 16-year-old Australians and found that some recommender systems will promote pro-eating disorder content to children.

Specifically:

  • On TikTok, 0% of the content recommended was classified as pro-eating disorder content;
  • On Instagram, 23% of the content recommended was classified as pro-eating disorder content;
  • On X, 67% of the content recommended was classified as pro-eating disorder content (and, disturbingly, another 13% displayed self-harm imagery).

2. Content moderation systems can create risks. We reported explicitly pro-eating disorder content and found that platforms largely failed to remove it as their policies claim they will, meaning content that violates their own guidelines stayed visible on their platforms.

Specifically:

  • On TikTok, 15.5% of 110 reported posts were removed;
  • On Instagram, 6.3% of 175 reported posts were removed;
  • On X, 6.0% of 100 reported posts were removed.

3. Ad approval systems can create risks. We created 12 ‘fake’ ads that promoted dangerous weight loss techniques and behaviours and submitted them to see whether they would be approved to run; most were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run the ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)

Specifically:

  • On TikTok, 100% of the ads were approved to run;
  • On Facebook, 83% of the ads were approved to run;
  • On Google, 75% of the ads were approved to run.

4. Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.

Specifically:

  • On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers’ eating disorder apps or visit their websites can be targeted;
  • On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers’ eating disorder apps or visit their websites can be targeted;
  • On X: End-users who follow pro-eating disorder accounts, or ‘look’ like them, can be targeted;
  • On Google: End-users who search for specific words or combinations of words (including pro-eating disorder terms) or watch pro-eating disorder YouTube channels can be targeted, and probably also those who download eating disorder and mental health apps.

Risks to Australians’ safety and wellbeing are manifesting in numerous online systems, and the regulatory framework needs to incentivise platforms to proactively identify and comprehensively mitigate these risks.

To achieve this, we recommend that:

The Basic Online Safety Expectations be amended to include additional expectations that online service providers take reasonable steps regarding all systems and elements involved in the operation of their services. Of the four systems explored in this research, only recommender systems would be covered by the current proposals. Many other systems that will likewise create risks, such as search systems and engagement features, have not been investigated. Safety expectations should be broad and cover all systems and elements deployed by digital platforms.

The Online Safety Act review should implement:

  • An overarching duty of care on platforms;
  • Risk assessments and risk mitigation obligations across all systems and elements;
  • Meaningful transparency measures that make publicly visible the risks created by systems and elements and the measures taken to mitigate them; and
  • Strong accountability and enforcement mechanisms.