Response to the Safe and Responsible AI in Australia discussion paper
Reset Tech Australia submitted to the Government’s consultation on safe and responsible AI. Our submission provided general feedback on the proposed framework, and responses to the following questions:
- Question 9: Given the importance of transparency across the AI life cycle, when and where will transparency be most critical and valuable to mitigate potential AI risks and to improve public confidence?
- Question 14: Do we support a risk-based approach for addressing potential AI risks?
- Question 20: Should a risk-based approach be voluntary or self-regulated, or be mandated through regulation?
Our recommendations can be summarised as:
- Voluntary and co-regulatory models should be avoided in developing the regulatory regime. They are ineffective and not harmonised with global best practice.
- Risk-based approaches to AI are a fruitful way forward. However, strict criteria around risk designation—including considerations around data provenance, and uses and potential uses emerging from redundant capacities—need to be drafted by regulators or legislators to avoid creating perverse incentives to inappropriately under-report risk. This is in keeping with international approaches.
- Australia’s approach to AI, including our risk frameworks, needs to embrace the precautionary principle, and this needs to be reflected in any and all risk identification or mitigation measures.
- Data provenance needs particular attention when considering transparency across the AI lifecycle and when designating risk levels.
- The role of consumer choice, including meaningful consent and privacy considerations, needs to be factored in when considering risk designations and risk mitigations.
- Particular attention needs to be given to ensuring that any regulatory regime enshrines the best interests of children and young people.