Responsible AI for Defence (Consultation)
Introducing the Responsible AI for Defence (RAID) Toolkit
Trusted Autonomous Systems (TAS) presents the Responsible AI for Defence (RAID) Toolkit consultation draft to industry and responsible AI experts. The RAID Toolkit is designed to help industry communicate with Defence about the ethical and legal risks of their technologies. It is not a Defence product. The Toolkit comprises three main tools for different stages of the acquisition process:
- AI Checklist (a quick set of questions to better understand the AI capability and to prompt risk identification)
- Risk Register (spreadsheet to record and track ethical and legal risks; an illustrative sketch of a register entry appears after this list)
- Legal and Ethical Assurance Program Plan (LEAPP) (more detailed consideration of ethical and legal risks, particularly useful in the lead-up to an Article 36 weapons review).
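By way of illustration only, the sketch below shows the kind of fields a spreadsheet-style risk register entry might capture for an AI capability. The field names, categories and example values are assumptions made for this sketch; they are not drawn from the RAID Risk Register template itself.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative sketch only: these fields are assumptions, not the official
# RAID Risk Register template.
@dataclass
class RiskEntry:
    risk_id: str                 # e.g. "RAID-001"
    description: str             # plain-language statement of the ethical or legal risk
    category: str                # e.g. "ethical", "legal", or both
    likelihood: str              # e.g. "rare", "possible", "likely"
    consequence: str             # e.g. "minor", "moderate", "severe"
    mitigation: str              # planned treatment or control
    owner: str                   # person or team responsible for tracking the risk
    review_date: date            # when the entry is next due for review
    related_questions: List[str] = field(default_factory=list)  # checklist items that prompted the risk

# Hypothetical example entry recording a risk identified via a checklist question.
example = RiskEntry(
    risk_id="RAID-001",
    description="Training data may under-represent operating conditions relevant to the intended mission.",
    category="ethical",
    likelihood="possible",
    consequence="moderate",
    mitigation="Document data provenance and test performance across representative scenarios.",
    owner="System safety lead",
    review_date=date(2024, 1, 31),
    related_questions=["Data quality", "Intended use"],
)
print(example.risk_id, "-", example.description)
```

Any real register should follow the template provided with the Toolkit rather than this sketch.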
Introductory Materials
A. Responsible AI for Defence Guidance Materials for Defence Industry
B. [Not yet released]
The Tools
D. RAID Checklist Complete (including Guidance Materials)
D1. RAID Checklist Question Summary
Case Studies
G. [Not yet released]
H. [Not yet released]
I. [Not yet released]
Drafting Information
J. Measurable Elements Methodology Paper
RAID resources refer to Australian domestic and international legal obligations and align ethical risk identification with Australian and international ethics frameworks, ensuring interoperability with our allies and alignment with emerging global best practice. The RAID Toolkit offers a uniquely Australian focus while drawing on international best practice, including two recent releases from the United States: the ten-year update to Department of Defense Directive 3000.09, Autonomy in Weapon Systems, and the National Institute of Standards and Technology (NIST) AI Risk Management Framework. The Toolkit adapts the OECD Framework for the Classification of AI Systems, making it suitable for considering ethical and legal risk in the military domain.
Designed for industry
Designed especially for Defence industry in accordance with TAS’ aims and objectives, the RAID Toolkit incorporates national and international best practice in responsible AI and Australia’s legal obligations under international humanitarian law, including Australia’s commitment to Article 36 reviews of all new weapons, means and methods of warfare.
While completing the RAID Toolkit will not confer acquisition advantages on Defence industry, using the documents will provide reference materials that facilitate more efficient and effective communication between industry and Defence.
Accelerating autonomy
The speed of technological innovation and the many uses of autonomous systems mean that ethical and legal risks must be identified, considered, and managed throughout a system’s lifecycle. The RAID Toolkit accelerates the operationalisation of robotics, autonomous systems and artificial intelligence (RAS-AI) capabilities for Australia.
The RAID Toolkit will assist industry in the design and delivery of autonomous systems and robotics technology, with clear translation into deployable programs and capabilities for Australian Defence. The Toolkit will help:
- build an environment in which Australian industry has the capacity and skills to deliver complex autonomous systems, both to Australian Defence and as an integral part of the global defence supply chain
- increase the speed to reach a deployable state for trusted autonomous systems
- increase the scalability and reduce the cost of autonomous systems technology solutions, and
- educate industry in the ethical and legal aspects of autonomous systems.
TAS is a government-funded research organisation that receives funding through the Next Generation Technologies Fund and from the Queensland State Government. TAS is not part of the Australian Government and is not part of Defence. TAS is not seeking feedback on behalf of Defence or the Commonwealth of Australia.
The research for the RAID Toolkit received funding from the Australian Government through Trusted Autonomous Systems, a Defence Cooperative Research Centre funded through the Next Generation Technologies Fund.