New videos released in ‘A Method for Ethical AI in Defence’ series

Interviewees from the TAS CDLE video series

In a newly released series of videos on ethical AI, Trusted Autonomous Systems explores responsibility, governance, trust, law, and traceability for robotics, autonomous systems, and artificial intelligence.

These videos were produced by Trusted Autonomous Systems for the Centre for Defence Leadership & Ethics (CDLE) at the Australian Defence College.

The videos feature Chief Defence Scientist Professor Tanya Monro, ADF personnel, and representatives from Defence industry.

They explore the topics of responsibility, governance, trust, law, and traceability through a hypothetical science fiction scenario, ‘Striking Blind’, written by Australian Defence College Perry Group students in 2021. In the Striking Blind story, an Australian autonomous platform is deployed in a future operation with a fictional AI called ‘Mandela’.

The videos explore the ethical and legal factors associated with this scenario, highlighting the need to maintain robust oversight so that the ADF can benefit from AI and autonomous systems while addressing their complex challenges.

Designed for the full Defence learning continuum, the videos are based on the framework and pragmatic tools described in the Defence Science and Technology Group technical report A Method for Ethical AI in Defence (2021).

Screenshots from the animations produced in the video series

Using this framework, the videos cover the facets of responsibility, governance, trust, law, and traceability, together with pragmatic tools for identifying and managing ethical and legal risks.

While the method does not represent the views of the Australian Government, it provides an evidence-based, collaborative framework relevant to Australian Defence contexts of use, with ethical and legal considerations aligned with international best practice.

These videos provide both an overview and an in-depth exploration of A Method for Ethical AI in Defence and can be used within professional military education and by external stakeholders of Defence, including academia.

How to use the videos
  1. Use the animations as thought prompts within presentations and workshops on robotics, autonomous systems, and artificial intelligence (RAS-AI) amongst Defence stakeholders.
  2. Use the longer videos in a ‘flipped classroom’ model of learning for professional education and training.
  3. Use the videos in multi-stakeholder meetings to establish a shared framework within which to identify ethical and legal risks for RAS-AI projects for Defence.
  4. Use the pragmatic tools videos to establish processes for identifying and managing ethical and legal risks on RAS-AI projects.
How to cite the videos

Producer: Tara Roberson (Trusted Autonomous Systems)

Creative Director: Kate Devitt (Trusted Autonomous Systems)

Publisher: Trusted Autonomous Systems

Production Company: Explanimate

Sponsor: Centre for Defence Leadership & Ethics, Australian Defence College

With thanks to all interviewees who appeared in the videos: Stephen Bornstein, Damian Copeland, Kate Devitt, Michael Gan, Chris Hall, Sean Hamilton, Lachlan Jones, Rebecca Marlow, Tanya Monro AC, Mick Ryan AM, Lauren Sanders, Jason Scholz & Dominic Tracey.

Cite as: Roberson, T. & Devitt, S.K. (2022). Ethics of Robotics, Autonomous Systems and Artificial Intelligence Videos for Defence. [14 Videos] Trusted Autonomous Systems. https://tasdcrc.com.au/ethical-ai-defence-videos/

The transcribed video series is available on the TAS website.