New TAS project to develop a Detect and Avoid (DAA) Design, Test and Evaluation (DT&E) guideline for low-risk, uncontrolled airspace outside the airport environment

By Tom Putland – Director of Autonomy Accreditation – Air

The development of best practice policy, appropriate standards, and a strong assurance and accreditation culture has the potential to enhance innovation and support market growth for drones with autonomous capabilities in the maritime, air and land domains.

The Trusted Autonomous Systems (TAS) National Accreditation Support Facility Pathfinder Project (NASF-P), under the Assurance of Autonomy Activity, represents an opportunity to unlock Queensland’s, and by extension Australia’s, capacity to translate autonomous system innovation into deployments, given the test facilities already operating in Queensland, the identified industry need, and strong government backing.

The overarching purpose of the NASF-P is to:

  • Make it easier to design, build, test, assure, accredit, and operate an autonomous system in Australia, without compromising safety;
  • Support and promote Queensland’s existing test ranges;
  • Encourage both domestic and international businesses to operate in, and use Queensland as a base for, testing, assuring, and accrediting autonomous systems; and
  • Investigate, design, and facilitate the creation of an appropriate, independent third-party entity that can continue to support the design, build, test, assurance, accreditation, and operation of autonomous systems in Australia, by bridging the gap between industry, operators, and regulators.

New project: Development of a DAA DT&E guideline

In the air domain specifically, the largest impediment to the integration of unmanned aircraft (regardless of autonomy) into the National Airspace System (NAS) is complying with the intent[1] of the See and Avoid (SAA) requirements detailed within Regulations 163 and 163A of the Civil Aviation Regulations 1988 (Cth), particularly in Beyond Visual Line of Sight (BVLOS) operations. In lieu of a solution, operators and the regulator must go through a labour-intensive stakeholder engagement process with all aviation parties to prevent mid-air collisions. This approach will not scale to the projected numbers of Unmanned Aerial Systems (UAS) operations into the future.

An autonomous/highly automated DAA system that complies with the safety objectives of CAR 163 and CAR 163A is a key enabling technology for integration into complex Australian airspace and will form an integral part of the safety assurance framework for UAS operations into the future.

The TAS team have initiated a new project, led by Revolution Aerospace, to develop a new Detect and Avoid (DAA) Design, Test and Evaluation (DT&E) guideline for low-risk, uncontrolled airspace outside the airport environment. This is particularly relevant to Australian unmanned aircraft operations.

Dr Terry Martin (CEO) and the Revolution Aerospace team have a wealth of world-leading experience in Detect and Avoid, Machine Learning, Safety Assurance, Verification & Validation (V&V), and Test & Evaluation (T&E), representing Australia at international forums such as NATO, JARUS, and RTCA (to name a few).

Dr Terrence Martin, CEO of Revolution Aerospace

Terry is supported in this project by TAS’s Tom Putland – Director of Autonomy Assurance in the Air Domain. Tom has extensive experience in the regulation and safety assurance of UAS, having previously worked for CASA, and being CASA’s representative at JARUS and other international working groups. This team has the expertise, drive, and ability to solve this critical airspace problem for Australia.

Tom Putland, Director of Autonomy Assurance – Air at TAS

The DT&E guideline will create a process acceptable to CASA[2] that allows:

  • the derivation of high-level safety objectives;
  • the development of Verification and Validation (V&V) requirements;
  • the conduct of relevant simulations and tests to demonstrate compliance with the safety objectives; and
  • the collation of the compliance process and data into a package to support regulators (e.g. CASA) in issuing an approval for the operation.

TAS will engage closely with key stakeholders, including CASA, the Australian Association for Unmanned Systems (AAUS) and other industry members to ensure the process reflects current best practice, and is appropriate and useful for the Australian aviation industry. The intent is for the new process to be available for testing by the end of the year.

Upon completion of this project, a subsequent project will be undertaken with industry partners and the Queensland Flight Test Range, Australia’s first and only commercial flight test range, to apply and comply with this guideline. Completion of these projects will increase the access and flexibility available for unmanned aircraft operations in Australian airspace.

If you are interested in learning more about these projects, being involved as an industry test partner, or discussing specifics (e.g. EO/IR sensing, classifiers/learning assurance, alerting and avoidance logic, formal verification, T&E), please email tom.putland@tasdcrc.com.au.

Other NASF-P projects underway

The NASF-P team have a number of projects underway, including:

  • Preparation of a Body of Knowledge on the assurance and accreditation of autonomous systems;
  • Maritime domain: development of a repeatable, regulator-accepted methodology to demonstrate compliance with COLREGS for autonomous and remotely operated vessels; and
  • Preparation of a business case for a new, independent, National Accreditation Support Facility, based in Queensland, that will better connect operators and regulators to facilitate more efficient assurance and accreditation.

If you would like to find out more about our work, or provide feedback on where you see the key risks and opportunities for the autonomous systems industry in Australia, please contact us at NASFP@tasdcrc.com.au.

 

[1] Through an approval under Subregulation 101.073(2) and Regulation 101.029 of the Civil Aviation Safety Regulations 1998 (Cth).

[2] Regular engagement with CASA will help ensure the final process is acceptable to them, but note that it has not been approved at this point in time. In the future, the process is intended to form part of an acceptable means of compliance for a BVLOS approval.

Video series – Introduction to ethical robotics, autonomous systems and artificial intelligence (RAS-AI) in Defence and pragmatic tools to manage these risks.

by Dr Kate Devitt, Chief Scientist, Trusted Autonomous Systems

 

“Military ethics should be considered as a core competency that needs to be updated and refreshed if it is to be maintained”

Inspector General ADF, 2020, p.508

The Centre for Defence Leadership and Ethics at the Australian Defence College has commissioned Trusted Autonomous Systems to produce videos and discussion prompts on the ethics of robotics, autonomous systems and artificial intelligence.

These videos for Defence members are intended to build knowledge of theoretical frameworks relevant to potential uses of RAS-AI to improve ethical decision-making and autonomy across warfighting and rear-echelon contexts in Defence.

Major General Mick Ryan says that he can “foresee a day where instead of having one autonomous system for ten or a hundred people, the ADF will have a ratio that’s the opposite. We might have a hundred or a thousand for every person in the ADF”.

He asks, “how do we team robotics, autonomous systems and artificial intelligence (RAS-AI) with people in a way that makes us more likely to be successful in missions, from warfighting through to humanitarian assistance, disaster relief; and do it in a way that accords with the values of Australia and our institutional values?”

Wing Commander Michael Gan says, “robotics and autonomous systems have a great deal of utility: They can reduce casualties, reduce risk, they can be operated in areas that may be radioactive or unsafe for personnel. They can also use their capabilities to go through large amounts of data and be effective or respond very quickly to rapidly emerging threats”.

He goes on to say “however, because a lot of this is using some sort of autonomous reasoning to make decisions, we have to make sure that we have a connection with the decisions that are being made, whether it is in the building phase, whether it is in the training phase, whether it is in the data, which underpins the artificial intelligence, robotic autonomous systems”.

Trusted Autonomous Systems CEO, Professor Jason Scholz points out that “Defence has a set of behaviours about acting with purpose for defence and the nation; being adaptable, innovative, and agile; be collaborative and team-focused; and to be accountable and trustworthy to reflect, learn and improve; and to be inclusive and value others. All of these values and behaviours are included whether we are a ‘robotic and autonomous systems’ augmented force, or not”.

Managing Director of Athena Artificial Intelligence Mr Stephen Bornstein says, “When it comes to RAS-AI in Defence and ethics associated with them… it’s very important to consider how a given company or a given AI supplier is establishing trust in that RAS-AI product”. He says that “ultimately, that assurance should be the most important thing before we start giving technologies to soldiers, seamen, or aircrew”.

Personnel engaging with the content should gain a clearer idea of how to reflect on the ethical issues that affect human and RAS-AI decision-making in Defence contexts of use, including the limits and affordances of humans and technologies in enhancing ethical decision-making, as well as frameworks to support RAS-AI development, evaluation, acquisition, deployment and review in Defence.

The videos draw on a framework from The Defence Science & Technology report ‘A Method for Ethical AI in Defence’ to help Defence operators, commanders, testers or designers ask five key questions about the technologies they’re working with.

  1. Responsibility – who is responsible for AI?
  2. Governance – how is AI controlled?
  3. Trust – how can AI be trusted?
  4. Law – how can AI be used lawfully?
  5. Traceability – how are the actions of AI recorded?

The videos consider four tools that may assist in identifying, managing and mitigating ethical risks in Defence AI systems.

https://theodi.org/article/data-ethics-canvas/

The ‘Data Ethics Canvas’ by the Open Data Institute encourages you to ask important questions about projects that use data and reflect on the responses, such as the security and privacy of data collected and used, who could be negatively affected, and how to minimise negative impacts.

The AI Ethics Checklist ensures AI developers know the military context the AI is for, the sorts of decisions being made, how to create the right scenarios, and how to employ the appropriate subject-matter experts to evaluate, verify and validate the AI.

The Ethical AI Risk Matrix is a project risk management tool used to identify and describe risks and their proposed treatments. The matrix assigns individuals and groups responsibility for reducing ethical risk through concrete actions on an agreed timeline and review schedule.
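For illustration only, one entry of such a risk matrix could be captured as a simple record like the sketch below; the report does not prescribe any implementation, and every field name and example value here is an assumption, not taken from the source.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EthicalRisk:
    """One row of an illustrative ethical AI risk matrix (field names are assumptions)."""
    description: str   # the identified ethical risk
    facet: str         # e.g. responsibility, governance, trust, law, traceability
    owner: str         # individual or group responsible for the treatment
    treatment: str     # concrete action proposed to reduce the risk
    due: date          # agreed timeline for completing the action
    review: date       # scheduled date to review the treatment's effect

# Hypothetical example entry
risk = EthicalRisk(
    description="Classifier may mislabel protected symbols in poor visibility",
    facet="trust",
    owner="Test & Evaluation lead",
    treatment="Extend V&V scenarios to low-visibility imagery",
    due=date(2021, 9, 30),
    review=date(2021, 12, 31),
)
print(risk.owner)  # prints: Test & Evaluation lead
```

A register of such entries, sorted by due date, would give the agreed timeline and review schedule the matrix calls for.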

International Weapons Review delivering Law for AI Basics workshops, part of the TAS Ethics Uplift Program

Article 36 of Additional Protocol I (1977) to the Geneva Conventions (1949) requires:

“In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party”.

The rise of Robotics, Autonomous Systems and Artificial Intelligence (RAS-AI) requires new methods to ensure compliance with the requirements of an Article 36 review of all new weapons, means or methods of warfare.

On 17 May 2021, Trusted Autonomous Systems (TAS) hosted the International Weapons Review (IWR) ‘Law for AI Basics’ course for TAS participants and associated research personnel. IWR’s legal experts introduced international and domestic legal issues relevant to the design and acquisition of AI systems for use by Defence in Australia, and identified legal inputs to ethical AI design in Defence.

The workshop covered Australian legal and ethical compliance requirements for trusted autonomous systems: the Article 36 review processes and issues relevant to autonomous systems, the five facets of ethical AI in Defence (responsibility, governance, trust, law and traceability), and the requirements of the Legal and Ethical Assurance Program Plan (LEAPP). Workshops are available to stakeholders of Australian Defence, including defence industry, government, universities, the ADF and Defence.

Human machine teaming with RAS-AI will be a key ADF capability in the future. RAS-AI may increase safety for personnel, removing them from high-threat environments; increase the fidelity and speed of human awareness and decision-making; and reduce the cost and risk to manned platforms.

Development of, and investment in, RAS-AI must be informed by ethical and legal considerations and constraints. To achieve this, in February 2021 TAS commenced the Ethics Uplift Program (EUP) to provide immediate and ongoing assistance to TAS participants through consultation, advice and policy development, supported by case analysis, education and enculturation.

The training is designed to enable participants to understand, analyse and evaluate legal issues and risks relevant to the design and development of trusted autonomous systems, using case studies. This introductory course is aimed at technical staff responsible for the design and development of AI systems and managers responsible for oversight of technical staff. IWR, led by Dr Lauren Sanders and Mr Damian Copeland, offers unique expertise in international law relevant to the development of new weapons, means and methods of warfare, including Article 36 weapon review requirements.

Dr Lauren Sanders is a legal practitioner whose doctoral studies were in international criminal law accountability measures, and whose expertise is in the practice of international humanitarian law including advising on the accreditation and use of new and novel weapons technology. She has over twenty years of military experience and has advised the ADF on the laws applicable to military operations in Iraq and Afghanistan and domestic terrorism operations.

Damian Copeland is a legal practitioner whose expertise and doctoral studies are in the Article 36 legal review of weapons, specifically focused on weapons and systems enhanced by Artificial Intelligence.  He is a weapons law expert with over twenty-five years military service, including multiple operational deployments where he has extensive experience in the application of operational law in support of ADF operations.

Learn more about the range of IWR services at https://internationalweaponsreview.com/

New TAS project to develop an Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous and Remotely Operated Vessels in 2021

By Rachel Horne – Assurance of Autonomy Lead/Director of Autonomy Accreditation – Maritime

Autonomous systems technology offers the ability to increase safety and efficiency, while lowering economic and environmental cost. While some level of autonomy has been seen in commercial products for a number of years, for example the basic thermostat or the Roomba, in the last five years there has been a rapid acceleration in the capacity and availability of unmanned aerial vehicles known as drones, and in uncrewed surface and sub-surface vessels, also called autonomous vessels.

For this rapid acceleration to continue, and to ensure this technology can integrate into commercial and defence operations, autonomous systems need to be trusted by the government, regulators, operators, and the broader community. An integral part of gaining trust is having a clear, well-tailored regulatory framework, consistent assurance requirements and agreed assurance methodology, and support from the regulator. These same factors also facilitate innovation and promote growth in industry by providing certainty.

Coral AUV. Image by AIMS

New project: Development of an Australian Code of Practice

The NASF-P (National Accreditation Support Facility Pathfinder) team have commenced a number of new projects to address the challenges outlined above. One of these projects addresses the lack of tailored standards for autonomous and remotely operated vessels by developing an Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous and Remotely Operated Vessels. This Code will represent best practice and is intended to give industry certainty by providing a set of regulator-acknowledged standards for designing, constructing, surveying and operating autonomous and remotely operated vessels. The Code of Practice will be voluntary and will be updated periodically.

This project, led by Maaike Vanderkooi on behalf of TAS, will begin with a review of available Codes of Practice and Standards, for example the UK Maritime Autonomous Surface Ships (MASS) UK Industry Conduct Principles and Code of Practice, and Lloyd’s Register Unmanned Marine Systems Code. The project will then develop a draft Australian Code of Practice, using input from key stakeholders, which will then be released for broader public consultation. The intent is to release a draft Code of Practice by October 2021, which will be available for use by industry and the regulator.

Maaike Vanderkooi has been chosen to lead the project as a result of her extensive experience in developing regulatory frameworks in the maritime, heavy vehicle and ports arenas, and her experience in developing, reviewing and impact assessing commercial vessel standards.

Maaike Vanderkooi

TAS will engage closely with key stakeholders, including the Australian Maritime Safety Authority (AMSA), the Australian Association for Unmanned Systems (AAUS) Maritime Working Group, the Marine Surveyors Association Inc, and the Australasian Institute of Marine Surveyors, throughout this project to ensure the Code of Practice is practical and appropriate for use by Australian industry and the regulator. There will also be opportunities for input by interested parties throughout the project.

Engagement opportunities

  • We are looking for people with direct experience applying current Codes of Practice or Standards to autonomous and remotely operated vessels, to discuss their experience and provide feedback to us in May 2021;
  • We will hold a series of workshops with key stakeholders between May and August 2021; and
  • We will release the draft Code of Practice for public consultation in August 2021, and welcome all thoughts and feedback.

If you would like to contact us in relation to this project, to offer feedback, suggestions, or your assistance, please email us at NASFP@tasdcrc.com.au.

QUT WAM-V in operation at AIMS. Image by AIMS

Other NASF-P projects underway

The NASF-P team have a number of projects underway, including:

  • Preparation of a Body of Knowledge on the assurance and accreditation of autonomous systems;
  • Air domain: development of an end-to-end acceptable process for the design, build, test and evaluation of autonomous detect and avoid (DAA) systems for certain types of airspace;
  • Maritime domain: development of a repeatable, regulator-accepted methodology to demonstrate compliance with COLREGS for autonomous and remotely operated vessels; and
  • Preparation of a business case for a new, independent, National Accreditation Support Facility, based in Queensland, that will better connect operators and regulators to facilitate more efficient assurance and accreditation.

The NASF-P team recently worked with Queensland AI Hub, Australian Institute of Marine Science, and AMC Search, supported by Advance Queensland, to deliver a world-first pilot course ‘Autonomous Marine Systems Fundamentals for Marine Surveyors’. This course, which was created to address the gap in experience with autonomous marine systems amongst the accredited marine surveyor community, had nine participants from around Queensland.

Participants of the pilot course at AIMS, March 2021. Image by TAS

If you would like to find out more about our work, or provide feedback on where you see the key risks and opportunities for the autonomous systems industry in Australia, please contact us at NASFP@tasdcrc.com.au.

TAS Research Fellows (three of the four) featured in a University of Queensland Blog

Read about the work of three of the Trusted Autonomous Systems Research Fellows here.

Autonomous Marine Systems Fundamentals for Marine Surveyors

Trusted Autonomous Systems and the Queensland AI Hub supported the delivery of a world-first pilot course, ‘Autonomous Marine Systems Fundamentals for Marine Surveyors’, by Australian Maritime College Search (AMC Search). The course ran at the Australian Institute of Marine Science (AIMS) in Townsville on 29 and 30 March.

The course introduced participants to autonomous and unmanned vessel technology, key terminology, operating concepts, and system capabilities, limitations and risks. The participants were qualified, accredited marine surveyors and other professionals, including representatives of the Australian Maritime Safety Authority (AMSA), the Port of Townsville and Maritime Safety Queensland (MSQ), who will benefit from increasing their knowledge and understanding of autonomous marine systems.

The course enabled participants to more effectively undertake survey and other assurance and accreditation activities relating to autonomous and unmanned vessel technology. By upskilling ten accredited marine surveyors and professionals, this pilot course will address a gap in the current assurance and accreditation framework for autonomous marine systems.

This pilot course will make it more efficient and effective for Queensland businesses to build, test and certify their autonomous marine systems. It is a key opportunity to utilise the AIMS maritime facility and to draw on the AI expertise of TAS and AMC Search. These learnings may then lead to additional opportunities to accelerate autonomous maritime systems development in Queensland as part of our commitment to the Smart Drone State.

This world-first Autonomous Systems Training for Marine Surveyors was developed in partnership between Australian Maritime College Search (AMC Search), Trusted Autonomous Systems (TAS), Queensland AI Hub and the Australian Institute of Marine Science (AIMS).

To view or download our media release, click here.

A Method for Ethical AI in Defence

Today the Australian Department of Defence released ‘A Method for Ethical AI in Defence’, an outcome of a 2019 workshop attended by over 100 representatives from Defence, other Australian government agencies, industry, academia, international organisations and the media. The workshop was facilitated by Defence Science & Technology Group, RAAF Plan Jericho and the Trusted Autonomous Systems Defence Cooperative Research Centre. Defence notes that the report outlines a pragmatic ethical methodology for communication between software engineers, integrators and operators during the development and operation of Artificial Intelligence (AI) projects in Defence.

Trusted Autonomous Systems CEO Professor Jason Scholz said “Trusted Autonomous Systems are very pleased to partner with Defence on this critical issue of ethics in AI. Ethics is a fundamental consideration across the game-changing Projects that TAS are bringing together with Defence, Industry and Research Institutions.”

AI and human machine teaming will be a key capability in the future of Australian Defence systems. Chief Defence Scientist Tanya Monro notes “…AI technologies offer many benefits such as saving lives by removing humans from high-threat environments and improving Australian advantage by providing more in-depth and faster situational awareness”.

Air Vice-Marshal Cath Roberts, Head of Air Force Capability said “artificial intelligence and human-machine teaming will play a pivotal role for air and space power into the future… We need to ensure that ethical, moral and legal issues are resolved at the same pace as the technology is developed. This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control”. These comments are equally important to the other service arms.

In 2019, the Trusted Autonomous Systems Defence CRC (TASDCRC) commenced a six-year Programme on the Ethics and Law of Trusted Autonomous Systems valued at $9M. Over the past two years the activity has conducted workshops, engagements and consultation with participants and stakeholders of the Centre, contributing to ADF strategy, producing diverse publications and influencing the design of trusted autonomous systems such as the game-changing Athena AI ethical and legal decision support system.

From 2021 the Ethics Uplift Program (EUP) of the TASDCRC will offer ongoing assistance to Centre participants through consultation, advice and policy development, supported by case analysis, education and enculturation.

Trusted Autonomous Systems affiliate researchers and employees participate in a wide range of events considering the ethics and law of RAS-AI, including those convened by the ICRC, UNIDIR, SIPRI and NATO.

TASDCRC is a non-government participant in the United Nations (UN) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), working to ensure that the development of autonomous systems accords with ethical principles, the law of armed conflict (LOAC) and Article 36 weapons review obligations.

The Defence Media Release reinforced that “The ethics of AI and autonomous systems is an ongoing priority and Defence is committed to developing, communicating, applying and evolving ethical AI frameworks”. Trusted Autonomous Systems are a partner to Defence on that journey. More details at https://www.dst.defence.gov.au/publication/method-ethical-ai-defence

Book release – Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare

New Oxford University Press volume released: Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare 

The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapon systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear, since the international community has yet to agree on a specific definition of Lethal Autonomous Weapon Systems and the great powers have largely refused to support an effective ban. Lethal Autonomous Weapons focuses on exploring the moral and legal issues associated with the design, development and deployment of lethal autonomous weapons.

The book features chapters by current and former TAS collaborators including CEO Prof. Jason Scholz, Chief Scientist Kate Devitt, Prof Rain Liivoja, Dr Tim McFarland, Dr Jai Galliott and Dr Bianca Baggiarini. 

Available in hard and soft copy; more details at the publisher’s site:

https://global.oup.com/academic/product/lethal-autonomous-weapons-9780197546048?cc=au&lang=en& 

Also available in soft copy on a number of platforms.  

TASDCRC Activity on Ethics and Law of Trusted Autonomous Systems

Human machine teaming with Robotic, Autonomous Systems and Artificial Intelligence (RAS-AI) will be a key capability in the future of Australian Defence systems. RAS-AI may increase safety for personnel, removing them from high-threat environments; increase the fidelity and speed of human awareness and decision-making; and reduce the cost and risk to manned platforms. This RAS-AI investment must be informed by ethical and legal considerations and constraints.

Figure 1. Athena AI, recognised by Engineers Australia as an engineering breakthrough, can identify protected objects, people and symbols, such as hospitals, in near real time for military operations using computer vision at very high probabilities. A funded TASDCRC project led by Cyborg Dynamics Engineering and Skyborne Technologies

In 2019, the Trusted Autonomous Systems Defence CRC (TASDCRC) commenced a six-year Programme on the Ethics and Law of Trusted Autonomous Systems valued at $9M. Over the past two years the activity has conducted workshops, engagements and consultation with participants and stakeholders of the Centre, contributing to ADF strategy, producing diverse publications and influencing the design of trusted autonomous systems such as the game-changing Athena AI ethical and legal decision support system (see Figure 1).

Trusted Autonomous Systems affiliate researchers and employees participate in a wide range of events considering the ethics and law of RAS-AI, including those convened by the ICRC, UNIDIR, SIPRI and NATO.

TASDCRC is a non-government participant in the United Nations (UN) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), working to ensure that the development of autonomous systems accords with ethical principles, the law of armed conflict (LOAC) and Article 36 weapons review obligations.

Law and the Future of War, University of Queensland

The University of Queensland Law and the Future of War Research Group continues to lead research to develop and promote a better understanding of international law that governs the use of trusted autonomous systems (TAS) by the Australian Defence Organisation. It further aims to contribute to the development of law, policy and doctrine to ensure that Australia’s reliance on trusted autonomous systems satisfies both humanitarian imperatives and national security interests and is consistent with Australia’s commitment to upholding international law.

Ethics Uplift Program

From February 2021, the TASDCRC Ethics Uplift Program (EUP) will offer immediate and ongoing assistance to Centre participants through consultation, advice and policy development, supported by case analysis, education and enculturation.

The objectives of the program are to:

  • Raise the level of knowledge, skills and application of ethics;
  • Build enduring ethical capacity in Australian industry and universities to service Australian RAS-AI;
  • Educate in how to build ethical and legal autonomous systems;
  • Achieve ethical RAS-AI for TASDCRC Projects; and
  • Support and contribute to the development of national policy.

The program will provide Australian industry access to the best of Australian theoretical and pragmatic expertise via universities and consultancies grounded in Defence-suitable methodologies and frameworks. The continued investment by TASDCRC with Defence and other participants is intended to accelerate and foster a sustainable capability for ethical and legal sovereign RAS-AI in Australia.

To express interest in providing services to the EUP, contact Chief Scientist Dr Kate Devitt via info@tasdcrc.com.au.

The Trusted Autonomous Systems submission to the Australian AI Action Plan

Trusted Autonomous Systems Directors of Autonomy Accreditation, (L-R) Mark Brady (Land), Rachel Horne (Maritime) and Tom Putland (Air) 

 

The Australian Government recognises that artificial intelligence (AI) will have enormous social and economic benefits for all Australians. The Department of Industry, Science, Energy and Resources (DISER) is consulting on the development of an Artificial Intelligence Action Plan to help maximise the benefits of AI for all Australians and manage the potential challenges. To that end DISER has developed a discussion paper and invited submissions to the AI Action Plan.

AI is the key underpinning of autonomous systems. The Trusted Autonomous Systems Defence CRC (TAS-DCRC) is uniquely placed to provide a submission, noting our depth of experience and leadership in this field, and our multi-disciplinary team including scientists, engineers, ethicists and philosophers, lawyers, and academics. We have a broad focus on many facets of Artificial Intelligence (AI), and our work covers ethical, assurance, technical, and practical perspectives.

Through our common good activities, specifically Activity 2: Assurance of Autonomy, we are actively working to provide a better understanding for Australians about the assurance and accreditation pathways for autonomous systems, in the maritime, land and air domains, and are working with the regulators and key stakeholders to improve those paths.

Our goal is to improve the innovation pipeline, making it faster to design, build, test, assure and certify autonomous systems, while maintaining warranted trust and safety. This work will unlock the many safety, environmental and efficiency benefits autonomous systems can bring, boost Australian jobs, and cement Queensland’s status as the ‘Smart Drone State’ of Australia.

We aim to attract more international participants to Australia to use our country’s world-class large-scale test ranges, and to generate business for the numerous AI entrepreneurs in Queensland and Australia more broadly.

Download the TAS-DCRC submission to the AI Action Plan here.