Thinking Swarms: Artificial Agency, Teaming, Emergence & Governance – Call for Abstracts

Description 

The synthesis of swarms has evolved over 30 years, from simple rule-based ‘artificial life’ simulations to reports of increasingly complex, cognitive, and numerous small autonomous platforms. Despite this progress, technical challenges and fundamental questions remain. We propose to take stock of progress and seek answers to questions such as: How are agency and risk traded for the individual and the collective? How might swarms use higher-order symbolic and semantic reasoning? How might human operators govern such systems, and ensure behaviours are bounded and ‘controlled’? How might swarms, teams of robots and hybrid swarm-teams allocate and coordinate action, distribute labour, observe and manifest hierarchies, and self-monitor global properties and goals? What ethical, legal and safety frameworks and governance should ‘thinking’ swarms comply with? There is an opportunity to progress answers to these questions through presentations at invited workshops and the subsequent production of a peer-reviewed edited book. 

Image Copyright: https://unanimous.ai/two-new-patents/

Workshops 

A set of virtual and/or physical workshops will be held in Q1 2022, led by Trusted Autonomous Systems and UNSW Canberra. Each workshop session will run for 60 minutes: three 10-minute presentations plus a 30-minute panel Q&A. Presenters and workshop themes will be determined through an EOI process, with a workshop program announced by Fri 26 Nov 2021. Workshop presenters will be invited to submit a chapter for a peer-reviewed edited book. 

Aim 

The aim of the workshops is to bring together a broad range of experts (including technical and engineering; cognitive science and artificial intelligence; ethical, legal, and safety) to consider technical challenges and fundamental questions about ‘thinking swarms’. 

Purpose 

The purpose of the workshops is to co-contribute content to shape an academic book, ‘Thinking Swarms: Agency, teaming, emergence and governance’. 

Outcomes 

  1. Build a ‘Thinking Swarms’ Community of Practice 
  2. Develop a conceptual framework, themes, and topics to frame the Thinking Swarms edited book 
  3. Invite a set of authors to write chapters for the Thinking Swarms edited book 

Topics 

  • Agency 
  • Artificial Intelligence 
  • Bi-directional cognitive awareness 
  • Contextual awareness 
  • Distributed Artificial Intelligence 
  • Emergence 
  • Ethics of AI 
  • Governance and control  
  • Verification  
  • Human Swarm Teaming 
  • Human performance 
  • Interpretability 
  • Legal Issues 
  • Machine education 
  • Maintainability 
  • Metrics 
  • Multi-agent systems 
  • Novelty 
  • Ontology 
  • Predictability 
  • Relationships 
  • Reliability 
  • Risks 
  • Safety 
  • Security 
  • Swarm Guidance 
  • Swarms 
  • Transparency 
  • Trust 
  • Viability 

 

Contact 

If you are interested in presenting at the workshops, please email a presentation title, 150-word abstract and 100-word bio to the organisers by Fri 29 Oct:  

Kate Devitt, Kate.Devitt@tasdcrc.com.au

Jason Scholz, Jason.Scholz@tasdcrc.com.au

Simon Ng, Simon.Ng@tasdcrc.com.au 

Hussein Abbass, H.Abbass@unsw.edu.au 

Enabling COLREGs Compliance for Autonomous & Remotely Operated Vessels

By Robert Dickie¹ and Rachel Horne²

¹Group Leader, Systems Safety & Assurance, Frazer-Nash Consultancy Ltd 

²Assurance of Autonomy Activity Lead, Trusted Autonomous Systems (TAS)

Autonomous vessels of various sizes, forms and speeds are already at sea, on the surface and beneath it. The International Regulations for Preventing Collisions at Sea (COLREGs), published by the International Maritime Organization (IMO) in 1960 and updated in 1972, govern the ‘rules of the road’ at sea. COLREGs describe the features that vessels must have to facilitate being seen and identified, define means of communication between vessels for the purposes of signalling intent, and, most importantly, describe the navigational behaviours expected of vessels in proximity to one another, for the purposes of avoiding collision. It is clear from the terms and phrases used in COLREGs that the authors did not conceive of navigational or operational decisions being made by computers, so compliance for autonomous vessels is difficult and not well understood.

Autonomous systems technology does not replicate humans; rather, it emulates some of their skills using a different set of ‘senses’ and decision-making processes, and brings new capabilities to operations. This means that humans and autonomous technologies have different weaknesses, strengths, risks and mitigations.

The autonomous maritime industry has been wrestling with the challenge of ‘compliance’ with COLREGs for years, in terms of both understanding how it applies, and how to demonstrate compliance. The challenge for the designer or operator of an autonomous vessel is that the regulations are phrased from the underlying assumption that a human is operating the vessel. Where an autonomous control system is performing some or all of the functions a human would previously have performed, it can be difficult to work out what constitutes ‘compliance’, in a practical sense, and in a way that the regulator, the Australian Maritime Safety Authority (AMSA), would accept. This difficulty can lead to additional costs, delay, and operations which are subject to more limitations than may be reasonable based on the actual risks presented.

Developing one-off COLREGs compliance cases for a single autonomous vessel is onerous for the designer or operator, and also causes AMSA difficulty in terms of the resources required to assess the compliance case, and ensure consistency in regulatory decision making. There are significant efficiencies to be gained for designers, operators, and regulators of autonomous vessels in the development of a repeatable compliance framework designed to reduce these burdens.

New TAS project to address COLREGs challenge

The TAS Assurance of Autonomy team have commenced a new project aimed at addressing the COLREGs challenge by developing an enabling framework which supports a practical and appropriate level of compliance for autonomous vessels. The project, known as the ‘TAS COLREGs project’, will consider the specific risks posed by the full spectrum of autonomous vessels, thus allowing flexibility in the operator’s approach to compliance. To maximise usefulness and future-proofing, the approach will be to offer a range of risk mitigation options which address the ‘spirit’ of COLREGs, rather than defining a set of performance requirements or specification with which all autonomous vessels must comply.

 

Figure 1: Guiding principles for the TAS COLREGs project

The guiding principles for the TAS COLREGs project (see Figure 1) are:

  • The compliance methodology will be repeatable and scalable across a broad range of autonomous vessels.
  • The underpinning philosophy of the compliance methodology will be logical, reasoned and justified by argument.
  • The compliance methodology will be enabling rather than constraining whilst upholding the purpose and spirit of COLREGs.
  • The process for using the compliance methodology will be simple to follow and supported by guidance.
  • The project aims to develop a methodology which can be agreed to by the regulator.

Outputs of the TAS COLREGs project include an operational tool and user guidance.

Stakeholder engagement will occur over the coming weeks to ensure the tool is fit for purpose and considered acceptable and appropriate by AMSA. This tool will be trialed with a variety of autonomous vessels in 2021, and then refined as necessary, before being released.

The TAS COLREGs project is being delivered by Frazer-Nash Consultancy and led by Robert Dickie, with the support of Marceline Overduin and Andrejs Jaudzems. Robert has extensive experience in the assurance of maritime autonomous systems, having been a member of the UK Maritime Autonomous Systems Regulatory Working Group (MASRWG) and having developed the initial draft of the Lloyd’s Register Code for Unmanned Marine Systems.

Robert Dickie, Frazer-Nash Consultancy

The output of the TAS COLREGs project will be available on the TAS website as a useful resource for designers and operators in the form of a repeatable COLREGs compliance framework, supported by a tool and user companion guide.

The COLREGs resource will also be included in the Body of Knowledge Toolbox being developed by the TAS Assurance of Autonomy team under the National Accreditation Support Facility Pathfinder Project (NASF-P), which will assist designers, operators, regulators, and other stakeholders in the Australian autonomous systems ecosystem to navigate the assurance and accreditation process for autonomous systems efficiently and successfully.

If you would like to contact us in relation to the TAS COLREGs project, to offer feedback, suggestions, or request more information, please email us at NASFP@tasdcrc.com.au.

TAS researchers contribute to new publication – Autonomous Cyber Capabilities under International Law

Trusted Autonomous Systems are proud to promote the work of our Ethics and Law of Autonomous Systems researchers in the publication of ‘Autonomous Cyber Capabilities under International Law’.

The Activity’s legal lead, Professor Rain Liivoja (University of Queensland – Law and the Future of War), is a co-editor, and the volume features contributions from Activity researchers Dr Tim McFarland (University of Queensland – Law and the Future of War) and Mr Damian Copeland (IWR), co-authoring with Mr Julian Tattersall, in addition to many other respected contributors. The book was launched at a recent NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) event with co-editor Ms Ann Väljataga, a Law Researcher at NATO CCDCOE.

Launch Event @ NATO CCDCOE

The publication explores the international law aspects of autonomous cyber capabilities and their potential impact, developing a deeper understanding of their implications. The volume is an expansion of a 2019 research paper.

The publication is available here.

Congratulations to Rain and the Team!

Outcomes of successful webinar on TAS’s project to develop an Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous and Remotely Operated Vessels in 2021

By Maaike Vanderkooi and Rachel Horne

The Webinar

As part of the Trusted Autonomous Systems (TAS) project to develop an Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous and Remotely Operated Vessels, TAS held a Webinar on the project on Monday 5 July 2021. Fifty stakeholders attended the Webinar, including a mix of small, medium and large industry members, accredited marine surveyors, class societies, and universities.

During the Webinar, we presented the key findings of our review of the currently available codes of practice and standards for autonomous and remotely operated vessels.

Informed by these findings, we then outlined proposed principles to underpin the Australian Code of Practice, and provided information on the proposed direction of the Australian Code of Practice in key areas.

The presentation provided at the Webinar, including a summary of our findings, is available here: TAS Code of Practice Presentation.

Codes in Australian context, part of the linked presentation.

Attendees also had an opportunity to ask questions at the end of the presentation. A summary of the questions asked and answers provided is available here: TAS Code of Practice Webinar – Q and A.

Upcoming online workshops

Online workshops will be held in late July on draft content for the Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous and Remotely Operated Vessels. These workshops will provide an opportunity for stakeholders to contribute to the development of the Code by providing feedback on the draft content as well as more general input and ideas. It is essential that the Code is applicable and useful for the Australian maritime autonomous systems industry, and TAS is working closely with stakeholders, including AMSA, to achieve this vision.

Each workshop is directed towards a particular stakeholder group – see the list below – but you may register for any of the workshops.

Please only register for one workshop – they will all cover the same material. We are keeping each workshop to a manageable number of participants to ensure there is plenty of opportunity for everyone to provide their feedback and input.

New Workshops added due to demand (all stakeholders welcome):

Workshop 6:  Wednesday 28 July, 10.30am – 12pm (New)

Register here:  https://www.eventbrite.com.au/e/162624996743

Workshop 7:  Wednesday 28 July, 1pm – 2:30pm (New)

Register here: https://www.eventbrite.com.au/e/162626051899

Places remaining:

Workshop 4:  Monday 26 July, 10am – 11.30am (Limited positions remaining)

Workshop 4 is primarily aimed at accredited marine surveyors and Class Societies, but is open to all interested stakeholders.

Register here: https://www.eventbrite.com.au/e/161043765237

Workshop 5:  Monday 26 July, 12.30pm – 2pm (Limited Positions remaining)

Workshop 5 is primarily aimed at creators of autonomous technology, ship and equipment builders, and vessel operators, but open to all interested stakeholders.

Register here: https://www.eventbrite.com.au/e/161042022023

Sold out:

Workshop 1:  Thursday 22 July, 12.30pm – 2pm

Workshop 1 is primarily aimed at creators of autonomous technology, ship and equipment builders, and vessel operators, but is open to all interested stakeholders. Full

Workshop 2:  Friday 23 July, 10am – 11.30am

Workshop 2 is primarily aimed at Defence, Defence industry and government stakeholders, but is open to all interested stakeholders. Full

Workshop 3: Friday 23 July, 12.30pm – 2pm

Workshop 3 is primarily aimed at accredited marine surveyors and Class Societies, but is open to all interested stakeholders. Full

If you would like to find out more about our work, or provide feedback on where you see the key risks and opportunities for the autonomous systems industry in Australia, please contact us at NASFP@tasdcrc.com.au.

New TAS project to develop a Detect and Avoid (DAA) Design, Test and Evaluation (DT&E) standard for low-risk, uncontrolled airspace outside the airport environment

By Tom Putland – Director of Autonomy Accreditation – Air

The development of best practice policy, appropriate standards, and a strong assurance and accreditation culture has the potential to enhance innovation and support market growth for drones with autonomous abilities in the maritime, air and land domains.

The Trusted Autonomous Systems (TAS) National Accreditation Support Facility Pathfinder Project (NASF-P), under the Assurance of Autonomy Activity, represents an opportunity to unlock Queensland’s, and by extension Australia’s, capacity for translating autonomous system innovation into deployments, given Queensland’s existing test facilities, the industry need already identified, and strong government backing.

The overarching purpose of the NASF-P is to:

  • Make it easier to design, build, test, assure, accredit, and operate an autonomous system in Australia, without compromising safety;
  • Support and promote Queensland’s existing test ranges;
  • Encourage both domestic and international business to operate in, and use Queensland as a base for, testing, assuring, and accrediting autonomous systems; and
  • Investigate, design, and facilitate the creation of an appropriate, independent third-party entity that can continue to support the design, build, test, assurance, accreditation, and operation of autonomous systems in Australia, by bridging the gap between industry, operators, and regulators.

New project: Development of a DAA DT&E standard

In the air domain specifically, the largest impediment to the integration of unmanned aircraft (regardless of autonomy) into the National Airspace System (NAS) is complying with the intent[1] of the See and Avoid (SAA) requirements detailed within Regulations 163 and 163A of the Civil Aviation Regulations 1988 (Cth), particularly in Beyond Visual Line of Sight (BVLOS) operations. In lieu of a solution, operators and the regulator must go through a labour-intensive stakeholder engagement process with all aviation parties to prevent mid-air collisions. This approach will not scale to the projected numbers of Unmanned Aerial Systems (UAS) operations into the future.

An autonomous/highly automated DAA system that complies with the safety objectives of CAR 163 and CAR 163A is a key enabling technology for integration into complex Australian airspace and will form an integral part of the safety assurance framework for UAS operations into the future.

The TAS team have initiated a new project, led by Revolution Aerospace, to develop a new Detect and Avoid (DAA) Design, Test and Evaluation (DT&E) standard for low-risk, uncontrolled airspace outside the airport environment. This is particularly relevant to Australian unmanned aircraft operations.

Dr Terry Martin (CEO) and the Revolution Aerospace team have a wealth of world-leading experience in Detect and Avoid, Machine Learning, Safety Assurance, Verification & Validation (V&V), and Test & Evaluation (T&E), representing Australia at international forums such as NATO, JARUS, and RTCA (to name a few).

Dr Terrence Martin, CEO of Revolution Aerospace

Terry is supported in this project by TAS’s Tom Putland, Director of Autonomy Assurance in the Air Domain. Tom has extensive experience in the regulation and safety assurance of UAS, having previously worked for CASA and served as CASA’s representative at JARUS and other international working groups. This team has the expertise, drive, and ability to solve this critical airspace problem for Australia.

Tom Putland, Director of Autonomy Assurance – Air at TAS

The DT&E standard will create a process acceptable to CASA[2] that allows:

  • the derivation of high-level safety objectives;
  • the development of Verification and Validation (V&V) requirements;
  • the conduct of relevant simulations and tests to demonstrate compliance with the safety objectives; and
  • the collation of the compliance process and data into a package to support regulators (e.g. CASA) in issuing an approval for the operation.

TAS will engage closely with key stakeholders, including CASA, the Australian Association for Unmanned Systems (AAUS) and other industry members to ensure the process reflects current best practice, and is appropriate and useful for the Australian aviation industry. The intent is for the new process to be available for testing by the end of the year.

Upon completion of this project, a subsequent project will be undertaken to work with industry partners and the Queensland Flight Test Range, Australia’s first and only commercial flight test range, to utilise and comply with this standard. Completion of these projects will increase the access and flexibility available for unmanned aircraft operations in Australian airspace.

If you are interested in learning more about these projects, being involved as an industry test partner, or discussing specifics (e.g. EO/IR sensing, classifiers/learning assurance, alerting and avoidance logic, formal verification, T&E), please email tom.putland@tasdcrc.com.au.

Other NASF-P projects underway

The NASF-P team have a number of projects underway, including:

  • Preparation of a Body of Knowledge on the assurance and accreditation of autonomous systems;
  • Maritime domain: development of a repeatable, regulator-accepted methodology to demonstrate compliance with COLREGS for autonomous and remotely operated vessels; and
  • Preparation of a business case for a new, independent, National Accreditation Support Facility, based in Queensland, that will better connect operators and regulators to facilitate more efficient assurance and accreditation.

If you would like to find out more about our work, or provide feedback on where you see the key risks and opportunities for the autonomous systems industry in Australia, please contact us at NASFP@tasdcrc.com.au.

 

[1] Through an approval under Subregulation 101.073(2) and Regulation 101.029 of the Civil Aviation Safety Regulations 1998 (Cth).

[2] Regular engagement with CASA will help ensure the final process is acceptable to them, but note that it has not been approved at this point in time. In the future, the process is intended to form part of an acceptable means of compliance for a BVLOS approval.

Video series – Introduction to ethical robotics, autonomous systems and artificial intelligence (RAS-AI) in Defence, and pragmatic tools to manage ethical risks

by Dr Kate Devitt, Chief Scientist, Trusted Autonomous Systems

 

“Military ethics should be considered as a core competency that needs to be updated and refreshed if it is to be maintained”

Inspector General ADF, 2020, p.508

The Centre for Defence Leadership & Ethics at the Australian Defence College has commissioned Trusted Autonomous Systems to produce videos and discussion prompts on the ethics of robotics, autonomous systems and artificial intelligence.

These videos for Defence members are intended to build knowledge of theoretical frameworks relevant to potential uses of RAS-AI to improve ethical decision-making and autonomy across warfighting and rear-echelon contexts in Defence.

Major General Mick Ryan says that he can “foresee a day where instead of having one autonomous system for ten or a hundred people, the ADF will have a ratio that’s the opposite. We might have a hundred or a thousand for every person in the ADF”.

He asks, “how do we team robotics, autonomous systems and artificial intelligence (RAS-AI) with people in a way that makes us more likely to be successful in missions, from warfighting through to humanitarian assistance, disaster relief; and do it in a way that accords with the values of Australia and our institutional values?”

Wing Commander Michael Gan says, “robotics and autonomous systems have a great deal of utility: They can reduce casualties, reduce risk, they can be operated in areas that may be radioactive or unsafe for personnel. They can also use their capabilities to go through large amounts of data and be effective or respond very quickly to rapidly emerging threats”.

He goes on to say “however, because a lot of this is using some sort of autonomous reasoning to make decisions, we have to make sure that we have a connection with the decisions that are being made, whether it is in the building phase, whether it is in the training phase, whether it is in the data, which underpins the artificial intelligence, robotic autonomous systems”.

Trusted Autonomous Systems CEO, Professor Jason Scholz points out that “Defence has a set of behaviours about acting with purpose for defence and the nation; being adaptable, innovative, and agile; be collaborative and team-focused; and to be accountable and trustworthy to reflect, learn and improve; and to be inclusive and value others. All of these values and behaviours are included whether we are a ‘robotic and autonomous systems’ augmented force, or not”.

Managing Director of Athena Artificial Intelligence Mr Stephen Bornstein says, “When it comes to RAS-AI in Defence and ethics associated with them…. it’s very important to consider how a given company or a given AI supplier is establishing trust in that RAS-AI product”. He says that “ultimately, that assurance should be the most important thing before we start giving technologies to soldiers, seamen, or aircrew”.

Personnel engaging with the content should gain a clearer idea of how to reflect on ethical issues that affect human and RAS-AI decision-making in Defence contexts of use, including the limits and affordances of humans and technologies to enhance ethical decision-making, as well as frameworks to help with RAS-AI development, evaluation, acquisition, deployment and review in Defence.

The videos draw on a framework from The Defence Science & Technology report ‘A Method for Ethical AI in Defence’ to help Defence operators, commanders, testers or designers ask five key questions about the technologies they’re working with.

  1. Responsibility – who is responsible for AI?
  2. Governance – how is AI controlled?
  3. Trust – how can AI be trusted?
  4. Law – how can AI be used lawfully?
  5. Traceability – how are the actions of AI recorded?

The videos consider four tools that may assist in identifying, managing and mitigating ethical risks in Defence AI systems.

https://theodi.org/article/data-ethics-canvas/

The ‘Data Ethics Canvas’ by the Open Data Institute encourages you to ask important questions about projects that use data and to reflect on the responses, such as the security and privacy of the data collected and used, who could be negatively affected, and how to minimise negative impacts.

The AI Ethics Checklist ensures AI developers know the military context the AI is for, the sorts of decisions being made, how to create the right scenarios, and how to employ the appropriate subject-matter experts to evaluate, verify and validate the AI.

The Ethical AI Risk Matrix is a project risk management tool used to identify risks and describe their proposed treatments. The matrix assigns individuals and groups responsibility for reducing ethical risk through concrete actions on an agreed timeline and review schedule.
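The report does not prescribe a data format for the matrix. As a purely illustrative sketch (the `EthicalRiskEntry` fields, example risks and owners below are hypothetical, not drawn from the report), such a matrix could be kept as simple structured records of risks, treatments, owners and review dates:

```python
from dataclasses import dataclass

@dataclass
class EthicalRiskEntry:
    """One illustrative row of an ethical AI risk matrix (hypothetical schema)."""
    risk: str         # description of the identified ethical risk
    treatment: str    # concrete action proposed to reduce the risk
    owner: str        # individual or group responsible for the action
    review_date: str  # agreed timeline / review schedule
    status: str = "open"

def open_risks(matrix):
    """Return the entries whose treatment is still outstanding."""
    return [entry for entry in matrix if entry.status == "open"]

matrix = [
    EthicalRiskEntry(
        risk="Training data may under-represent some vessel types",
        treatment="Augment dataset and re-run validation scenarios",
        owner="Data team",
        review_date="2021-12-01",
    ),
    EthicalRiskEntry(
        risk="Operator unable to trace an automated decision",
        treatment="Add decision logging and a review workflow",
        owner="Systems engineering",
        review_date="2021-11-15",
        status="treated",
    ),
]

print(len(open_risks(matrix)))  # 1
```

Keeping the matrix as structured records (rather than free text) makes the assignment of owners and review dates queryable, which supports the agreed review schedule the tool calls for.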

International Weapons Review delivering Law for AI Basics workshops, part of the TAS Ethics Uplift Program

Article 36 of Additional Protocol I (1977) to the Geneva Conventions (1949) requires:

“In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party”.

The rise of Robotics, Autonomous Systems and Artificial Intelligence (RAS-AI) requires new methods to ensure compliance with the requirements of an Article 36 review of all new weapons, means or methods of warfare.

On 17 May 2021, Trusted Autonomous Systems (TAS) hosted the International Weapons Review (IWR) ‘Law for AI Basics’ course for TAS participants and associated research personnel. IWR’s legal experts introduced international and domestic legal issues relevant to the design and acquisition of AI systems for use by Defence in Australia, and identified legal inputs to ethical AI design in Defence.

The workshop covered Australian legal and ethical compliance requirements for trusted autonomous systems: the Article 36 review process and issues relevant to autonomous systems, the five facets of ethical AI in Defence (responsibility, governance, trust, law and traceability), and the requirements of the Legal and Ethical Assurance Program Plan (LEAPP). Workshops are available to stakeholders of Australian Defence, including Defence industry, government, universities and the ADF.

Human machine teaming with RAS-AI will be a key ADF capability in the future. RAS-AI may increase safety for personnel, removing them from high-threat environments; increase the fidelity and speed of human awareness and decision-making; and reduce the cost and risk to manned platforms.

RAS-AI development and investment must be informed by ethical and legal considerations and constraints. To achieve this, in February 2021 TAS commenced the Ethics Uplift Program (EUP) to provide immediate and ongoing assistance to TAS participants through consultation, advice and policy development, supported by case analysis, education and enculturation.

The training is designed to enable participants to understand, analyse and evaluate legal issues and risks relevant to the design and development of trusted autonomous systems, using case studies. This introductory course is aimed at technical staff responsible for the design and development of AI systems, and at managers responsible for oversight of technical staff. IWR, led by Dr Lauren Sanders and Mr Damian Copeland, offers unique expertise in international law relevant to the development of new weapons, means and methods of warfare, including Article 36 weapon review requirements.

Dr Lauren Sanders is a legal practitioner whose doctoral studies were in international criminal law accountability measures, and whose expertise is in the practice of international humanitarian law including advising on the accreditation and use of new and novel weapons technology. She has over twenty years of military experience and has advised the ADF on the laws applicable to military operations in Iraq and Afghanistan and domestic terrorism operations.

Damian Copeland is a legal practitioner whose expertise and doctoral studies are in the Article 36 legal review of weapons, specifically weapons and systems enhanced by artificial intelligence. He is a weapons law expert with over twenty-five years of military service, including multiple operational deployments, and has extensive experience in the application of operational law in support of ADF operations.

Learn more about the range of IWR services at https://internationalweaponsreview.com/

New TAS project to develop an Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous and Remotely Operated Vessels in 2021

By Rachel Horne – Assurance of Autonomy Lead/Director of Autonomy Accreditation – Maritime

Autonomous systems technology offers the ability to increase safety and efficiency while lowering economic and environmental cost. While some level of autonomy has been seen in commercial products for a number of years, for example the basic thermostat or the Roomba, the last five years have seen a rapid acceleration in the capability and availability of unmanned aerial vehicles, known as drones, and of uncrewed surface and sub-surface vessels, also called autonomous vessels.

For this rapid acceleration to continue, and to ensure this technology can integrate into commercial and defence operations, autonomous systems need to be trusted by the government, regulators, operators, and the broader community. An integral part of gaining trust is having a clear, well-tailored regulatory framework, consistent assurance requirements and agreed assurance methodology, and support from the regulator. These same factors also facilitate innovation and promote growth in industry by providing certainty.

Coral AUV. Image by AIMS

New project: Development of an Australian Code of Practice

The NASF-P (National Accreditation Support Facility Pathfinder) team have commenced a number of new projects to address the challenges outlined above. One of these projects is aimed at addressing the lack of tailored standards for autonomous and remotely operated vessels by developing an Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous and Remotely Operated Vessels. This Code will represent best practice, and is intended to provide certainty for industry by offering a set of regulator-acknowledged standards that industry can use to design, construct, survey and operate autonomous and remotely operated vessels. The Code of Practice will be voluntary, and will be updated periodically.

This project, led by Maaike Vanderkooi on behalf of TAS, will begin with a review of available Codes of Practice and Standards, for example the UK Maritime Autonomous Surface Ships (MASS) UK Industry Conduct Principles and Code of Practice, and Lloyd’s Register Unmanned Marine Systems Code. The project will then develop a draft Australian Code of Practice, using input from key stakeholders, which will then be released for broader public consultation. The intent is to release a draft Code of Practice by October 2021, which will be available for use by industry and the regulator.

Maaike Vanderkooi has been chosen to lead the project because of her extensive experience developing regulatory frameworks in the maritime, heavy vehicle and ports arenas, and her experience in developing, reviewing and impact-assessing commercial vessel standards.

Maaike Vanderkooi

TAS will engage closely with key stakeholders, including the Australian Maritime Safety Authority (AMSA), the Australian Association for Unmanned Systems (AAUS) Maritime Working Group, the Marine Surveyors Association Inc, and the Australasian Institute of Marine Surveyors, throughout this project to ensure the Code of Practice is practical and appropriate for use by Australian industry and the regulator. There will also be opportunities for input by interested parties throughout the project.

Engagement opportunities

  • We are looking for people with direct experience applying current Codes of Practice or Standards to autonomous and remotely operated vessels, to discuss their experience and provide feedback to us in May 2021;
  • We will hold a series of workshops with key stakeholders between May and August 2021; and
  • We will release the draft Code of Practice for public consultation in August 2021, and welcome all thoughts and feedback.

If you would like to contact us in relation to this project, to offer feedback, suggestions, or your assistance, please email us at NASFP@tasdcrc.com.au.

QUT WAM-V in operation at AIMS. Image by AIMS

Other NASF-P projects underway

The NASF-P team have a number of projects underway, including:

  • Preparation of a Body of Knowledge on the assurance and accreditation of autonomous systems;
  • Air domain: development of an end-to-end acceptable process for the design, build, test and evaluation of autonomous detect and avoid (DAA) systems for certain types of airspace;
  • Maritime domain: development of a repeatable, regulator-accepted methodology to demonstrate compliance with COLREGS for autonomous and remotely operated vessels; and
  • Preparation of a business case for a new, independent, National Accreditation Support Facility, based in Queensland, that will better connect operators and regulators to facilitate more efficient assurance and accreditation.

The NASF-P team recently worked with Queensland AI Hub, Australian Institute of Marine Science, and AMC Search, supported by Advance Queensland, to deliver a world-first pilot course ‘Autonomous Marine Systems Fundamentals for Marine Surveyors’. This course, which was created to address the gap in experience with autonomous marine systems amongst the accredited marine surveyor community, had nine participants from around Queensland.

Participants of the pilot course at AIMS, March 2021. Image by TAS

If you would like to find out more about our work, or provide feedback on where you see the key risks and opportunities for the autonomous systems industry in Australia, please contact us at NASFP@tasdcrc.com.au.

TAS Research Fellows (three of the four) featured in a University of Queensland Blog

Read about the work of three of the Trusted Autonomous Systems Research Fellows here.

A Method for Ethical AI in Defence

Today the Australian Department of Defence released ‘A Method for Ethical AI in Defence’, an outcome of a 2019 workshop attended by over 100 representatives from Defence, other Australian government agencies, industry, academia, international organisations and the media. The workshop was facilitated by Defence Science & Technology Group, RAAF Plan Jericho and Trusted Autonomous Systems Defence Cooperative Research Centre. Defence notes that the report outlines a pragmatic ethical methodology for communication between software engineers, integrators and operators during the development and operation of Artificial Intelligence (AI) projects in Defence.

Trusted Autonomous Systems CEO Professor Jason Scholz said “Trusted Autonomous Systems are very pleased to partner with Defence on this critical issue of ethics in AI. Ethics is a fundamental consideration across the game-changing Projects that TAS are bringing together with Defence, Industry and Research Institutions.”

AI and human machine teaming will be a key capability in the future of Australian Defence systems. Chief Defence Scientist Tanya Monro notes “…AI technologies offer many benefits such as saving lives by removing humans from high-threat environments and improving Australian advantage by providing more in-depth and faster situational awareness”.

Air Vice-Marshal Cath Roberts, Head of Air Force Capability said “artificial intelligence and human-machine teaming will play a pivotal role for air and space power into the future… We need to ensure that ethical, moral and legal issues are resolved at the same pace as the technology is developed. This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control”. These comments are equally important to the other service arms.

In 2019, the Trusted Autonomous Systems Defence CRC (TASDCRC) commenced a six-year Programme on the Ethics and Law of Trusted Autonomous Systems valued at $9M. Over the past two years the activity has conducted workshops, engagements and consultation with participants and stakeholders of the Centre, contributing to ADF strategy, producing diverse publications and influencing the design of trusted autonomous systems such as the game-changing Athena AI ethical and legal decision support system.

From 2021 the Ethics Uplift Program (EUP) of the TASDCRC will offer ongoing assistance to Centre participants through consultation, advice and policy development, supported by case analysis, education and enculturation.

Trusted Autonomous Systems affiliate researchers and employees participate in a wide range of events on the ethics and law of RAS-AI, including those convened by the ICRC, UNIDIR, SIPRI, and NATO.

TASDCRC is a non-government participant in the United Nations (UN) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), working to ensure that the development of autonomous systems accords with ethical principles and the law of armed conflict (LOAC), and complies with Article 36 weapons reviews.

The Defence Media Release reinforced that “The ethics of AI and autonomous systems is an ongoing priority and Defence is committed to developing, communicating, applying and evolving ethical AI frameworks”. Trusted Autonomous Systems are a partner to Defence on that journey. More details at https://www.dst.defence.gov.au/publication/method-ethical-ai-defence