How to Plan a Successful Autonomous Systems Demonstration

Australian Droid & Robot demonstrating at the EPE MILTECS Facility at CSIRO and Queensland Defence Science Alliance ‘Human Teaming and Response Robotics Standardised Testing, Evaluation and Certification Interactive Showcase’, Pullenvale, 21 April 2022

Demonstrating the viability of artificial intelligence (AI) requires thoughtful construction and communication of both social and technical aspects.

Demonstrations are both scientific experiments and social events designed around achieving buy-in for the technology. When it comes to AI, demonstrators face the challenge of conveying the smarts of the system and the role of human intent.

Two challenges of demonstrating the potential of AI are:

  • Performing decision-making: working out how best to show cognitive and social decision-making through complex autonomy demonstrations, in a way that makes both intelligent performance and errors understandable
  • Showcasing the ability to abide by human values and intent: working out how best to design a demonstration that shows a system’s ability to abide by decision-making norms, including commander’s intent, military objectives, and ethical, legal, and safety-focused frameworks

Demonstrations are performances that include social and technical elements, such as: actors (humans, UI, software, networked systems and machines); enabling technologies (mechanics, comms, screens, instructions, consoles, connectivity, controls, batteries/fuel, security, lights, tents, generators); rituals (cultural behaviours, safety processes, signals); a choreographed narrative (CONOPS, narrative flow including objectives, cause and effect, conflict and emotions); planned, opportunistic and incidental interactions (stories, networking); evaluative criteria (expectations, key performance indicators); and goals (training/education, buy-in, positive emotions, media and communication outcomes, future investments, lessons learnt).
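For planning purposes, these elements can be treated as a structured checklist. Below is a minimal Python sketch that models the element categories as a data structure whose empty fields reveal what still needs planning; the class, field names, and gaps() helper are illustrative assumptions, not the published Canvas itself.

```python
from dataclasses import dataclass, field

@dataclass
class DemonstrationPlan:
    """Illustrative checklist of a demonstration's social and technical elements.

    Field names mirror the element list above; this is a planning-aid sketch,
    not the Autonomous Systems Demonstration Canvas itself.
    """
    actors: list[str] = field(default_factory=list)                 # humans, UI, software, machines
    enabling_technologies: list[str] = field(default_factory=list)  # comms, screens, power, security
    rituals: list[str] = field(default_factory=list)                # safety processes, signals
    narrative: list[str] = field(default_factory=list)              # CONOPS, objectives, conflict, emotions
    interactions: list[str] = field(default_factory=list)           # planned, opportunistic, incidental
    evaluative_criteria: list[str] = field(default_factory=list)    # expectations, KPIs
    goals: list[str] = field(default_factory=list)                  # buy-in, media outcomes, lessons learnt

    def gaps(self) -> list[str]:
        """Return element categories that have not yet been planned."""
        return [name for name, values in vars(self).items() if not values]

# A half-finished plan immediately shows what still needs attention.
plan = DemonstrationPlan(actors=["operator", "UGV", "ground control UI"],
                         goals=["investor buy-in"])
print(plan.gaps())  # ['enabling_technologies', 'rituals', 'narrative', ...]
```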

The TAS Ethics Uplift Program is conducting a research project that will develop and test an Autonomous Systems Demonstration Canvas to help optimise human understanding and buy-in for technology developers and investors. The Canvas will help tackle the opacity problem facing AI: for instance, the way AI can obscure the rules, reasoning, and justifications underlying human-machine decision making as it occurs.

The first iteration of the ‘Autonomous Systems Demonstration Canvas’ V.1.1 (ASDC) was launched at the Queensland Defence Science Alliance (QDSA) Human Teaming and Response Robotics Standardised Testing, Evaluation and Certification Interactive Showcase at CSIRO’s Queensland Centre for Advanced Technologies’ MILTECS facilities on 21 April 2022.

The Canvas provides a scaffolding framework to help technology developers efficiently prepare, practise, and perform impactful technology demonstrations to diverse stakeholders to achieve diverse goals including attracting investment, showing technical progress, and getting publicity.

The intention is for the Canvas to inform future trials as a tool to support impactful demonstration planning for TAS programs. For more information on the Autonomous Systems Demonstration Canvas contact Dr Kate Devitt kate.devitt@tasdcrc.com.au.

Release of the Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous & Remotely Operated Vessels

By Rachel Horne, Assurance of Autonomy Activity Lead, TAS

Trusted Autonomous Systems (TAS) has released the Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous & Remotely Operated Vessels, Edition 1 (‘Australian Code of Practice’).

The Australian Code of Practice provides a best practice standard tailored for autonomous and remotely operated vessels operating in Australia. It is a voluntary standard, developed in close consultation with the Australian Maritime Safety Authority (AMSA), intended to be used to support assurance, accreditation, and safe operations.

Access the Australian Code of Practice and Guidance Materials via the TAS website.

The development of the Australian Code of Practice was informed by an analysis of existing, publicly available codes and guidelines for autonomous and remotely operated vessels[1], significant stakeholder engagement, and public consultation.

TAS, supported by AMC Search, has also developed a suite of Guidance Materials to support the use of the Australian Code of Practice. These Guidance Materials explain how the Australian Code of Practice fits into the existing Australian maritime regulatory framework, how to use the Code, and what the requirements are for each category of vessel, together with providing examples and suggestions on where to access further information.

TAS encourages owners, operators, surveyors, regulators and other users of the Australian Code of Practice and Guidance Materials to provide feedback to TAS, to help inform future iterations.

Where can I get more information?

More information on the Australian Code of Practice and the Guidance Materials is available via the TAS website, or by contacting info@tasdcrc.com.au.

TAS would like to thank all parties who contributed to the development of the Australian Code of Practice, including particularly Maaike Vanderkooi of Vanderkooi Consulting who led the development of the Code on TAS’s behalf, Rob Dickie of Frazer Nash Consultancy who led the COLREGs project on TAS’s behalf, together with his team Marceline Overduin and Andrejs Jaudzems, and Chris White from AMC Search who led delivery of the Guidance Materials, together with his team Reuben Kent, Damien Guihen, and Nick Bonser.

This project received funding support from the Queensland Government through Trusted Autonomous Systems (TAS), a Defence Cooperative Research Centre funded through the Commonwealth Next Generation Technologies Fund and the Queensland Government.


[1] UK Code of Practice for Maritime Autonomous Surface Ships, the LR Code for Unmanned Marine Systems, and DNV GL’s Autonomous and Remotely-operated Ships Class Guideline

TAS and Participants looking forward to IndoPac 22

As Indo Pacific 2022 commences next Tuesday, Trusted Autonomous Systems (TAS) is pleased to announce presentations and two new Centre projects, established in partnership with the Royal Australian Navy (RAN), Warfare Innovation Navy (WIN) Branch.

TAS Presentations at IndoPac

TAS CEO Professor Jason Scholz will be presenting ‘Accelerating maritime trusted autonomous systems’, 1600 Tuesday 10 May at the International Maritime Conference.

TAS Assurance of Autonomy lead – Maritime, Rachel Horne will be presenting on ‘An Australian code of practice for autonomous and remotely operated vessels’, 0900 Wednesday 11 May at the International Maritime Conference, and ‘Improving the Regulatory Experience for Autonomous and Remotely Operated Vessels: TAS Regulatory Project Overview’, 1255 Wednesday 11 May at Autonomy in the Maritime Domain.

Robert Dickie of Frazer-Nash will be presenting on the TAS Assurance of Autonomy COLREGs project, ‘An enabling framework to support COLREGS compliance for autonomous and remotely operated vessels’, 0930 Wednesday 11 May at the International Maritime Conference.

New Projects!

Ocius – iDrogue

Through disruptive innovation, Warfare Innovation Navy (WIN) Branch enables the Royal Australian Navy to be at the forefront of asymmetric warfighting for joint integrated effects. The iDrogue project, initiated by Trusted Autonomous Systems, led by Ocius Technology, and funded by WIN Branch, was established to develop and demonstrate a novel Autonomous Underwater Vehicle (AUV) launch and recovery system. Ocius, a leading Australian innovator, is partnered with the Australian Maritime College and the University of New South Wales on this exciting project. This pilot project is being conducted over 12 months through 2022.

  • The ultimate aim, with further funding, is to develop an intelligent robot based on biomimicry that can launch and recover ‘any AUV, from any platform in virtually any sea state’
  • AUVs are in increasing use by modern navies. The current method of launching and recovering AUVs is undertaken by humans at the sea surface level.
  • This pilot program will exploit advanced robotics and autonomy to undertake these functions at calm depth and without human involvement. Over the next six months the iDrogue will be automated and the design reviewed.

____________

  • This project contributes to RAN sea superiority with a capability that integrates with current and future fleets and allied capabilities.
  • The graphics on the stand represent human-machine teaming and human control.
  • It is an industry led (Ocius) project funded by WIN Branch and overseen through Trusted Autonomous Systems.
  • Ocius partners include AMC Search, UNSW and Southern Ocean Subsea (SOSUB).
  • Ocius Technology has developed a range of uncrewed platforms; more detail on their range is available on the Ocius website.
  • Trusted Autonomous Systems (TAS) was established through the Next Generation Technologies Fund (NGTF) to accelerate autonomous systems development for Defence. The TAS vision is ‘smart, small & many’ and projects cover all domains.

Dr Robert Dane of Ocius Technology will be presenting ‘Persistent autonomous data gathering and monitoring – a future vision for autonomous ocean observations’, 1030 Thursday 12 May at the International Maritime Conference.

Ocius and a prototype iDrogue are represented with TAS on the Royal Australian Navy Capability – Autonomous Systems stand 3J24. Ocius will also have a presence on additional stands.

Austal – Patrol Boat Autonomy Trial

The Patrol Boat Autonomy Trial, led by Austal on behalf of the Royal Australian Navy Warfare Innovation Navy (WIN) Branch, will establish robotic, automated and autonomous elements on a decommissioned patrol boat. This will provide a proof-of-concept demonstrator for optionally crewed or autonomous operation, and explore the legal and regulatory pathways and requirements.

Austal is uniquely placed to undertake this project as the original designer and builder of the Armidale-class vessels, and has partnered with L3Harris on the project. This project presents a significant opportunity to inform current and future maritime capability acquisition, and to build sovereign Australian capability in the autonomous maritime platform domain. It will pave the way for further work to achieve sustained and sustainable optimal crewing, to improve the safety of Australian Defence Force (ADF) personnel, and to expose the Naval workforce to these technologies and other elements of the Navy RAS-AI Strategy 2040, including normalising human-machine teaming.

Mr Tim Speer of Austal will be presenting ‘Australian defence opportunities for un-crewed vessels’, 1700 Tuesday 10 May at the International Maritime Conference. Austal is represented at Indo Pac 22 at stand 2H6.

Existing Project

MCM in a Day Thales project underway with TAS

Project participants Thales are also at Indo Pac 22. Thales is partnering with DST Group, academia (Flinders University, University of Sydney, University of Technology Sydney, and Western Sydney University) and Australian SMEs (INENI Realtime and Mission Systems) in the development of new autonomous technologies that promise to revolutionise mine clearance in littoral operations. The Mine Counter-Measures ‘MCM in a Day’ project will design, develop, test and evaluate various teams of micro Autonomous Underwater Vehicle (AUV) swarms and Autonomous Surface/Subsurface Vessels (ASVs) to deliver coordinated, multi-robot autonomous mine clearance technology to support and assist amphibious zone preparation. This new approach has the potential to support a significant operational step-change for the Royal Australian Navy by removing ADF members from harm’s way and accelerating the speed of mission execution. The work leverages and benefits from Thales’ existing experience in the field of autonomous Mine Counter-Measures systems. You can find Thales at stand 3A6.

Generally

There are a range of other TAS Participants present at IndoPac 22. TAS staff will be present throughout the event, and in particular at the iDrogue display on stand 3J24.

TAS is a Defence Cooperative Research Centre funded through the Commonwealth Next Generation Technologies Fund, and also receives funding support from the Queensland Government. You can find out more about TAS by attending our Symposium, ‘Accelerating cross-domain autonomy’, 15 June in Brisbane.

Release of COLREGs Operator Guidance Framework

By Rachel Horne, Assurance of Autonomy Activity Lead, TAS

TAS and Frazer-Nash Consultancy have developed a COLREGs Operator Guidance Framework to make it easier to understand and comply with International Regulations for Preventing Collisions at Sea (COLREGs) when operating autonomous and remotely operated vessels. This Framework is available for standalone use, or as an annex to the new Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous and Remotely Operated Vessels.

The COLREGs Operator Guidance Framework translates the stated and unstated capabilities described, and the terminology used, in COLREGs into a vocabulary and format that makes sense for autonomous and remotely operated vessels. It is intended to be an enabling framework to:

  • Help vessel designers understand what capabilities COLREGs requires vessels to have;
  • Help operators understand what capabilities COLREGs requires and how mission planning can mitigate or remove the need for solving some of the more complex elements of COLREGs; and
  • Help regulators apply a consistent methodology for assessing the capability of a vessel with regards to COLREGs.

The COLREGs framework

Download the COLREGs Operator Guidance Framework on the TAS website

The intent is that the information gathered using the COLREGs Operator Guidance Framework will be used to inform regulatory approval processes and operational planning.

The COLREGs Operator Guidance Framework is currently presented as a PDF, which is best printed in A3. It is also being converted to a digital tool in a collaboration between TAS, Frazer-Nash, and Aginic over the coming months.

Using the COLREGs Operator Guidance Framework

The recommended use of the COLREGs Operator Guidance Framework for an operator with a specific vessel and proposed operation in mind is as follows:

  1. Download and print out the COLREGs Operator Guidance Framework in A3 colour
  2. Download and fill out the Design Record Template, to ensure you have documented the capabilities of your vessel
  3. With your vessel particulars and the details of your proposed operation in mind, review the framework document, reading from left to right, and identify:
    1. When each rule in COLREGs applies (i.e. some only apply in specific contexts like when in Narrow Channels)
    2. The capabilities required to comply with each specific rule, broken down into the categories of Sense and Perceive, Decide, and Act (noting that these could be in the vessel, the control centre, or a combination)
    3. Mission constraints that could be implemented if you don’t have the capabilities to comply with a specific rule, to remain in compliance (for example, if you don’t have the capabilities needed to comply with Rule 9 – Narrow Channels, you may plan to avoid narrow channels, and therefore remain in compliance with COLREGs)
    4. The suitable method of compliance for each rule (for example, for Rule 5 – Lookout, the proposed evidence of compliance is Design Checklist and Simulation)
  4. Review your analysis and prepare, for your own records, a list of applicable rules for your vessel and proposed operation, the corresponding required capabilities, any operational limitations that need to be imposed, and the recommended evidence type. You may then wish to provide your filled-out Design Record and your analysis against the COLREGs Operator Guidance Framework to AMSA to support your application for exemption and/or certification. You can also review it when conducting operational planning to ensure you remain COLREGs compliant. A sketch of this rule-to-capability mapping follows this list.
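Below is a minimal sketch of that rule-to-capability mapping. The RuleEntry fields mirror the framework’s columns (when a rule applies, required capabilities, mission constraints, method of compliance), but the specific entries, capability names, and the assess() helper are invented for illustration and are not the framework’s actual content.

```python
from dataclasses import dataclass

@dataclass
class RuleEntry:
    """One row of a rule-to-capability mapping, in the spirit of the framework.

    The entries below are invented examples, not the framework's real content.
    """
    rule: str
    applies_when: str                # context in which the rule applies
    required_capabilities: set[str]  # Sense and Perceive / Decide / Act capabilities
    mission_constraint: str          # fallback if the capabilities are missing
    evidence: str                    # suggested method of demonstrating compliance

FRAMEWORK = [
    RuleEntry("Rule 5 - Lookout", "always",
              {"detect vessels by sight", "detect sound signals"},
              "none: Rule 5 always applies",
              "design checklist and simulation"),
    RuleEntry("Rule 9 - Narrow Channels", "when navigating a narrow channel",
              {"detect channel limits", "keep to starboard side of channel"},
              "plan routes that avoid narrow channels",
              "design checklist"),
]

def assess(vessel_capabilities: set[str]) -> None:
    """Print, per rule, whether the vessel is capable or needs a mission constraint."""
    for entry in FRAMEWORK:
        missing = entry.required_capabilities - vessel_capabilities
        if missing:
            print(f"{entry.rule}: missing {sorted(missing)} -> constraint: {entry.mission_constraint}")
        else:
            print(f"{entry.rule}: capable -> evidence: {entry.evidence}")

# A hypothetical vessel that can keep a lookout but cannot handle narrow channels:
assess({"detect vessels by sight", "detect sound signals"})
```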

Further guidance materials, examples, and an instructional video will be released to support the use of the COLREGs Operator Guidance Framework in the coming months.

Further information

Background information on the project to develop the COLREGs Operator Guidance Framework, including the process the team used, is available in a Briefing Note prepared by Frazer-Nash.

Next steps

TAS will be working with Frazer-Nash and Aginic to develop a digital version of the COLREGs Operator Guidance Framework. TAS intends to make this digital version available through RAS-GATEWAY, a new online portal for assurance and accreditation information and support for autonomous and remotely operated vessels.

The TAS RAS-Gateway project is creating a digital hub to support the Australian autonomous systems sector, including operators and the testing and evaluation ecosystem. The Gateway will feature new methods, policies, practices, and expertise to support accreditation. It aims to address issues currently experienced by regulators, insurers, and technology developers by, for instance, filling gaps in standards and producing consistent (yet flexible) parameters for safe and trusted operations and improved agility to meet fast-changing technical and social licence needs.

In parallel with this digital development, the COLREGs Operator Guidance Framework will be tested through a trial at the Reefworks testing range in Townsville later in 2022.

TAS welcomes feedback on the COLREGs Operator Guidance Framework to info@tasdcrc.com.au.

 

TAS would like to thank all parties who contributed to the development of the COLREGs Operator Guidance Framework, including particularly Rob Dickie, Marceline Overduin and Andrejs Jaudzems of Frazer-Nash Consultancy for their smarts and creativity in identifying the best way to turn an idea into a tangible enabling framework, and then doing the hard analytical, Excel, and design work to make it happen.

This project received funding support from the Queensland Government through Trusted Autonomous Systems (TAS), a Defence Cooperative Research Centre funded through the Commonwealth Next Generation Technologies Fund and the Queensland Government.

 

 

Results of public consultation on the draft Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous & Remotely Operated Vessels

By Rachel Horne, Assurance of Autonomy Activity Lead, TAS 

Public consultation occurred on the draft Australian Code of Practice for the Design, Construction, Survey and Operation of Autonomous & Remotely Operated Vessels (‘Australian Code of Practice’) between 15 November 2021 and 15 December 2021.

Background information on the development of the Australian Code of Practice and the public consultation process is available on the TAS website.

TAS received seven written submissions from a diverse range of stakeholders, including SMEs developing vessels, government departments, and Recognised Organisations. TAS thanks all stakeholders for taking the time to review the draft Code and make submissions.

The submissions received were considered, and further advice was sought from third parties assisting with the project where needed, to determine where changes were required to the Code. Examples of the changes made to the Code post-consultation include:

  • the accuracy of sensors is now required to be determined and declared, and their performance is required to be monitored. This will help to ensure that vessels do not operate in conditions where the sensors are not sufficiently effective, or continue operating once sensors cease to be sufficiently effective;
  • the control system must now be able to be disabled and isolated to allow for inspection and maintenance activities;
  • for survey-exempt vessels and vessels in survey, the risk assessment of any novel system must now be reviewed by an accredited marine surveyor or Recognised Organisation. A note has been added which provides that review by a competent person may be sufficient for a survey-exempt vessel where the vessel, due to its size, speed and shape, poses a very low risk to the safety of persons and other vessels should a failure occur;
  • for survey-exempt vessels and vessels in survey, tests or trials must now be witnessed by an accredited marine surveyor or Recognised Organisation. A note has been added which provides that a competent person witnessing the tests or trials may be sufficient for a survey-exempt vessel where the vessel, due to its size, speed and shape, poses a very low risk to the safety of persons and other vessels should a failure occur; and
  • improved alignment of the Code with the AMSA Guidance Notice – Small unmanned autonomous vessels, including changing the guidance on the operational speed permitted for survey-exempt vessels from 12 knots to 10 knots.

A Consultation Feedback Report was prepared which summarises the consultation undertaken, the responses received, and the outcomes.

The Consultation Feedback Report is available to download.

Once the necessary changes were made to the Code, the updated draft was provided back to AMSA for further review, before it was confirmed as ready to be finalised as Edition 1.

TAS welcomes ongoing feedback from users of the Code, which will support further future iterations and improvements.

For further information please contact us at info@tasdcrc.com.au.

 

 

Advance Queensland funds new Trusted Autonomous Systems Projects

Queensland’s robotics, artificial intelligence and autonomous systems sector has been boosted thanks to significant funding from the state government.

The Advance Queensland – Trusted Autonomous Systems (TAS) grant has been awarded to three game-changing and research-backed projects.

These industry-led projects will increase the state’s capacity to build robotics, autonomous systems and artificial intelligence hardware and software.

Two of the projects involve the preservation of cultural art and language in Indigenous communities in north Queensland.

TAS CEO Professor Jason Scholz said its ongoing partnership with the state government highlighted both organisations’ continued leadership in drone and AI technology to grow small businesses in regional areas.

Professor Scholz said, “TAS and Advance Queensland are investing in RAS-AI projects with industry, non-profit organisations and universities to develop data and AI project methodologies for secure and trusted AI.”

TAS Chief Scientist Dr Kate Devitt said, “The technologies and methods developed, as well as the AI governance mechanisms applied, would place the state at the forefront of international best practice.”

The first project, led by KJR with the Western Yalanji Aboriginal Corporation, will work on human-machine teaming to identify and protect high-value cultural assets. Other partners on the project are Athena AI, Emesent, Flyfreely, MaxusAI, the Australian National University, University of Queensland, and Griffith University.

The second project is a collaboration between Revolution Aerospace and the Queensland University of Technology. The team will work on a low-cost cognitive electronic system hosted on an Uncrewed Aerial Vehicle (UAV).

The third project, from Pama Language Centre and the University of Queensland, will focus on developing AI and automation in language technologies with speech communities, and on providing training.

The grants were awarded after an extensive competitive process. Each will run for two years, until December 2023.

Project details

Human-machine teaming to identify and protect high value cultural assets

KJR and partners (Western Yalanji Aboriginal Corporation, Athena AI, Emesent, Flyfreely, MaxusAI, World of Drones Education Pty Ltd, and Griffith University) will develop a secure multi-platform human-machine teaming capability in Queensland through using semi-autonomous drones for data capture and machine learning for image classification to identify and protect Western Yalanji rock art.

Cognitive Payloads for Small UAV 

Revolution Aerospace is teaming with Queensland University of Technology (QUT) to build a low-cost cognitive electronic system hosted on an Uncrewed Aerial Vehicle (UAV).

AI and automation in language technologies: securing Queensland’s data sovereignty.

Pama Language Centre (PLC) and Janet Wiles, Ben Foley, and Ben Matthews at UQ will collaborate on a series of projects with speech communities, designing, developing and evaluating new language technologies, including a digital asset manager for language resources, tools to support sharing of augmented reality assets, and workshops to build capacity and support resource creation. The project will also extend the ARC Centre of Excellence for the Dynamics of Language (CoEDL) Transcription Acceleration Project (TAP) application of ‘transfer learning’ for speech recognition, aimed at reducing the training data and time needed to develop novel speech recognition systems. An example of an existing PLC project utilising Augmented Reality is available on the PLC website.

Building QLD capability

The industry-led projects will build robotics, autonomous systems and artificial intelligence hardware and software. Together they aim to deliver:

  • Transdisciplinary research impact and sovereign capability in smart, resilient, and deployable systems for congested and variable data environments
  • Next-generation AI/machine learning (ML) methods, with research applied into deployable systems
  • New models of data governance, data sovereignty, and assurance of ML pipelines
  • Technology integration to build trusted autonomy
  • Best-practice ethical AI through participatory design
  • Digital regulatory approvals

Outputs from the projects will include:

  • AI Integration, augmented reality, advanced signal processing, AI classifier acceleration, AI language technologies, AI to aid in preserving cultural assets, and education materials/programs to train regional workers in use of next gen technologies
  • AI for Sovereign capability, Defence and regional Queensland communities
  • Technologies, frameworks, and methodologies developed in these projects are applicable to Defence AI and autonomy requirements as per the 2020 Strategic Update and Defence Data Strategy 2021-2023.

Inferring meaning using synthetic language AI

By Dr Andrew D Back, TAS AQ Fellow

How can AI understand what I really mean?

Future AI will need more human-like capabilities, such as the ability to infer intent and meaning by understanding behaviors and messages in the real world. This includes detecting when someone is saying one thing but meaning another, or when spoken messages say one thing but our observations make us question what is really happening. First, though, we need to understand what we mean by meaning.

 

Fig. 1 In a world where communication is so prevalent, how do we understand the true meaning and importance of what is being said?

Meaning is often thought of in absolute terms, yet from human experience we know that there are many levels of meaning. For example, when a person says “I am going to eat an orange” or “yes, I agree”, the meaning is clear at the surface level, but with background knowledge we can also infer additional meaning. In the first case, the person might be implying something quite different from the mere act of eating a piece of fruit: perhaps they are recovering from serious illness, so the statement conveys that they are regaining their health and independence. In the second case, how do we really know whether there is agreement?

In general, the aim of discovering meaning in AI has often been pursued in the context of natural language processing. For example, Alexa hears a command and then acts on the precise meaning of the words “turn on the light”. But there is much more to meaning than simple translation of commands.

A problem for AI is being able to move beyond simple interpretation of inputs to assessing inputs within context. A picture might look like it is of a penguin, or the person might have said one thing, but is that really the case? How can AI assess multiple layers of meaning? A direct translation might seem very simple, but can readily become caught up in the semantics of human language. We know that meaning is dependent on issues such as context, attention, background knowledge and expectations.

Many would be familiar with the children’s game where you are shown a number of different items and then later asked to remember what you have seen. It’s difficult to recall beyond a small number. As humans we can read a situation very well in many cases but then miss it completely when there is too much data to comprehend.

In modern life, and particularly in areas of security and defence, there can be a wide range of data arising from complex socio-political systems, too much for humans to quickly and fully comprehend [1], yet assessments must still be made or we risk significant consequences [2].

So how can AI be equipped to understand true meaning in complex social, geo-political or socio-political situations [3]? We can comprehend what may be important without necessarily knowing the full picture, but how do we translate these ideas to AI? How do we build AI systems that can infer meaning? Is it possible to define meaning in a mathematical framework so that it can be implemented reliably in an AI system?

The problem that hasn’t yet been solved

Despite incredible successes, a problem with AI systems since the 1970s, and even to the present, is that they are brittle: they have difficulty adapting to new scenarios and displaying more intuitive, general intelligence properties [4,5]. Another significant problem is that they are opaque: when they do something, we often do not know why or how they arrived at their decision. In addition, current AI systems require enormous amounts of data and computation, yet we envisage a need for AI to make robust decisions quickly and with scant data, in much the same way that humans are often called to do.

The reason for these problems in AI is essentially two-fold: present AI systems tend to focus on the infinite richness of human expression, which leads to increasingly complex networks and vast data requirements; and, as this is done to achieve results, it becomes increasingly difficult to understand what the AI system has learned [6].

Regular machine learning can have difficulties when it lacks a sense of priorities, of the wider contextual importance of some events over others, and of the overall connectedness by which the pieces make up a “story” of thematic information. In essence, machine learning systems have become very good at recognizing images, but without knowing what those images mean or whether they can be trusted.

These problems are compounded when trying to understand the relationships between events: their distance, connectedness and behaviors can be difficult to capture and completely overlooked unless the system is precisely trained for them. But how can we train a system to look out for something we haven’t seen? Hence the problem for AI is to understand that meaning is inextricably linked to inter-related behaviors, not just learned features.

A new AI paradigm

To address this, we have proposed a new concept in AI called synthetic language, which has the potential to overcome these limitations. In this new approach, almost every dynamic system of behavioral interaction can be viewed as having a form of language [9-11]. We are not referring necessarily to human interaction, or even human language through natural language processing, but a far broader understanding of language, including novel synthesized languages which describe the behaviors of particular systems. Such languages might define the characteristics of artificial or natural systems using a linguistic framework in ways which have not been considered before.

Synthetic language is based on the idea that there can exist ‘conversations’ or dialogs where the alphabet size may be very small, perhaps only 5-10 symbols and yet still provide a rich framework in terms of vocabulary and topics where we can identify situations very rapidly, without the long training times typically associated with machine learning.

We suggest that in the context of synthetic language, a powerful construct is the idea of relative meaning, not necessarily at the human language level. For instance, if we would like to understand the meaning of behavioral interactions at a socio-political level, the synthetic language approach will effectively enable us to learn about the changes taking place in a relative sense, which is meaningful in itself. Even knowing that something unusual has occurred is a potentially important task.

Autonomously recognizing when something ‘looks different’ without training on specific examples when humans cannot comprehend quickly enough is likely to be particularly advantageous in defence applications [7]. A key advantage of the synthetic language AI approach is that it has the potential to be used autonomously to uncover hidden meaning in the form of tell-tale signs and true intent that humans might otherwise miss [8].
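As a minimal sketch of this idea, assuming a toy five-symbol alphabet: map each observed behavior to a symbol, then measure how far a recent window of symbols drifts from a long-run baseline. The alphabet, the Jensen-Shannon divergence statistic, and the escalation threshold below are illustrative stand-ins; the published approach builds on linguistically motivated entropy estimators [9-11] rather than this particular measure.

```python
import math
from collections import Counter

def distribution(symbols: str) -> dict[str, float]:
    """Empirical symbol distribution over a small synthetic-language alphabet."""
    counts = Counter(symbols)
    total = len(symbols)
    return {s: c / total for s, c in counts.items()}

def js_divergence(p: dict[str, float], q: dict[str, float]) -> float:
    """Jensen-Shannon divergence: a symmetric, bounded measure of distributional change."""
    m = {s: 0.5 * (p.get(s, 0.0) + q.get(s, 0.0)) for s in set(p) | set(q)}
    def kl(a, b):
        return sum(pa * math.log2(pa / b[s]) for s, pa in a.items() if pa > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Each symbol stands for a class of observed behavior, e.g. 'a' routine patrol,
# 'b' port visit, 'c' exercise, 'd' unusual manoeuvre, 'e' communications silence.
baseline = "aabacabaaacabaabacaa"  # long-run "usual" behavior
window = "addadadeadaddaeadd"      # recent observations

drift = js_divergence(distribution(baseline), distribution(window))
print(f"behavioral drift: {drift:.3f}")
if drift > 0.2:  # threshold chosen purely for illustration
    print("unusual behavior detected: escalate to a human analyst")
```

Even this crude statistic captures the essential move: the system flags that recent behavior ‘looks different’ from its own history, without having been trained on examples of the anomaly.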

Foreign relations: how can synthetic language AI help?

As an example of the types of problems we are interested in, suppose we would like to understand the true intent and meaning of states or state actors. In addition to diplomatic statements, there is a potential over-abundance of data coming from a wide range of sources such as military data, observational data, news sources, social media, operational data, intelligence reports, climate information and historical data [2].

We could apply synthetic language AI to consider behavioral interactions at a socio-political level, which form a meta-dialog or story. The hidden messaging elements of diplomatic statements, subtle behaviors of military forces such as relatively small changes of aircraft flight paths, and even hostile acts such as cyber-attacks form a synthetic language which can be compared against each other to determine areas of significance.

By comparing visible and invisible behavioral dialogs across multiple sources, synthetic language gives the possibility of assessing whether there are mixed messages, such as rhetorical bluster versus a potential hidden intent and in so doing, make assessments about potential threats (Fig. 2). If there is a consistency or even leading indicators of subtle events detected in say, operational behavior, this might add weight to diplomatic statements.

On the other hand, if diplomatic statements appear similar to their usual behavioral characteristics, then there may be less cause for alarm, despite the particular rhetoric. Clearly this approach can be extended to a wider range of sources, with the aim of detecting anomalies where there is a departure from the usual behaviors, whether toward more or less calm, for example.
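Continuing the sketch above, a cross-channel comparison might look like the following; the channel names, drift scores, and decision rule are again purely illustrative assumptions.

```python
def consistency_check(drift_by_channel: dict[str, float], threshold: float = 0.2) -> str:
    """Classify the combined picture painted by several behavioral 'dialogs'.

    Each score is a drift measure (e.g. the divergence from the previous sketch)
    of one channel against its own baseline.
    """
    calm = {ch for ch, d in drift_by_channel.items() if d <= threshold}
    agitated = set(drift_by_channel) - calm
    if not agitated:
        return "all channels near baseline: less cause for alarm, despite any rhetoric"
    if not calm:
        return "all channels unusual: consistent signals add weight to the statements"
    return f"mixed messages: {sorted(agitated)} diverge while {sorted(calm)} look routine"

# Diplomatic rhetoric looks routine, but operational and cyber behavior diverge.
print(consistency_check({"diplomatic": 0.05, "operational": 0.31, "cyber": 0.28}))
```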

Adopting this new synthetic language approach provides scope for understanding hidden messages and gaining meaningful insights into the behaviors of complex systems without necessarily knowing what we ought to look for. Socio-political anomalies detected in this way can be further processed at the next level of AI and escalated as required to human analysts, in an explainable way, for further consideration.

 

Fig. 2 Synthetic language provides the potential capability of detecting tell-tale signatures of unusual behaviors. For example, by comparing the hidden synthetic language of official diplomatic statements against operational and hostile synthetic language dialogs, we can potentially infer the deeper intent, such as rhetoric or otherwise. The red curve represents some hidden synthetic language which might appear in operational events such as flight paths, while the blue curve represents some visible synthetic language which might add weight to diplomatic statements.

References

  1. Gregory C. Allen, Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security, Center for a New American Security, pp. 3-4, 2019.
  2. Thomas K. Adams, ‘Future Warfare and the Decline of Human Decision making,’ The US Army War College Quarterly: Parameters, Vol. 41, No. 4, p. 2, 2011.
  3. Malcolm Davis, ‘The Role of Autonomous Systems in Australia’s Defence,’ After Covid-19: Australia and the world rebuild (Volume 1), pp. 106-09, 2020.
  4. James Lighthill, “Artificial Intelligence: A General Survey”, in J. Lighthill, N.S. Sutherland, R.M Needham, H.C. Longuet-Higgins and D. Michie (editors), Artificial Intelligence: a Paper Symposium, Science Research Council, London, 1973.
  5. Douglas Heaven, “Why deep-learning AIs are so easy to fool”, Nature, pp. 163-166, 574 (7777), Oct, 2019.
  6. Greg Allen, Understanding AI technology, Joint Artificial Intelligence Center (JAIC) The Pentagon United States, 4, 2020.
  7. Glenn Moy et al., Recent Advances in Artificial Intelligence and their Impact on Defence, 25 (Canberra 2020).
  8. Malcolm Gladwell, Talking to Strangers: What We Should Know about the People We Don’t Know, Little, Brown & Co, 2019.
  9. A. D. Back and J. Wiles, “Entropy estimation using a linguistic Zipf-Mandelbrot-Li model for natural sequences”, Entropy, Vol. 23, No. 9, 2021.
  10. A. D. Back, D. Angus and J. Wiles, “Transitive entropy — a rank ordered approach for natural sequences”, IEEE Journal of Selected Topics in Signal Processing, Vol. 14, No. 2, pp. 312-321, 2020.
  11. A. D. Back, D. Angus and J. Wiles, “Determining the number of samples required to estimate entropy in natural sequences”, IEEE Trans. on Information Theory, Vol. 65, No. 7, pp. 4345-4352, July 2019.
  12. A. D. Back and J. Wiles, “An Information Theoretic Approach to Symbolic Learning in Synthetic Languages”, Entropy, Vol. 24, No. 2, 259, 2022.

Ethics Uplift Program – Workshop on the ethics of autonomous systems

By Tara Roberson, TAS Trust Activities Coordinator & Kate Devitt, TAS Chief Scientist

A busy 2021 for TAS Ethics Uplift!

The Trusted Autonomous Systems (TAS) Ethics Uplift Program (EUP) supports theoretical and practical ethical AI research and provides advisory services for industry to enhance capacity for building ethical and deployable trusted autonomous systems for Australia’s Defence.

In 2021, the EUP celebrated the release of the technical report A Method for Ethical AI in Defence by the Australian Department of Defence, co-authored by TAS, RAAF Plan Jericho, and Defence Science & Technology Group (DSTG).

EUP conducted Article 36 review and international humanitarian law workshops, and produced a series of videos on the ethics of robotics, autonomous systems, and artificial intelligence for the Centre for Defence Leadership and Ethics (CDLE) at the Australian Defence College.

We appointed two new Ethics Uplift Research Fellows – Dr Christine Boshuijzen-van Burken and Dr Zena Assaad – and produced ‘A Responsible AI Policy Framework for Defence: Primer’.

2022 promises to be an even busier year with several projects underway, including a new suite of ten videos for CDLE.

 

Workshop in November 2021

In November 2021, Trusted Autonomous Systems (TAS) co-hosted their annual workshops on the ethics and law of autonomous systems with the UQ Law and the Future of War Research Group.

The ethics component of these workshops brought together ethics researchers, consultants, Defence, Government, and industry to discuss current best-practice approaches to ethics for robotics, autonomous systems, and artificial intelligence (RAS-AI) in Defence. It also showcased recent work from the TAS Ethics Uplift Program.

The workshop aimed to increase awareness of Australian artificial intelligence governance frameworks in civilian and military applications; share practical tips and insights on how to ensure ethical AI in Defence projects; and connect our ethics, legal, and safety experts.

We have summarised the key points from the workshop presentations. These summaries have also been visualised by sketch artist Rachel Dight in the image above.

 

Australia’s approach to AI governance in security and Defence

TAS Chief Scientist Dr Kate Devitt opened the workshop with a presentation on the Australian approach to AI governance in Defence, based on a forthcoming chapter in AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives (Routledge).

Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. The presentation summarised the various means used by Australia to govern AI in security and Defence. These include:

 

Ethical AI in Defence Case Study: Allied Impact

Christopher Shanahan (DSTG) presented on research led by Dianna Gaetjens (DSTG): a case study of the ethical considerations of the Allied IMpact Command and Control (C2) system, using the framework developed in A Method for Ethical AI in Defence.

The research found that the way AI is developed, used, and governed will be critically important to its acceptance, use, and trust.

The presentation recommended that Defence develop an accountability framework to enhance understanding of how accountability for decisions made with AI will be reviewed.

There also needs to be significant education and training for personnel who will use AI technology, and a comprehensive set of policies to guide the collection, transformation, storage, and use of data within Defence.

 

How to Prepare a Legal and Ethical Assurance Project Plan for Emerging Technologies Anticipating an Article 36 Weapons Review

Damian Copeland (International Weapons Review) presented on preparing a Legal and Ethical Assurance Project Plan (LEAPP) for AI projects in anticipation of Article 36 reviews.

A LEAPP is an evidence-based legal and ethical risk identification and mitigation process for AI projects above a certain threshold (i.e. AI projects that are subject to an Article 36 weapons review), recommended in A Method for Ethical AI in Defence.

The LEAPP sets out a contractor’s plan for ensuring the AI meets Commonwealth requirements for legal and ethical assurance.

LEAPP review should occur at all stages of a product’s life cycle, including during strategy and conception, risk mitigation and requirement setting, acquisition, and in-service use and disposal.

A LEAPP could directly contribute to Defence’s understanding of legal and ethical risks associated with particular AI capabilities.

 

Ethical Design of Trusted Autonomous Systems in Defence

Dr Christine Boshuijzen-van Burken – a new TAS Ethics Uplift Research Fellow – presented on her new project on ethical design of trusted autonomous systems in Defence.

An important part of the project will be devoted to answering the following question: what values does the Australian public prioritise when it comes to the design of AI for Defence? To achieve this, close attention will be paid to the needs, values, and concerns of diverse stakeholders, including developers, humanitarian organisations, Defence, and the Australian public.

This research aims to build an ethical framework based on value sensitive design which will help developers of autonomous systems in Defence to think about the ethical aspects of their technologies.


This project will also produce case studies on the use of the ethical framework.

 

Human-Machine Team (HUM-T) Safety Framework for cross-domain networked autonomous systems in littoral environments

Dr Zena Assaad – a new TAS Ethics Uplift Research Fellow – presented on her new project on the complex problem of human-machine teaming with robotics, autonomous systems, and AI.

Human-machine teaming broadly encompasses the use of autonomous or robotic systems with military teams to achieve outcomes that neither could deliver independently of the other.

This concept describes shared intent and shared pursuit of an action and outcome.

This research aims to develop a safety framework for human-machine teaming operations to enable safe, effective, and practical operation for Defence. Safety, here, includes physical safety and psychosocial safety.

 

Ethical and Legal Questions for the Future Regulation of Spectrum

Chris Hanna – a lawyer in the ACT, previously a Legal Officer with Defence Legal, and TAS Ethics Uplift consultant – presented on the ethical and legal questions facing the future regulation of spectrum in Australia.

Autonomous systems depend on electromagnetic spectrum access to ensure efficient, effective, ethical, and lawful communication between humans and machines and between teams. Systems that manage spectrum provide command support and sustained communications, cyber, and electromagnetic activities. Because spectrum is a fixed resource, increasing demand requires efficient, effective, ethical, and lawful management that is considerate of privacy and other rights to communicate.

As part of his work with the TAS Ethics Uplift Program, Chris will contribute to a TAS submission for the Department of Home Affairs’ Reform of Australia’s electronic surveillance framework discussion paper in 2022.

 

The Ethics of the Electromagnetic Spectrum in Military Contexts

Kathryn Brimblecombe-Fox – artist, PhD candidate at Curtin University, and Honorary Research Fellow at UQ – presented on her recent work on the military use of the electromagnetic spectrum (EMS).

Militaries around the world are increasingly interested in the EMS as an enabler of technology, a type of fires, a ‘manoeuvre’ space, and a domain. Kathryn suggests that the contemporary theatre of war is now staged beyond the geographies of land, air, and sea: in space, cyberspace, and the electromagnetic spectrum, all domains permeable to each other to allow interoperability and joint force operations.

The Best Robot is the One that Gets Destroyed