A Method for Ethical AI in Defence

Today the Australian Department of Defence released ‘A Method for Ethical AI in Defence’, an outcome of a 2019 workshop attended by over 100 representatives from Defence, other Australian government agencies, industry, academia, international organisations and media. The workshop was facilitated by the Defence Science & Technology Group, RAAF Plan Jericho and the Trusted Autonomous Systems Defence Cooperative Research Centre. Defence notes that the report outlines a pragmatic ethical methodology for communication between software engineers, integrators and operators during the development and operation of Artificial Intelligence (AI) projects in Defence.

Trusted Autonomous Systems CEO Professor Jason Scholz said, “Trusted Autonomous Systems are very pleased to partner with Defence on this critical issue of ethics in AI. Ethics is a fundamental consideration across the game-changing Projects that TAS are bringing together with Defence, Industry and Research Institutions.”

AI and human-machine teaming will be a key capability in the future of Australian Defence systems. Chief Defence Scientist Professor Tanya Monro notes: “…AI technologies offer many benefits such as saving lives by removing humans from high-threat environments and improving Australian advantage by providing more in-depth and faster situational awareness”.

Air Vice-Marshal Cath Roberts, Head of Air Force Capability, said: “artificial intelligence and human-machine teaming will play a pivotal role for air and space power into the future… We need to ensure that ethical, moral and legal issues are resolved at the same pace as the technology is developed. This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control”. These comments apply equally to the other service arms.

In 2019, the Trusted Autonomous Systems Defence CRC (TASDCRC) commenced a six-year Programme on the Ethics and Law of Trusted Autonomous Systems valued at $9M. Over the past two years the activity has conducted workshops, engagements and consultations with participants and stakeholders of the Centre, contributing to ADF strategy, producing diverse publications and influencing the design of trusted autonomous systems such as the game-changing Athena AI ethical and legal decision-support system.

From 2021, the Ethics Uplift Program (EUP) of the TASDCRC will offer ongoing assistance to Centre participants through consultation, advice and policy development, supported by case analysis, education and enculturation.

Trusted Autonomous Systems affiliate researchers and employees participate in a wide range of events on the ethics and law of RAS-AI, including those convened by the ICRC, UNIDIR, SIPRI and NATO.

TASDCRC is a non-government participant in the United Nations (UN) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), working to ensure that the development of autonomous systems accords with ethical principles and the law of armed conflict (LOAC), and that systems are subject to Article 36 weapons reviews.

The Defence Media Release reinforced that “The ethics of AI and autonomous systems is an ongoing priority and Defence is committed to developing, communicating, applying and evolving ethical AI frameworks”. Trusted Autonomous Systems is a partner to Defence on that journey. More details at https://www.dst.defence.gov.au/publication/method-ethical-ai-defence

Book release – Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare

New Oxford University Press volume released: Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare 

The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapon systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear, since the international community has yet to agree on a specific definition of Lethal Autonomous Weapon Systems and the great powers have largely refused to support an effective ban. Lethal Autonomous Weapons explores the moral and legal issues associated with the design, development and deployment of lethal autonomous weapons.

The book features chapters by current and former TAS collaborators including CEO Prof. Jason Scholz, Chief Scientist Kate Devitt, Prof Rain Liivoja, Dr Tim McFarland, Dr Jai Galliott and Dr Bianca Baggiarini. 

Available in hard and soft copy; more details at the publisher’s site:

https://global.oup.com/academic/product/lethal-autonomous-weapons-9780197546048?cc=au&lang=en& 

Also available in soft copy on a number of platforms.  

TASDCRC Activity on Ethics and Law of Trusted Autonomous Systems

Human-machine teaming with Robotic and Autonomous Systems and Artificial Intelligence (RAS-AI) will be a key capability in the future of Australian Defence systems. RAS-AI may increase safety for personnel by removing them from high-threat environments; increase the fidelity and speed of human awareness and decision-making; and reduce the cost and risk to manned platforms. This RAS-AI investment must be informed by ethical and legal considerations and constraints.

Figure 1. Athena AI, recognised by Engineers Australia as an engineering breakthrough, uses computer vision to identify protected objects, people and symbols, such as hospitals, in near real time for military operations at very high probabilities. It is a funded TASDCRC project led by Cyborg Dynamics Engineering and Skyborne Technologies.

In 2019, the Trusted Autonomous Systems Defence CRC (TASDCRC) commenced a six-year Programme on the Ethics and Law of Trusted Autonomous Systems valued at $9M. Over the past two years the activity has conducted workshops, engagements and consultations with participants and stakeholders of the Centre, contributing to ADF strategy, producing diverse publications and influencing the design of trusted autonomous systems such as the game-changing Athena AI ethical and legal decision-support system—see Figure 1.
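
The Athena AI implementation itself is not public. As a purely illustrative sketch of the kind of capability described in Figure 1 – flagging protected objects and symbols in imagery with a computer-vision detector – the Python snippet below runs a generic detection model over a single frame and keeps only high-confidence detections of protected classes. The checkpoint path, class list and confidence threshold are hypothetical assumptions for illustration, not details of Athena AI.

# Purely illustrative sketch -- not the Athena AI implementation.
# Assumes a detection model fine-tuned on protected-emblem classes
# (e.g. red cross/red crescent emblems, hospital markings); the class list,
# checkpoint path and confidence threshold are hypothetical.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

PROTECTED_CLASSES = {1: "red_cross_emblem", 2: "hospital_marking", 3: "protected_person"}

def load_detector(weights_path: str):
    # Generic Faster R-CNN detector; weights_path points to a hypothetical
    # checkpoint trained on the protected classes above (background is class 0).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, weights_backbone=None, num_classes=len(PROTECTED_CLASSES) + 1
    )
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

@torch.no_grad()
def flag_protected_objects(model, frame, threshold: float = 0.9):
    """Return high-confidence detections of protected objects in one video frame."""
    outputs = model([to_tensor(frame)])[0]
    return [
        {"label": PROTECTED_CLASSES[int(label)], "score": float(score), "box": box.tolist()}
        for label, score, box in zip(outputs["labels"], outputs["scores"], outputs["boxes"])
        if float(score) >= threshold and int(label) in PROTECTED_CLASSES
    ]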

Trusted Autonomous Systems affiliate researchers and employees participate in a wide range of events on the ethics and law of RAS-AI, including those convened by the ICRC, UNIDIR, SIPRI and NATO.

TASDCRC is a non-government participant in the United Nations (UN) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), working to ensure that the development of autonomous systems accords with ethical principles and the law of armed conflict (LOAC), and that systems are subject to Article 36 weapons reviews.

Law and the Future of War, University of Queensland

The University of Queensland Law and the Future of War Research Group continues to lead research to develop and promote a better understanding of international law that governs the use of trusted autonomous systems (TAS) by the Australian Defence Organisation. It further aims to contribute to the development of law, policy and doctrine to ensure that Australia’s reliance on trusted autonomous systems satisfies both humanitarian imperatives and national security interests and is consistent with Australia’s commitment to upholding international law.

Ethics Uplift Program

From February 2021, the TASDCRC Ethics Uplift Program (EUP) will offer immediate and ongoing assistance to Centre participants through consultation, advice and policy development, supported by case analysis, education and enculturation.

The objectives of the program are to:

  • Raise the level of knowledge, skills and application of ethics;
  • Build enduring ethical capacity in Australian industry and universities to service Australian RAS-AI;
  • Educate in how to build ethical and legal autonomous systems;
  • Achieve ethical RAS-AI for TASDCRC Projects; and
  • Support and contribute to the development of national policy.

The program will provide Australian industry with access to the best of Australian theoretical and pragmatic expertise, via universities and consultancies, grounded in Defence-suitable methodologies and frameworks. The continued investment by the TASDCRC with Defence and other participants is intended to accelerate and foster a sustainable capability for ethical and legal sovereign RAS-AI in Australia.

To express interest in providing services to the EUP, contact Chief Scientist Dr Kate Devitt via info@tasdcrc.com.au.

The Trusted Autonomous Systems submission to the Australian AI Action Plan

Trusted Autonomous Systems Directors of Autonomy Accreditation, (L-R) Mark Brady (Land), Rachel Horne (Maritime) and Tom Putland (Air) 


The Australian Government recognises that artificial intelligence (AI) will have enormous social and economic benefits for all Australians. The Department of Industry, Science, Energy and Resources (DISER) is consulting on the development of an Artificial Intelligence Action Plan to help maximise the benefits of AI for all Australians and manage the potential challenges. To that end DISER has developed a discussion paper and invited submissions to the AI Action Plan.

AI is the key underpinning of autonomous systems. The Trusted Autonomous Systems Defence CRC (TAS-DCRC) is uniquely placed to provide a submission, noting our depth of experience and leadership in this field, and our multi-disciplinary team including scientists, engineers, ethicists and philosophers, lawyers, and academics. We have a broad focus on many facets of Artificial Intelligence (AI), and our work covers ethical, assurance, technical, and practical perspectives.

Through our common good activities, specifically Activity 2: Assurance of Autonomy, we are actively working to provide a better understanding for Australians about the assurance and accreditation pathways for autonomous systems, in the maritime, land and air domains, and are working with the regulators and key stakeholders to improve those paths.

Our goal is to improve the innovation pipeline, making it faster to design, build, test, assure and certify autonomous systems, while maintaining warranted trust and safety. This work will unlock the many safety, environmental and efficiency benefits autonomous systems can bring, boost Australian jobs, and cement Queensland’s status as the ‘Smart Drone State’ of Australia.

We aim to attract more international participants to Australia to use our country’s world-class large-scale test ranges, and to generate business for the numerous AI entrepreneurs in Queensland and Australia more broadly.

Download the TAS-DCRC submission to the AI Action Plan here

CEO Jason Scholz awarded 2020 McNeil Prize

Trusted Autonomous Systems CEO, Professor Jason Scholz was awarded the 2020 McNeil Prize today by Chief of Navy, Vice Admiral Michael Noonan, AO in a virtual ceremony.

In 2016, the Australian Naval Institute (ANI) created an award to honour an individual or individuals from Australian defence industry who have made an outstanding contribution to the capabilities and sustainment of the Royal Australian Navy (RAN). This award was named the McNeil Prize in honour of Rear Admiral Percival McNeil CB RAN (1883-1951).

The contributions of Prof. Scholz to RAN capability are articulated in the ANI Media Release and the ceremony underscored the importance of this field of research and contribution to future capability. Read the ANI Media Release here.

Congratulations Jason!

Introducing Mark Brady, Director of Autonomy Accreditation – Land

Society is increasingly reliant on robotic and autonomous systems and artificial intelligence (RAS-AI). Along with changes to communication, mobility, and technology, RAS-AI in the land domain will bring many changes to the physical landscape and architecture of cities. Even within the physical limitations of current transport infrastructure, such as carriageway width and lane marking, there will be capacity to vastly increase vehicular throughput, with longer, narrower and faster vehicles sharing a given area of roadway without changing the size of the road. Vehicles may not even need windows except for viewing scenery, and traffic jams may be a historical memory in the fully integrated smart cities of the future. Private ownership of vehicles may eventually give way to mobility as a service, with houses no longer needing to waste space on a ‘bedroom for the car’ and autonomous vehicle service fleet companies becoming the new industrial giants of the 21st century. Trust will be a crucial factor in the development and sustainability of these systems.

With human-machine interaction there is always a point at which the human decision-maker can be called to account for their action or inaction. This point is not always as clear-cut for RAS-AI, and the ability of society to examine a decision after the fact is particularly relevant in situations where harm has occurred. Autonomous vehicles have the potential to make life-and-death decisions in place of human beings. Unlike traditional robotic systems, which were largely static, fixed to the ground and confined to a given operational design domain, autonomous land vehicles may be highly mobile, heavy, and capable of inflicting harm throughout their operation or deployment. As the potential for harm rises, so does the need to assure that both operation and failure occur in predictable ways, so that humans can make allowances for such behaviour.

Accordingly, the ability to accredit the operational domain or domains of autonomous land vehicles is necessary to foster and maintain trust in these systems. Establishing trust in RAS-AI requires these systems to be predictable, explainable and, ultimately, accreditable. Predictability is therefore the first step towards building trust, where an ability to understand the ultimate outcome of an open-ended RAS-AI decision-making process becomes vital. However, prediction may not always be possible, and in such cases explainability allows society to understand why the RAS-AI followed a particular behavioural pathway. It will also be necessary to accredit the operational capabilities of RAS-AI to foster and maintain trustworthiness. RAS-AI might be accredited within a specific operational domain, at a level of safe operation, or on a combination of other factors. As the body of knowledge surrounding trust in autonomous systems is only beginning to take shape, there is a significant need to clarify the parameters of trust in RAS-AI.

It is with this in mind that the TASDCRC now introduces our third Director of Autonomy Accreditation, Mark Brady. Mark is an expert in the regulation of autonomous land vehicles, with a focus on establishing a roadmap for assurance and accreditation of autonomous land-based technology. His research into regulation for disruptive technology used autonomous land vehicles as a case study, examining the regulatory impact these technologies have on law. Mark brings a wealth of experience as a researcher and academic at the University of Adelaide and as a solicitor working in Queensland. These skills will help Mark to foster cooperation between researchers, regulators and stakeholders, and to encourage confidence and investment in the development of automated land vehicle technology in Queensland and throughout Australia as it looks to become a world leader in many areas of autonomous technology.

Mark joins Rachel Horne (Maritime), and Tom Putland (Air) to develop a national body of knowledge including methods, policies, and practices to support accreditation. Directors address issues experienced by regulators, insurers, and autonomous technology developers by producing consistent (yet flexible) parameters for safe and trusted operations and improved agility to meet fast-changing technical and social licence needs. Autonomy Accreditation forms a significant part of the Centre’s Assurance of Autonomy Activity that aims to create a trusted environment for test, risk analysis and regulatory certification support of autonomous systems and establish an independent world-class assurance service to global industry based in Queensland.

L-R, Mark, Rachel & Tom

Introducing Tom Putland, Director of Autonomy Accreditation – Air

With the surging use of highly automated remotely piloted aircraft systems (RPAS) and the prospect of ubiquitous drone-based delivery from the likes of Wing, Matternet, Flirtey and others, the question of how to perform air traffic management for drones, preventing both unmanned-on-unmanned and unmanned-on-manned conflicts, is a complicated one.

It is clear that societal expectations differ between the safety of two large wide-body aircraft, each with hundreds of fare-paying passengers on board, colliding with one another and that of two small unmanned aircraft colliding with one another. Society may be willing to invest significant cost to ensure two commercial public transport aircraft do not collide; however, it would not be willing to expend the same resources to prevent two drones from colliding.

To complicate this further, there are likely to be orders of magnitude more drones than manned aircraft, operating in close proximity and undertaking a range of different operations that may require approval at a moment’s notice. Without the ability to rely upon a human eye on board to perform see-and-avoid functions, this problem lends itself to an autonomous, system-of-systems solution.

As the demand for such an Unmanned Aircraft System Traffic Management (UTM) system increases, the highly intertwined technical, legal and societal issues associated with a UTM need to be solved. The regulation and governance related to design, manufacture, certification and the continued operational safety of these autonomous systems requires a collaborative approach from society, regulators, academia and the aviation industry to ensure that trusted, safe, equitable and efficient UTM systems are developed for all parties.

It is with great pleasure that the Centre can announce the appointment of Tom Putland as Director of Autonomy Accreditation – Air, effective Monday 2 November.

Tom has worked at the Civil Aviation Safety Authority (CASA) for the past seven years, five of which were spent in the realm of RPAS focusing on RPAS airworthiness and overarching safety and risk management policy for CASA. Tom has also played a crucial role in the assessment and approval of complex RPAS operations.

Tom has been an Australian representative at the Joint Authorities for Rulemaking on Unmanned Systems (JARUS) for the last three years and has actively contributed to the development of the JARUS Specific Operations Risk Assessment (SORA), a globally recognised risk assessment tool for RPAS operations.

In these times of rapid technology development with respect to RPAS, UTM and automation, Tom is ideally placed to bridge the gap between regulators, the industry, society and academia to create a harmonised body of knowledge to facilitate faster, more efficient and safer certification of autonomous aircraft in Australia and around the world.

Tom becomes our second Director of Autonomy Accreditation, joining Rachel Horne (Maritime) to develop a national body of knowledge including methods, policies, and practices to support accreditation. Directors address issues experienced by regulators, insurers, and autonomous technology developers by producing consistent (yet flexible) parameters for safe and trusted operations and improved agility to meet fast-changing technical and social licence needs.

Autonomy Accreditation forms a significant part of the Centre’s Assurance of Autonomy Activity that aims to create a trusted environment for test, risk analysis and regulatory certification support of autonomous systems and establish an independent world-class assurance service to global industry based in Queensland.

Are Autonomous Weapons Systems Prohibited?

Authored by: Rain Liivoja, Eve Massingham, Tim McFarland and Simon McKenzie, University of Queensland

The incorporation of autonomous functions in weapon systems has generated a robust ethical and legal debate. In this short piece, we outline the international law framework that applies to the use of autonomous weapon systems (AWS) to deliver force in an armed conflict. We explain some of the reasons why using an AWS to deliver force is legally controversial and set out some of the ways in which international law constrains the use of autonomy in weapons. Importantly, we explain why users of AWS are legally required to be reasonably confident about how such systems will operate before deploying them. We draw on the work that the University of Queensland’s Law and the Future of War Research Group is doing to examine how the law of armed conflict (LOAC), and international law more generally, regulates the use of autonomous systems by militaries.

AWS are not prohibited as such

According to a widely used United States Department of Defense definition, an AWS is ‘a system that, once activated, can select and engage targets without further intervention by a human operator’. International law does not specifically prohibit such AWS. Some AWS might be captured by the ban on anti-personnel land mines or the limitation on the use of booby-traps, but there exists no comprehensive international law rule outlawing AWS as a class of weapons.

There are those who argue, for a variety of reasons, that an outright ban ought to be placed on AWS. We take no position in relation to that argument. We note, instead, that unless and until such time as a ban is put in place, the legality of AWS under international law depends on their compatibility with general LOAC rules, especially those that deal with weaponry and targeting.

AWS are not necessarily inherently unlawful under LOAC

Independently of any weapon-specific prohibitions (such as the ban on anti-personnel land mines), LOAC prohibits the use of three types of weapons:
• weapons of a nature to cause superfluous injury or unnecessary suffering (Additional Protocol I to the Geneva Conventions (API) article 35(2); Customary International Humanitarian Law (CIHL) Study rule 70);
• weapons of a nature to strike military objectives and civilians or civilian objects without distinction (API article 51(4); CIHL Study rule 71);
• weapons intended or expected to cause widespread, long-term and severe damage to the natural environment (API article 35(3); see, eg, CIHL Study rules 54 and 76).

Weapons falling into any of these categories are often described as being ‘inherently unlawful’. LOAC requires States to make an assessment about whether a new weapon would be inherently unlawful (API article 36). The relevant test scenario is the normal intended use of the weapon. This means that being able to envisage a far-fetched scenario in which superfluous injury or indiscriminate effects could be avoided does not make the weapon lawful. Conversely, the possibility of a weapon causing superfluous injury or indiscriminate effects under some exceptional and unintended circumstances does not make the weapon inherently unlawful.

A specific AWS, just like any other weapon, can fall foul of one of these three general prohibitions. Such an AWS would then be inherently unlawful, and its use would be prohibited. But it is impossible to say that all AWS would necessarily be prohibited by one of these three principles. In other words, we cannot conclude that all AWS are inherently unlawful.

We note, at this juncture, the Martens Clause, contained in many LOAC instruments, whereby in the absence of specific treaty rules ‘civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.’ There are considerable disagreements about the interpretation of this clause generally, as well as about its significance to weapons. Considering that the general rules concerning weapons (outlined above) have now been codified in the Additional Protocols, it is our view that the Martens Clause has limited relevance to weapons post-1977.

AWS must be used in compliance with LOAC

LOAC imposes obligations to select and deploy weapons so as to limit the effects of armed conflict. These legal obligations are held by individuals and States. This has been expressly acknowledged by the Group of Governmental Experts on Lethal Autonomous Weapons Systems, who have noted that LOAC obligations are not held by AWS. It is up to the individual(s) using an AWS, and the State that has equipped them with the system, to make sure that they are doing so consistently with their legal obligations.
There are three key LOAC principles with which the user of an AWS must comply:
• The principle of distinction requires belligerents to direct their attacks only against lawful military objectives (combatants, members of organised armed groups, other persons taking a direct part in hostilities, and military objects) and not to direct attacks against civilians or civilian objects (API articles 48, 51 and 52; CIHL Study rule 1).
• The principle of proportionality requires that attacks cause no harm to the civilian population (‘collateral damage’) that is foreseeably excessive in comparison to the military advantage anticipated from the attack (API article 57(2)(b), CIHL Study rule 14).
• Those who plan or decide upon attacks must take feasible precautions to verify the lawfulness of targets, to minimise collateral damage, and, where possible, to give advance warnings to affected civilians (API article 57(2); CIHL Study rule 15).

These are obligations which relate to attacks. Attacks comprise acts of violence carried out in both offensive and defensive capacities (API article 49(1)), which may or may not have lethal effect. Therefore, these three rules are applicable to all attacks – regardless of whether they are offensive or defensive and regardless of whether they are ultimately lethal.

There is also an overarching general duty to exercise “constant care” to spare the civilian population, civilians, and civilian objects in all military operations (API article 57(1), CIHL Study rule 15).

These principles seek to balance the often competing interests of military necessity and humanity. They are not easy to apply in practice. In particular, in relation to precautions in attack, there are legitimate questions about how precautions ought to be taken when a weapons system has autonomous capabilities. What is clear, however, is that the obligation to take precautions is an obligation of the user of the AWS, not an obligation of the AWS itself.

The lawfulness of AWS use depends on the expected consequences

Compliance with the three LOAC principles mentioned above is straightforward when simple weapons are used in the context of close combat. The principles of distinction and proportionality do not pose much difficulty when a combatant stabs an enemy with a bayonet or shoots them at a close range.
Warfare has, however, evolved so that many attacks are carried out by means of increasingly sophisticated technology. Thus, in many instances, compliance with distinction and proportionality entails using technology in the expectation that it will lead to outcomes normally associated with the intended use.

Under current law, combatants can rely on complex weapon systems, including AWS, if they can do so consistently with their LOAC obligations. This means a lot rides on just how confident the combatant is that the weapon will behave as expected in the circumstances of use. The potential complexity of AWS raises some thorny technical, ethical and legal questions.

LOAC has never been prescriptive about the level of confidence required, but it is clear that an individual need not be absolutely sure that using a particular weapon will definitely result in the desired outcome. A ‘reasonable military commander’ standard, articulated in the context of assessing proportionality of collateral damage, would arguably be applicable here. Consider a military commander who deploys a tried and tested artillery round against a verified military objective that is clearly separated from the civilian population. The commander will not be held responsible for an inadvertent misfire that killed civilians. It is also clear that there will be ‘room for argument in close cases’ (Final Report to the Prosecutor by the Committee Established to Review the NATO Bombing Campaign Against the Federal Republic of Yugoslavia, para 50). Indeed, a number of States have clarified that military decisions must only be held to a standard based on the information ‘reasonably available to the [decision-maker] at the relevant time’ (e.g. API, Reservation of the United Kingdom of Great Britain and Northern Ireland, 2 July 2002, (c)).

The situation is made more difficult by the challenge of predicting how an autonomous system will behave. Some AWS may be such that a user could never be confident enough about the outcome its use would achieve in the circumstances of the proposed attack. The use of such a weapon would therefore never be lawful. However, it is difficult to ascertain this without looking at a specific system. It seems clear to us that the lawful use of AWS under LOAC will require a comprehensive approach to testing and evaluating the operation of the system. In addition, the individuals using the weapons will have to be equipped with enough knowledge to understand when an autonomous system will perform consistently with the LOAC obligations of distinction, proportionality and precautions in attack. Put simply, the operators have to understand what will happen when they use the system.

This level of understanding is clearly possible. A number of weapons systems with a high level of autonomy are already in use (e.g. the Aegis Combat System used on warships) in environments where there is evidence-based confidence about how they operate. This confidence allows the individual responsible for their deployment to comply with their legal obligations.

Some autonomous systems will be able to be used in a variety of contexts. Where this is the case, the lawfulness of a decision to deploy the system in any one case will depend on an assessment of both the system’s capabilities and the environmental conditions.

The following example illustrates this point.

Scenario

A commander wishes to deploy a land-based AWS that is designed to identify and target vehicles that are armed. The system has been tested and approved by the State pursuant to its Article 36 legal review obligations. The system cannot clearly identify persons. The commander understands the system and its capabilities. The environment will dictate how it can be used – with what precautions, according to what parameters, on what setting.

• Environment: a designated geographic zone that is secured and patrolled such that civilians are not within the area. Possible implications for lawful use of the AWS: legally unproblematic and operationally valuable.

• Environment: an area with heavy military presence, where intelligence suggests farmers routinely carry small arms and light weapons for use against predatory animals. Possible implications for lawful use of the AWS: use could be facilitated by specific settings and parameters that take into account the environment and context in which the system is operating.

• Environment: a dense urban setting with high rates of civil unrest and crime. Possible implications for lawful use of the AWS: use would entail a significant degree of risk to civilians and would be legally problematic.
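
The same point can be put as a minimal, purely illustrative sketch: the Python snippet below simply encodes the three example environments above and the implications drawn from them, to show that an identical system attracts different lawfulness assessments depending on the deployment context. The class, field names and decision logic are hypothetical illustrations for this scenario only, not a legal or operational tool.

# Hypothetical illustration of the scenario above: the same AWS, assessed
# against different deployment environments, yields different implications.
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    civilians_present: bool
    civilian_weapon_carriage: bool  # e.g. farmers routinely carrying small arms

def deployment_implication(env: Environment) -> str:
    if not env.civilians_present:
        return "Legally unproblematic and operationally valuable."
    if env.civilian_weapon_carriage:
        return ("Could be facilitated by specific settings and parameters that "
                "account for the environment and context of operation.")
    return "Significant degree of risk to civilians; legally problematic."

for env in (
    Environment("secured, patrolled zone", civilians_present=False, civilian_weapon_carriage=False),
    Environment("heavy military presence, armed farmers", civilians_present=True, civilian_weapon_carriage=True),
    Environment("dense urban setting", civilians_present=True, civilian_weapon_carriage=False),
):
    print(f"{env.name}: {deployment_implication(env)}")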

The environment and circumstances are key

There is no hard and fast rule that an AWS can be lawfully used in a particular context. The user must evaluate the system to ensure that, in the circumstances in which they seek to deploy it, they will be complying with their LOAC obligations.

This evaluation will come from understanding the weapon and how it operates, understanding the environment in which they wish to deploy it, and understanding the task for which they wish to deploy it. It may be possible to lawfully balance the military advantage that comes from high levels of automation with the risks of unintended consequences in one environment, and yet not in another. For example, sea mines, by virtue of the environment of their use, are able to have a high level of automation and still comply with the LOAC principles outlined above. However, if that same level of automation were to be used in a similar weapon on land, this may result in a greater likelihood of violating the LOAC principles.

At its heart, delegation of the use of force to an autonomous system requires that those who decide to deploy it understand the consequences of its operation in the environment of use. This means that there will be no clear-cut answers, and the legality of a decision to deploy an AWS will depend on the context.

________________

This blog post is part of the ongoing research of the Law and the Future of War Research Group at the University of Queensland. The Group investigates the diverse ways in which law constrains or enables autonomous functions of military platforms, systems and weapons. The Group receives funding from the Australian Government through the Defence Cooperative Research Centre for Trusted Autonomous Systems and is part of the Ethics and Law of Trusted Autonomous Systems Activity.

The views and opinions expressed herein are those of the authors and do not necessarily reflect the views of the Australian Government or any other institution.

For further information contact Associate Professor Rain Liivoja, Dr Eve Massingham, Dr Tim McFarland or Dr Simon McKenzie.

Assuring Automation Smarter

Drones appear on most major industry assessments as one of the key disruptive technologies of the future. Around the world, countries are scrambling to lead these developments, with Queensland and Australia among the leading innovators and investors in drones, robotics and autonomous systems. Applications for drones exist in industries such as offshore energy, defence and science for “dull, dirty, dangerous and distant” operations. Independent industry research estimates that the global drone market will grow to $100 billion by the end of 2020.

But there is a problem.

The problem is that current assurance and certification processes are not well suited to unmanned systems and can take many months or even years. Coupled with low sales volumes, uncertainty and delays can significantly stifle growth and adoption. The accreditation of robotics and autonomous systems is done on a case-by-case basis, often by exception rather than through a streamlined process.

To address this challenge, Biarri, Queensland University of Technology (QUT) and the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC), with financial support from Advance Queensland (the TASDCRC and this project receive funding support from the Queensland Government) and domain input from QinetiQ and Australian regulatory authorities, are designing software tools to accelerate and simplify the assurance and certification process for AI-enabled and unmanned systems. These types of software tools have recently revolutionised many other industries, so the timing is right to accelerate innovation in the regulation industry via digital transformation.

By enabling faster and more efficient assurance and accreditation processes, our platform will facilitate innovation in a way which will catalyse the next wave of autonomous systems to solve large scale global challenges.

Why Now?

AI and autonomy are introducing new challenges for regulators, insurers, technology developers and system operators – with current methods, policies and practices lagging behind technical advances. Regulatory frameworks in Australia for domains like air and sea represent a mostly static, top-down regulatory compliance model of a public authority regulating under a legislative scheme.

As companies trying to certify their new unmanned autonomous systems face uncertainty and lengthy delays, it is clear that we require new, sustainable and effective approaches to assuring land-, maritime- and air-based pilotless vehicles, vessels and aircraft. Owners and operators need to be able to update software and hardware to meet changing operational and regulatory requirements on the fly, without having to resubmit their systems for regulatory reapproval when they do.

The tools that will address this are part of the #RegTech digital transformation, which is based on changing regulation processes via third-generation computational regulatory regimes using digital technologies within digital environments. Such systems already exist in the finance industry, e.g. regulatory sandboxes established for fintech innovators (Allen, 2019; Piri, 2019).

A flexible regulatory environment similar to what has been done in fintech, together with the supporting tools, will enable faster assurance and certification of aircraft, vessels and vehicles, driving greater innovation without compromising safety. Although potentially game-changing, building digital and physical sandboxes to work within traditional state-centred regulation is difficult, and so it requires specialised skills – skills which Biarri and QUT, in combination with their project partners, will bring to this endeavour.

What is the goal?

This project seeks to address the above challenges by bringing together world-leading expertise from a number of partners, such as Biarri and QUT’s Faculty of Law, as well as experts in the relevant regulatory environments via the TASDCRC. The goal is to create an integrated system of agile regulatory methods and AI audit tools in order to simplify and streamline the interaction between the regulator and the regulated. These tools will dramatically shorten the time it currently takes to assure and accredit unmanned systems, without compromising trust or safety, thereby providing a high level of value for businesses that need to quickly prototype unmanned systems that incorporate AI as a core capability.

This two-sided digital platform will also enable the operational parameters and software of unmanned autonomous systems to be rapidly adapted in response to changes in regulatory, legal, ethical and societal needs. The developed software will accelerate innovation and positive commercial outcomes by lowering the barrier to deployment for companies developing unmanned autonomous systems – without compromising trust and safety.


Anticipated Outcomes and Benefits

We expect there to be a number of benefits from the new digital platform. They include:
● World-leading capability to deliver faster product development lifecycles for companies to safely test AI-enabled unmanned autonomous systems.
● A world-first two-sided digital tool that integrates regulators with the regulated to increase the speed for organisations to reach a safe and trusted deployable state.
● A digital toolkit to allow the auditing and tracking of autonomous and AI based developments and simplification of reporting requirements for companies.

With these benefits, Australia has the potential to lead the way in defining and benefitting from the billions of dollars of investment and commercial activity around autonomous systems.