Introducing Mark Brady, Director of Autonomy Accreditation – Land

Society is increasingly reliant on robotics, autonomous systems and artificial intelligence (RAS-AI). Along with changes to communication, mobility, and technology, RAS-AI in the land domain will bring many changes to the physical landscape and architecture of cities. Despite the physical limitations of current transport infrastructure, such as carriageway width and lane markings, there will be capacity to vastly increase vehicular throughput, with many longer, narrower and faster vehicles sharing a given area of roadway without any change to the size of the road. Vehicles may not even need windows except for viewing scenery, and traffic jams may become a historical memory in the fully integrated smart cities of the future. Private ownership of vehicles may eventually give way to mobility as a service, with houses no longer needing to waste space on a ‘bedroom for the car’ and autonomous vehicle service fleet companies becoming the new industrial giants of the 21st Century. Trust will be a crucial factor in the development and sustainability of these systems.

In human-machine interaction there is always a point at which the human decision-maker can be called to account for their action or inaction. This point is not always as clear-cut for RAS-AI, and the ability of society to examine a decision after the fact is particularly relevant in situations where harm has occurred. Autonomous vehicles have the potential to make life-and-death decisions in place of human beings. Unlike traditional robotic systems, which were essentially static, fixed to the ground within a given operational design domain, autonomous land vehicles may be highly mobile, heavy, and capable of inflicting harm throughout their operation or deployment. As the potential for harm rises, so does the need to assure that their operation, and their failure, occur in predictable ways so that humans may make allowances for such behaviour.

Accordingly, the ability to accredit the operational domain or domains of autonomous land vehicles is necessary to foster and maintain trust in these systems. Establishing trust in RAS-AI requires these systems to be predictable, explainable, and ultimately, accreditable. Predictability is therefore the first step towards building trust: the ability to understand the ultimate outcome of an open-ended RAS-AI decision-making process is vital. However, prediction may not always be possible, and in such cases explainability allows society to understand why the RAS-AI followed a particular behavioural pathway. It will also be necessary to accredit the operational capabilities of RAS-AI to foster and maintain trustworthiness. RAS-AI might be accredited within a specific operational domain, to a level of safe operation, or against a combination of other factors. As the body of knowledge surrounding trust in autonomous systems is still emerging, there is now a significant need to clarify the parameters of trust in RAS-AI.

It is with this in mind that the TASDCRC now introduces our third Director of Autonomy Accreditation, Mark Brady. Mark is an expert in the regulation of autonomous land vehicles, with a focus on establishing a roadmap for assurance and accreditation of autonomous land-based technology. Mark’s research into the regulation of disruptive technology used autonomous land vehicles as a case study to examine the regulatory impact these technologies have on the law. Mark brings a wealth of experience as a researcher and academic at the University of Adelaide and as a solicitor working in Queensland. These skills will help Mark to foster cooperation between researchers, regulators and stakeholders to encourage confidence and investment in the development of automated land vehicle technology in Queensland and throughout Australia as the nation looks to become a world leader in many areas of autonomous technology.

Mark joins Rachel Horne (Maritime) and Tom Putland (Air) to develop a national body of knowledge including methods, policies, and practices to support accreditation. Directors address issues experienced by regulators, insurers, and autonomous technology developers by producing consistent (yet flexible) parameters for safe and trusted operations and improved agility to meet fast-changing technical and social licence needs. Autonomy Accreditation forms a significant part of the Centre’s Assurance of Autonomy Activity that aims to create a trusted environment for test, risk analysis and regulatory certification support of autonomous systems and establish an independent world-class assurance service to global industry based in Queensland.

L-R, Mark, Rachel & Tom

Introducing Tom Putland, Director of Autonomy Accreditation – Air

With the surging use of highly automated remotely piloted aircraft systems (RPAS) and the prospect of ubiquitous drone-based delivery from the likes of Wing, Matternet, Flirtey and others, the question of how to perform air traffic management for drones, to prevent both unmanned-on-unmanned and unmanned-on-manned conflicts, is a complicated one.

It’s clear that societal expectations differ for the safety of two large wide-body aircraft with hundreds of fare-paying passengers onboard colliding with one another compared to two small unmanned aircraft colliding with one another. Society may be willing to invest significant cost to ensure two commercial public transport aircraft do not collide; however, society would not be willing to expend the same resources to prevent two drones from colliding.

To complicate this further, there are likely to be orders of magnitude more drones than manned aircraft, operating in close proximity and undertaking a range of different operations that may require approval at a moment’s notice. Without the ability to rely upon the human eye onboard to undertake see-and-avoid functions, this problem lends itself to an autonomous, system-of-systems solution.

As the demand for such an Unmanned Aircraft System Traffic Management (UTM) system increases, the highly intertwined technical, legal and societal issues associated with a UTM need to be solved. The regulation and governance related to design, manufacture, certification and the continued operational safety of these autonomous systems requires a collaborative approach from society, regulators, academia and the aviation industry to ensure that trusted, safe, equitable and efficient UTM systems are developed for all parties.

It is with great pleasure that the Centre can announce the appointment of Tom Putland as Director of Autonomy Accreditation – Air, effective Monday 2 November.

Tom has worked at the Civil Aviation Safety Authority (CASA) for the past seven years, five of which were spent in the realm of RPAS, focusing on airworthiness and overarching safety and risk management policy for CASA. Tom has also played a crucial role in the assessment and approval of complex RPAS operations.

Tom has been an Australian representative at the Joint Authorities for Rulemaking on Unmanned Systems (JARUS) for the last three years and has actively contributed to the development of the JARUS Specific Operations Risk Assessment (SORA), a globally recognised risk assessment tool for RPAS operations.

In these times of rapid technology development with respect to RPAS, UTM and automation, Tom is ideally placed to bridge the gap between regulators, the industry, society and academia to create a harmonised body of knowledge to facilitate faster, more efficient and safer certification of autonomous aircraft in Australia and around the world.

Tom becomes our second Director of Autonomy Accreditation, joining Rachel Horne (Maritime), to develop a national body of knowledge including methods, policies, and practices to support accreditation. Directors address issues experienced by regulators, insurers, and autonomous technology developers by producing consistent (yet flexible) parameters for safe and trusted operations and improved agility to meet fast-changing technical and social licence needs.

Autonomy Accreditation forms a significant part of the Centre’s Assurance of Autonomy Activity that aims to create a trusted environment for test, risk analysis and regulatory certification support of autonomous systems and establish an independent world-class assurance service to global industry based in Queensland.

Are Autonomous Weapons Systems Prohibited?

Authored by: Rain Liivoja, Eve Massingham, Tim McFarland and Simon McKenzie, University of Queensland

The incorporation of autonomous functions in weapon systems has generated a robust ethical and legal debate. In this short piece, we outline the international law framework that applies to the use of autonomous weapon systems (AWS) to deliver force in an armed conflict. We explain some of the reasons why using an AWS to deliver force is legally controversial and set out some of the ways in which international law constrains the use of autonomy in weapons. Importantly, we explain why users of AWS are legally required to be reasonably confident about how they will operate before deploying them. We draw on the work that the University of Queensland’s Law and the Future of War Research Group is doing to examine how the law of armed conflict (LOAC), and international law more generally, regulates the use of autonomous systems by militaries.

AWS are not prohibited as such

According to a widely used United States Department of Defense definition, an AWS is ‘a system that, once activated, can select and engage targets without further intervention by a human operator’. International law does not specifically prohibit such AWS. Some AWS might be captured by the ban on anti-personnel land mines or the limitation on the use of booby-traps, but there exists no comprehensive international law rule outlawing AWS as a class of weapons.

There are those who argue, for a variety of reasons, that an outright ban ought to be placed on AWS. We take no position in relation to that argument. We note, instead, that unless and until such time as a ban is put in place, the legality of AWS under international law depends on their compatibility with general LOAC rules, especially those that deal with weaponry and targeting.

AWS are not necessarily inherently unlawful under LOAC

Independently of any weapon-specific prohibitions (such as the ban on anti-personnel land mines), LOAC prohibits the use of three types of weapons:
• weapons of a nature to cause superfluous injury or unnecessary suffering (Additional Protocol I to the Geneva Conventions (API) article 35(2); Customary International Humanitarian Law (CIHL) Study rule 70);
• weapons of a nature to strike military objectives and civilians or civilian objects without distinction (API article 51(4); CIHL Study rule 71);
• weapons intended or expected to cause widespread, long-term and severe damage to the natural environment (API article 35(3); see, eg, CIHL Study rules 54 and 76).

Weapons falling into any of these categories are often described as being ‘inherently unlawful’. LOAC requires States to make an assessment about whether a new weapon would be inherently unlawful (API article 36). The relevant test scenario is the normal intended use of the weapon. This means that being able to envisage a far-fetched scenario in which superfluous injury or indiscriminate effects could be avoided does not make the weapon lawful. Conversely, the possibility of a weapon causing superfluous injury or indiscriminate effects under some exceptional and unintended circumstances does not make the weapon inherently unlawful.
A specific AWS, just like any other weapon, can fall foul of one of these three general prohibitions. Such an AWS would then be inherently unlawful, and its use would be prohibited. But it is impossible to say that all AWS would necessarily be prohibited by one of these three principles. In other words, we cannot conclude that all AWS are inherently unlawful.
We note, at this juncture, the Martens Clause, contained in many LOAC instruments, whereby in the absence of specific treaty rules ‘civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.’ There are considerable disagreements about the interpretation of this clause generally, as well as its significance to weapons. Considering that the general rules concerning weapons (outlined above) have now been codified in the Additional Protocols, it is our view that the Martens Clause has limited relevance to weapons post-1977.

AWS must be used in compliance with LOAC

LOAC imposes obligations to select and deploy weapons so as to limit the effects of armed conflict. These legal obligations are held by individuals and States. This has been expressly acknowledged by the Group of Governmental Experts on Lethal Autonomous Weapons Systems, which has noted that LOAC obligations are not held by AWS. It is up to the individual(s) using an AWS, and the State that has equipped them with the system, to make sure that they are doing so consistently with their legal obligations.
There are three key LOAC principles with which the user of an AWS must comply:
• The principle of distinction requires belligerents to direct their attacks only against lawful military objectives (combatants, members of organised armed groups, other persons taking a direct part in hostilities, and military objects) and not to direct attacks against civilians or civilian objects (API articles 48, 51 and 52; CIHL Study rule 1).
• The principle of proportionality requires that attacks cause no harm to the civilian population (‘collateral damage’) that would be foreseeably excessive in comparison to the military advantage anticipated from the attack (API article 57(2)(b), CIHL Study rule 14).
• Those who plan or decide upon attacks must take feasible precautions to verify the lawfulness of targets, to minimise collateral damage, and, where possible, to give advance warnings to affected civilians (API article 57(2); CIHL Study rule 15).

These are obligations which relate to attacks. Attacks comprise acts of violence deployed in both offensive and defensive capacities (API article 49(1)), which may or may not have lethal effect. Therefore, these three rules are applicable to all attacks – regardless of whether they are offensive or defensive and regardless of whether they are ultimately lethal.

There is also an overarching general duty to exercise “constant care” to spare the civilian population, civilians, and civilian objects in all military operations (API article 57(1), CIHL Study rule 15).

These principles seek to balance the often competing interests of military necessity and humanity. They are not easy to apply in practice. In particular, in relation to precautions in attack, there are legitimate questions about how precautions ought to be taken when a weapons system has autonomous capabilities. What is clear, however, is that the obligation to take precautions is an obligation of the user of the AWS, not an obligation of the AWS itself.

The lawfulness of AWS use depends on the expected consequences

Compliance with the three LOAC principles mentioned above is straightforward when simple weapons are used in the context of close combat. The principles of distinction and proportionality do not pose much difficulty when a combatant stabs an enemy with a bayonet or shoots them at close range.
Warfare has, however, evolved so that many attacks are carried out by means of increasingly sophisticated technology. Thus, in many instances, compliance with distinction and proportionality entails using technology in the expectation that it will lead to outcomes normally associated with the intended use.

Under current law, combatants can rely on complex weapon systems, including AWS, if they can do so consistently with their LOAC obligations. This means a lot rides on just how confident the combatant is that the weapon will behave as expected in the circumstances of use. The potential complexity of AWS raises some thorny technical, ethical and legal questions.

LOAC has never been prescriptive about the level of confidence required, but it is clear that an individual need not be absolutely sure that using a particular weapon will definitely result in the desired outcome. A ‘reasonable military commander’ standard, articulated in the context of assessing proportionality of collateral damage, would arguably be applicable here. Consider a military commander who deploys a tried and tested artillery round against a verified military objective that is clearly separated from the civilian population. The commander will not be held responsible for an inadvertent misfire that killed civilians. It is also clear that there will be ‘room for argument in close cases’ (Final Report to the Prosecutor by the Committee Established to Review the NATO Bombing Campaign Against the Federal Republic of Yugoslavia, para 50). Indeed, a number of States have clarified that military decisions must only be held to a standard based on the information ‘reasonably available to the [decision-maker] at the relevant time’ (eg, API, Reservation of the United Kingdom of Great Britain and Northern Ireland, 2 July 2002, (c)).

The situation is made more difficult by the challenge of predicting how an autonomous system will behave. Some AWS may be such that a user will never be able to be confident enough about the outcome that its use would achieve in the circumstances of the proposed attack. The use of such a weapon would therefore never be lawful. However, it is difficult to ascertain this without looking at a specific system. It seems clear to us that the lawful use of AWS under LOAC will require a comprehensive approach to testing and evaluating the operation of the system. In addition, the individuals using the weapons will have to be equipped with enough knowledge to understand when the use of an autonomous system will perform consistently with the LOAC obligations of distinction, proportionality and precautions in attack. Put simply, the operator or operators have to understand what will happen when they use the system.

This level of understanding is clearly possible. A number of weapons systems with a high level of autonomy are already in use (e.g. the Aegis Combat System used on warships) in environments where there is evidence-based confidence about how they operate. This confidence allows the individual responsible for their deployment to comply with their legal obligations.

Some autonomous systems will be able to be used in a variety of contexts. Where this is the case, the lawfulness of a decision to deploy the system in any one case will depend on an assessment of both the system’s capabilities and the environmental conditions.

The following example illustrates this point.

Scenario

A commander wishes to deploy a land-based AWS that is designed to identify and target vehicles that are armed. The system has been tested and approved by the State pursuant to its Article 36 legal review obligations. The system cannot clearly identify persons. The commander understands the system and its capabilities. The environment will dictate how it can be used – with what precautions, according to what parameters, on what setting.

Environment – Designated geographic zone that is secured and patrolled such that civilians are not within the area. Possible implications for lawful use of the AWS – legally unproblematic and operationally valuable.

Environment – Area with heavy military presence. Intelligence suggests farmers in the area routinely carry small arms and light weapons for use against predatory animals. Possible implications for lawful use of the AWS – lawful use could be facilitated by specific settings and parameters that take into account the environment and context in which the system is operating.

Environment – Dense urban setting with high rates of civil unrest and crime. Possible implications for lawful use of the AWS – would entail a significant degree of risk to civilians and would be legally problematic.

The environment and circumstances are key

There is no hard and fast rule that an AWS can be lawfully used in a particular context. The user must evaluate the system to ensure that, in the circumstances in which they seek to deploy it, they will be complying with their LOAC obligations.

This evaluation will come from understanding the weapon and how it operates, understanding the environment in which they wish to deploy it, and understanding the task for which they wish to deploy it. It may be possible to lawfully balance the military advantage that comes from high levels of automation with the risks of unintended consequences in one environment, and yet not in another environment. For example, sea mines, by virtue of the environment of their use, are able to have a high level of automation and still comply with the LOAC principles outlined above. However, if that same level of automation were used in a similar weapon on land, this may result in a greater likelihood of a violation of the LOAC principles.

At its heart, delegation of the use of force to an autonomous system requires that those who decide to deploy it understand the consequences of its operation in the environment of use. This means that there will be no clear-cut answers, and the legality of a decision to deploy an AWS will depend on the context.

________________

This blog post is part of the ongoing research of the Law and the Future of War Research Group at the University of Queensland. The Group investigates the diverse ways in which law constrains or enables autonomous functions of military platforms, systems and weapons. The Group receives funding from the Australian Government through the Defence Cooperative Research Centre for Trusted Autonomous Systems and is part of the Ethics and Law of Trusted Autonomous Systems Activity.

The views and opinions expressed herein are those of the authors, and do not necessarily reflect the views of the Australian Government or any other institution.

For further information contact Associate Professor Rain Liivoja, Dr Eve Massingham, Dr Tim McFarland or Dr Simon McKenzie.

Assuring Automation Smarter

In identifying the key disruptive technologies of the future, drones appear on most major industry assessments as a top disrupter. Around the world, countries are scrambling to lead these developments, with Queensland and Australia among the leading innovators and investors in drones, robotics and autonomous systems. Applications for drones exist in industries such as offshore energy, defence and science for “dull, dirty, dangerous and distant” operations. Independent industry research estimates that the global drone market will grow to $100 billion by the end of 2020.

But there is a problem.

The problem is that current assurance and certification processes are not well suited to unmanned systems and can take many months or even years. Coupled with low sales volumes, uncertainty and delays can significantly stifle growth and adoption. The accreditation of robotics and autonomous systems is done on a case-by-case basis, often by exception rather than through a streamlined process.

To address this challenge, Biarri, Queensland University of Technology (QUT) and the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC), with financial support from Advance Queensland (the TASDCRC and this project receive funding support from the Queensland Government) and domain input from QinetiQ and Australian regulatory authorities, are designing software tools to accelerate and simplify the assurance and certification process for AI-enabled and unmanned systems. These types of software tools have recently revolutionised many other industries, so the timing is right to accelerate innovation in regulation via digital transformation.

By enabling faster and more efficient assurance and accreditation processes, our platform will facilitate innovation in a way that will catalyse the next wave of autonomous systems to solve large-scale global challenges.

Why Now?

AI and autonomy are introducing new challenges for regulators, insurers, technology developers and system operators – with current methods, policies and practices lagging behind technical advances. Regulatory frameworks in Australia for domains like air and sea represent a mostly static, top-down regulatory compliance model of a public authority regulating under a legislative scheme.

As companies facing uncertainty and lengthy delays try to certify their new unmanned autonomous systems, it is clear that we require new approaches to assuring land, maritime and air-based pilotless vehicles, vessels and aircraft that are sustainable and effective. Owners and operators need to be able to update software and hardware to meet changing operational and regulatory requirements on the fly, and not have to resubmit their systems for regulatory reapproval when they do.

The tools that will address this are part of the #RegTech digital transformation, which is based on changing regulatory processes via third-generation computational regulatory regimes using digital technologies within digital environments. Such systems already exist in the finance industry, e.g. regulatory sandboxes established for fintech innovators (Allen, 2019; Piri, 2019).

A flexible regulatory environment similar to what has been done in fintech, together with the supporting tools, will enable faster assurance and certification of aircraft, vessels and vehicles, driving greater innovation without compromising safety. Although potentially game-changing, building digital and physical sandboxes to work within traditional state-centred regulation is difficult, and so it requires specialised skills – skills which Biarri and QUT, in combination with their project partners, will bring to this endeavour.

What is the goal?

This project seeks to address the above challenges by bringing together world-leading expertise from a number of partners, such as Biarri and QUT’s Faculty of Law, as well as experts in the relevant regulatory environments via the TASDCRC. The goal is to create an integrated system of agile regulatory methods and AI audit tools in order to simplify and streamline the interaction between the regulator and the regulated. These tools will dramatically shorten the time it currently takes to assure and accredit unmanned systems, without compromising trust or safety, providing a high level of value for businesses that need to quickly prototype unmanned systems incorporating AI as a core capability.

This two-sided digital platform will also enable the operational parameters and software of unmanned autonomous systems to be rapidly adapted in response to changes in regulatory, legal, ethical and societal needs. The developed software will accelerate innovation and positive commercial outcomes by lowering the barrier to deployment for companies developing unmanned autonomous systems – without compromising trust and safety.


Anticipated Outcomes and Benefits

We expect there to be a number of benefits from the new digital platform. They include:
● World leading capability to deliver faster product development lifecycles for companies to safely test AI enabled unmanned autonomous systems.
● A world-first two-sided digital tool that integrates regulators with the regulated to increase the speed for organisations to reach a safe and trusted deployable state.
● A digital toolkit to allow the auditing and tracking of autonomous and AI based developments and simplification of reporting requirements for companies.

With these benefits Australia has the potential to lead the way in defining and benefitting from the billions of dollars of investments and commercial activities around autonomous systems.

MCM in a Day – Ministerial Release – Revolutionising future mine countermeasure technology

Joint media release, Minister for Defence, Senator the Hon Linda Reynolds CSC & Minister for Defence Industry, Senator the Hon Melissa Price

New autonomous technologies will revolutionise mine clearance capability in operations close to shore through a new five-year, $15 million research and development project. The project is part of a new partnership between Defence, Australia’s Trusted Autonomous Systems Defence Cooperative Research Centre (TAS DCRC) and Thales Australia.

https://www.minister.defence.gov.au/minister/melissa-price/media-releases/revolutionising-future-mine-countermeasure-technology

Introducing Rachel Horne, Director of Autonomy Accreditation – Maritime

As investment in AI, robotics and autonomous systems ramps up, it is vital that these systems can be tested and assured to meet society’s expectations for their safety, trust and reliability.

However, there are no established standards, systems or formal comprehensive codes of practice to accredit unpiloted, unmanned autonomous systems. Operations are generally permissible only under highly conservative constraints, authorised as an exception or waiver by CASA, AMSA, DASA, etc.

The lack of accreditation standards or codes of practice represents lost opportunity for innovators. For example, according to a US report (Jenkins & Vasigh, 2016), ‘every year that integration [of drones] is delayed, the United States loses more than $10 billion in potential economic impact. This translates to a loss of $27.6 million per day that UAS are not integrated into the NAS [national air space]’.

The reason why there are so few standards, best practices or test and evaluation protocols for autonomous systems is a chicken-and-egg problem. The impetus to establish and maintain technical and social licence for autonomous systems is driven by end user demand. But end user demand is dependent on trusting technical standards and social licence to operate. Breaking out of this chicken-and-egg problem to advance the autonomous systems pipeline requires collaborative investment—beyond the reach of individual autonomous systems developers and business users.

The development of best practice policy, appropriate standards, and a strong accreditation culture has the potential to enhance innovation and market growth for drones with autonomous abilities.

Queensland is leading the way by investing in Australia’s capacity for translating autonomous systems innovation into deployments through the Trusted Autonomous Systems Defence CRC. To this end Directors of Autonomy Accreditation in Air, Land and Maritime domains will develop a national body of knowledge including methods, policies, and practices to support accreditation. Directors will address issues experienced by regulators, insurers, and technology developers by producing consistent (yet flexible) parameters for safe and trusted operations and improved agility to meet fast-changing technical and social licence needs.

It is with great pleasure that the Centre can announce the appointment of Rachel Horne as Director of Autonomy Accreditation – Maritime, effective Monday 17 August.

Rachel brings a wealth of experience and expertise in Australia’s maritime regulatory framework. She joins us from the Australian Maritime Safety Authority, where she has spent the last eight years in legal, regulatory and policy teams providing advice and managing projects aimed at improving the domestic regulatory framework.

Rachel is a subject matter expert on the domestic regulation of remotely operated and autonomous vessels and is invested in working to mature the assurance and accreditation frameworks available to better facilitate the safe and efficient uptake of autonomous technologies. The benefits this technology will bring from a safety, environmental, and efficiency perspective make this an important and timely undertaking. More on Rachel on LinkedIn.

The Centre is currently advertising for Directors in Air and Land domains, more information here.

Reference:

Darryl Jenkins & Bijan Vasigh, ‘Commercial Drone Use Will Benefit the US Economy’, in Drones, ed. Tamara Thompson (Greenhaven Press, January 2016), p. 150.

New Centre Research Leadership Structure – 1 July 2020

In response to the external review led by Prof Ian Chubb in February 2020, and in light of new growth in projects and funding, the Trusted Autonomous Systems Defence Cooperative Research Centre is realigning key roles as the Centre expands.

From 1 July 2020, Simon Ng will be Chief Engineer (CE) and Kate Devitt will be Chief Scientist (CS).

Research leadership will continue with the triumvirate of Jason Scholz, Kate Devitt and Simon Ng—each bringing their unique and complementary backgrounds and experience to accelerate sovereign capability in trusted autonomous systems for the Australian Defence Force (ADF).

Chief Engineer – Associate Professor Simon Ng 

Simon graduated from Monash University in 1998 with a BSc, BEng and PhD (Eng). He began his career as a Post-Doctoral Fellow at CSIRO before joining Defence Science & Technology (DST) in 2001, where he developed multi-sided, non-zero-sum techniques for strategic experimentation. In 2004, Simon moved to DST in Melbourne, specialising in applying systems engineering and systems thinking methodology to a broad range of programs in support of Defence surveillance and response capabilities, Defence space operations and the development of autonomous aerial systems. He was formerly DST’s Group Leader Joint Systems Analysis and Aerial Autonomous Systems, and Associate Director of the Defence Science Institute. He remains a DST scientist one day a week, fulfilling the role of Australia’s National Lead on The Technical Cooperation Program Technical Panel AER TP-12, UAS Integration into the Battlespace. Simon is a Graduate of the Australian Institute of Company Directors. He brings this wealth of experience to the role of Chief Engineer at Trusted Autonomous Systems.

Chief Scientist – Dr Kate Devitt 

Kate Devitt graduated from Melbourne University with a BA (Hons) in history and philosophy of science and psychology. After working with Accenture on the CAMM2 Project for Defence, she started her PhD at Rutgers, The State University of New Jersey. Kate has used her expertise in cognitive science, epistemology and ethics to lead transdisciplinary teams building decision support tools for industry, and has co-founded a startup (mentored through MIT REAP). In 2018 she joined DST as a social and ethical robotics researcher and maintains a permanent part-time position with DST two days a week alongside her role as Chief Scientist of the TASDCRC. Kate is currently assigned to the ADF COVID-19 Task Force, providing specialised advice regarding social and ethical aspects of data, technology and AI systems that may be considered, developed and employed as part of the Operation. She is Australia’s representative for the TTCP AI Strategic Challenge and is contributing to NATO and UN discussions regarding frameworks for human control of robotics and autonomous systems. Kate is leading the ‘Trust and Safety’ chapter for Australia’s Robotics Roadmap (V.2). She is co-editor of Good Data (2019), which offers realistic methods for using data to progress a fair and just digital economy and society. She is also a research fellow with the Co-Innovation Research Group at the University of Queensland, an interdisciplinary research group crossing conventional boundaries and comprising social robotics, interaction design, software engineering and human-computer interaction. Kate is passionate about how autonomous systems should be designed within the larger sociotechnical systems in which they are built and deployed, particularly the ethical, legal, and regulatory structures needed to achieve social licence to operate and trusted adoption.

CEO – Professor Jason Scholz 

Jason is the Chief Executive Officer at Trusted Autonomous Systems and also contributes research leadership in the Decision Sciences. His formal background is in Electrical and Electronic Engineering (Bachelor’s degree and PhD), and he has over 30 years’ experience in AI and decision-making covering all areas of Command, Control, Communications and Intelligence (C3I).

Jason was an Exchange Scientist with the US Air Force Research Lab in New York State. He was research leader for the Vital Planning and Analysis (VIPA) system, which saved Defence well over $120m and is in operational use. Prior to commencing with Trusted Autonomous Systems, he was responsible to the Chief Defence Scientist for the Strategic Research Initiative in trusted autonomous systems, and to the Five Eyes community as chair of the Autonomy Strategic Challenge, which culminated in 2018 in Autonomous Warrior 18, the largest trial of autonomous systems in air, land and sea environments ever conducted. Jason is a graduate of the Australian Institute of Company Directors and a tenured Innovation Professor with RMIT University. Jason is passionate about building sovereign capability for the Nation, and ensuring the high-impact research, development and innovation of the Centre transitions to operational use by the ADF.

TASDCRC – Advance Queensland Research Fellows

The Advance Queensland Trusted Autonomous Systems Defence CRC Fellowship program provides three-year grants to the best researchers in Queensland, Australia, allowing them to pursue innovative and transformational research projects aimed at tackling open problems in AI and autonomy. The Centre’s grants, and the opportunity the Centre offers for its Fellows to engage directly with critical stakeholders in Defence and industry, maximise the chance of translating the work into practice.

Trusted Autonomous Systems is pleased to announce the appointment of three new Research Fellows who, individually and collectively, personify the strength of Queensland’s publicly funded innovation sector. We are thrilled to have them on board and look forward to watching their projects grow and bear fruit.

Dr Andrew Back (UQ) is developing new techniques for extracting models of meaning from limited data. His approach, Synthetic Language and Information Topology (SLAIT) AI, promises to dramatically accelerate a machine’s ability to understand and extract information from novel data and situations, using algorithms that build models of the information contained in that data from very limited samples. He has already applied his work to building models of language in text using very small samples and detecting unauthorised alterations that don’t fit those models.
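
To give a flavour of this kind of small-sample modelling, the sketch below is a generic illustration only (not the SLAIT algorithm): it trains a simple character-bigram model on a tiny reference text and flags segments whose statistics do not fit that model. The reference text, threshold and function names are illustrative assumptions.

```python
# Generic illustration (not SLAIT): build a character-bigram model from a
# very small reference sample and flag text segments that do not fit it.
from collections import Counter
import math

def train_bigram_model(sample: str):
    """Estimate character-bigram probabilities with add-one smoothing."""
    bigrams = Counter(zip(sample, sample[1:]))
    unigrams = Counter(sample)
    vocab_size = len(set(sample))
    def prob(a: str, b: str) -> float:
        return (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
    return prob

def surprisal(text: str, prob) -> float:
    """Average negative log-probability per character transition."""
    pairs = list(zip(text, text[1:]))
    return -sum(math.log(prob(a, b)) for a, b in pairs) / max(len(pairs), 1)

# A tiny reference text stands in for "very limited samples".
reference = "the quick brown fox jumps over the lazy dog and runs home again"
model = train_bigram_model(reference)
baseline = surprisal(reference, model)

# Segments with much higher surprisal than the reference are flagged as
# possible alterations; the 1.5x threshold is an arbitrary illustration.
for segment in ["the lazy dog runs home", "xqzv kj qq zzzt wub"]:
    status = "flagged" if surprisal(segment, model) > 1.5 * baseline else "consistent"
    print(f"{segment!r}: {status}")
```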

Dr Jessica Korte (UQ) is applying her expertise in Auslan (Australian sign language) to create technologies that support rich, real-time visual-gestural communication between machines and humans. Jessica’s research will develop the underlying theory associated with visual-gestural signing and implement this within a technology pipeline that promises to improve the lives of the deaf community in Australia, but that will also directly support the Centre’s desire to allow Australia’s soldiers to meaningfully interact with and control robots in the field.

Dr Pauline Pounds (UQ) believes in solving challenging problems using the simplest methods possible, and is a leading proponent of embodied intelligence, where difficult cognitive problems are resolved by exploiting a robot’s embodiment. Pauline’s research project involves developing and exploiting robotic whisker technology to allow autonomous systems to move and avoid obstacles in highly turbulent environments or in highly constricted indoor spaces without needing heavy cameras or complex, expensive lidar systems.
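
As a rough illustration of how such embodied, sensor-driven control can stay simple, the hypothetical sketch below (not Pauline's actual system; the sensor interface, gains and sign conventions are assumptions) maps a pair of whisker deflection readings directly to slow-down and turn-away commands, with no map, camera or lidar involved.

```python
# Hypothetical sketch of whisker-based reactive avoidance: raw whisker
# deflections are mapped directly to motion commands, with no map or vision.
from dataclasses import dataclass

@dataclass
class WhiskerReading:
    left: float   # normalised deflection, 0.0 (free) to 1.0 (fully bent)
    right: float

def reactive_command(reading: WhiskerReading,
                     cruise_speed: float = 0.5) -> tuple[float, float]:
    """Return (forward_speed, turn_rate); positive turn_rate means turn left."""
    contact = max(reading.left, reading.right)
    forward = cruise_speed * (1.0 - contact)   # slow down as contact increases
    turn = reading.right - reading.left        # steer away from the stronger contact
    return forward, turn

# Example: the right whisker brushes an obstacle, so slow down and turn left.
print(reactive_command(WhiskerReading(left=0.1, right=0.7)))
```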

You can read more about their projects on our website at www.tasdcrc.com.au/fellows. This blog will continue to bring you updates on and announcements about the Fellows’ work. Blog announcements will be reposted on TASDCRC Twitter and LinkedIn so you don’t miss anything!

Welcome to the TASDCRC Game-Changer Blog

Dear Readers,

To ensure timely updates on the work of Trusted Autonomous Systems, we are starting this blog version of Game-Changer news for public information. Posts will generally be accompanied by Twitter @tasdcrc or LinkedIn TASDCRC releases. Given the nature of our Centre, bringing together Defence, Industry and Research for the benefit of the ADF, we also welcome Game-Changer contributions from our Participants to provide broader awareness of progress and opportunities.

After nearly 12 months at the helm of the Centre I am proud of our achievements. Research and development is well underway and although most projects are in their early stages, much promise has been demonstrated. Engagement of Defence at all levels has increased, with new technologies being accelerated and developed under a new direct ADF funding model. We now have many Research Fellows and Research Associates attached to the Centre’s “common good” activities as well as projects, and we will be featuring them and their fields of research through this blog.

The Centre was subject to a detailed external review in February 2020 by an expert panel led by Prof Ian Chubb (former Chief Scientist of Australia). The outcome of that report, now received by the Centre’s board, is highly supportive and encourages Centre growth as a trusted Defence partner. I expect to advise more detail on the outcomes and directions from that review soon.

Unfortunately it was necessary to postpone our inaugural Symposium in May 2020 due to COVID-19, but we look forward to hosting this event on 27-29 April 2021 in Townsville. The Centre has been engaged in many webinars and seminars over the last few months, directly and through our common-good Activities. These have included leading the Australian Robotics Roadmap development for Defence, as well as the Safety and Trust chapters. I also recently supported the Center for Naval Analyses seminar with the US, where I had the privilege of being on a panel with former US Deputy Secretary of Defense the Honorable Robert Work, former IARPA Director Dr Jason Matheny and Dr Lawrence Lewis of CNA in Washington DC.

I look forward to keeping you posted more regularly on the work of our Centre.

Professor Jason Scholz
Trusted Autonomous Systems CEO