Meaning Transformation in AI-based Systems
By Beth Cardier, TAS Fellow
How can we know whether a transfer of meaning between a person and a machine is meaningful enough to support morally responsible control?
This question was posed by NATO’s Science and Technology Organisation in a workshop run in collaboration with the Netherlands Organisation for Applied Scientific Research (TNO) and Trusted Autonomous Systems (TAS). The topic was Meaningful Human Control of AI-based Systems – Key Characteristics, Influencing Factors and Design Considerations (further details below). On this panel, Dr Kate Devitt and I spoke on the complexity of determining meaning in an open world.
From my perspective as a narrative analyst, the question revolves around the transfer of information between humans and machines. When humans communicate with each other, one person must successfully transfer what’s in their head to another person’s mind in order for correct interpretation to be possible. Humans also draw on surrounding contexts for accurate interpretation, using an awareness of past, future, and theory of mind to receive the correct meaning. When humans transfer information or instructions to machines, however, machines are not able to process these complexities, so key information can be lost.
There is also a problem of asynchronicity in human-machine teams, unless the machine can genuinely adapt. During the machine’s construction, engineers created the channels of meaning-production it would use long before it even met its human co-worker. The human has to compensate for this lag, adjusting for a machine that relies on pre-determined meaning structures and other, out-of-sight contexts related to its design.
An ordinary user understands this disconnect, even though they might not consciously analyse it. It is why machines still receive aghast reactions on social media when they try to emulate humans, regardless of how amusing or charming their presentations are.
This is one of the reasons I focus on adaptive communication in my research. My goal is to introduce a new suite of information structures to human-machine information exchange so that our technologies can be more flexible. When humans exchange information with each other, narrative emerges because the language tokens we use are inadequate. Stories form a bridging structure between you and me, or between past and present, and are the means of bringing another person along when our circumstances exceed an initial plan. Science has not fully exploited these adaptive information structures yet.
For example, consider these transformations. Information can have properties of water, taking the shape of the vessel it fills, that vessel being context. Or information can have properties of a vine, reaching beyond itself to grow from one place to the next. Or maybe information is a knife, dividing the swim of experience into discrete objects, and then cutting from such a different angle that reinterpretation is required. We want our machines to act reliably in the open world but the world itself is not reliable. How can we maintain meaningful communication with machines, when accurate interpretation requires a vine or a knife or a river?
At a Glance
- Workshop: NATO’s Science and Technology Organisation ran a workshop in collaboration with the Netherlands Organisation for Applied Scientific Research (TNO) and Trusted Autonomous Systems (TAS).
- Focus: Meaningful Human Control of AI-based Systems – Key Characteristics, Influencing Factors and Design Considerations
- When: October 27, 2021
- Where: Berlin and online.
- Speakers: Dr Daniele Amoroso of the International Committee for Robot Arms Control; Dr Leon Kester of TNO; Dr Luciano Cavalcante Siebert, assistant professor at the Interactive Intelligence Group at Delft University of Technology; Dr Kate Devitt from TAS; and Dr Beth Cardier from Griffith University and TAS.
Image: Future Memory, Oil on linen, 122 x 137 cm, 2021 by Kathryn Brimblecombe-Fox