Ensuring that humans don’t become a machine's moral crumple zone

News - 20 December 2023 - Webredactie Communication

How do we design AI systems so that humans retain enough control? Years of work by Delft researchers and international colleagues have resulted in the first handbook on 'meaningful human control' for systems with autonomous properties. David Abbink, professor of haptic human-robot interaction and scientific director of TU Delft's interdisciplinary research institute AiTech, and Geert-Jan Houben, pro-vice rector of AI, Data and Digitalisation and leader of the TU Delft AI Initiative, talk about how Delft's research into 'meaningful human control' should lead to more responsible development and implementation of systems with autonomous properties, and how Delft has taken a leading role in this worldwide.

Jeremy Banner loved both cars and computers, and his wife would later say that the two passions came together perfectly in his Tesla Model 3. On 1 March 2019, the 50-year-old Banner was driving his Tesla down a county road in Florida (US). He had switched on the autopilot just 10 seconds earlier when a truck with a semi-trailer turned onto the road from a side road a little ahead of him. The truck was travelling at a much lower speed, and the Tesla shot under the semi-trailer at full speed. The roof was ripped off, and Banner was killed.

Jeremy Banner's tragic accident is one of many examples that raise a question in a world where more and more decisions previously made by humans are being placed in the hands of AI systems: how much control do humans still have over automatic or semi-automatic systems? Tesla may say that the human driver is always responsible, but how realistic is that when a car is running on autopilot and a sudden emergency occurs? How quickly can a human intervene?

Research on autonomous systems has now introduced the concept of 'meaningful human control'. Over the past decade, this concept has come to play an increasingly important role in TU Delft’s scientific research into systems with autonomous properties. With the advance of learning systems using artificial intelligence (AI) in everyday applications, its importance has only increased.

The concept of 'meaningful human control' sounds abstract, but I understand its importance for designing, building and using AI applications in the real world. Can you explain more?

Abbink: "At its core, 'meaningful human control' revolves around the view that responsibility should never be attributed to an 'autonomous system' but rather to people. There are actually no autonomous systems, only systems with autonomous properties that are developed, implemented, and used and abused by people. It's important that human users feel the right level of responsibility, and have a realistic chance to intervene. This is when we speak of meaningful human control – a concept that ethicists, business experts, designers and engineers have to consider together. It is both scientifically and socially complex.

We have long known from aviation how difficult it is to build responsibility and safety into system design, even in a field with a solid safety culture. Pilots are trained for unexpected situations that may arise with the autopilot, but can this easily be translated to automated driving? Manufacturers of cars with driver assistance, such as Tesla, say that the driver is responsible at all times, but the system design greatly affects how likely people are to intervene in real-life situations.

What you see in many accidents with semi-autonomous cars is that humans cannot realistically intervene in time. Basically, human beings risk being used as a moral crumple zone: we automate what is possible, and the end user has to sort it out if something happens that the system cannot handle. 'Meaningful human control' is all about how we can design systems so that both developers and users have enough knowledge, time and responsibility. This can ensure that the system's behaviour is ultimately socially acceptable."

Houben: "Asking if there is meaningful control over a system with autonomous properties is interesting from several perspectives: not only a philosophical perspective, but also engineering science and design perspectives. At Delft, we began exploring the concept of 'meaningful human control' in cars and aircraft, but we now see that it also plays a big role in AI systems for medical diagnostics or personnel recruitment. The different perspectives come together naturally here in Delft. We have engineers, software researchers, ethicists and designers, and domain experts in human-machine collaboration. On the one hand, we make it easier for people to meet and think together about 'meaningful human control'; on the other hand, we also consciously encourage further research."

What kind of research do you facilitate?

Abbink: "Within AiTech, five of our postdocs from different backgrounds have been researching 'meaningful human control' over the past few years. Precisely because they came from different disciplines, they first had to work hard to integrate each other's methodologies. Every week they met in what we call 'agora' meetings: open forums where anyone can come and contribute to the discussion. They also organised symposiums to bring together even more people with the same interests. From the very beginning, we wanted to write a kind of position paper setting out how to build systems that are under meaningful human control. A key question was how to operationalise a philosophical concept so that engineers can work with it. After three years, we have a joint paper that I am very proud of – it's more than just a paper because it represents the concrete setting up of interdisciplinary research."

Houben: "The efforts of the past few years have resulted in many more people in Delft embracing and using the concept of 'meaningful human control', including people who were previously purely concerned with AI, or energy, or healthcare. Ultimately, we want to improve not only the control of a given AI-based system, but also the functioning of such a system in its environment. For example, we are investigating the impact of AI-based recruitment systems and how we can ensure, when developing algorithms, software and innovation processes, that the final system is under more meaningful control."

How do you propose operationalising the concept of 'meaningful human control' so engineers can work with it?

Abbink: "In our position paper, we defined four features to help with this. The first is an explicit moral operational design domain. To illustrate, we chose two practical applications: automated driving and AI-based employee recruitment. Consider a semi-autonomous car, where you can define a technical domain within which the car drives autonomously, for example on the motorway. If an unexpected situation arises, such as a motorcyclist suddenly crossing in front, the human has to take over. We can therefore delineate a moral operational design domain based on the situations where human judgement is needed. Equipped with this, we can delineate what humans are responsible for and what the machine is responsible for. We can do the same with AI-based recruitment or other human-AI systems.
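A minimal sketch of what making such a moral operational design domain explicit could look like in software (the conditions, names and thresholds below are our own illustrative assumptions, not taken from the position paper):

```python
from dataclasses import dataclass

@dataclass
class DrivingSituation:
    road_type: str          # e.g. "motorway" or "county_road"
    speed_kmh: float
    crossing_traffic: bool  # e.g. a vehicle turning in from a side road

def within_moral_odd(s: DrivingSituation) -> bool:
    # Hypothetical moral operational design domain: the explicit set of
    # situations in which the automation, rather than the human, may be
    # in control. Everything outside it is delegated to the human.
    return (
        s.road_type == "motorway"    # no side roads or crossing traffic expected
        and s.speed_kmh <= 120.0     # within the validated speed range
        and not s.crossing_traffic   # no situation requiring human judgement
    )

def control_decision(s: DrivingSituation) -> str:
    # Outside the domain, the system must hand over early enough for the
    # driver to realistically intervene.
    return "automation_active" if within_moral_odd(s) else "request_human_takeover"

# The county-road scenario from the opening of this article falls outside it:
print(control_decision(DrivingSituation("county_road", 110.0, True)))
# -> request_human_takeover
```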

For a human-AI system to function well, both the people involved and the AI system must also possess a certain representation of the tasks, role divisions and desired outcomes, as well as of the environment and each other's capabilities and constraints. This is the second important feature. Such representations are often called 'mental models', and they build on shared mental models among all the people involved in the human-AI system. We recommend explicitly including conversations about this feature during the human-AI system development process – something that does not happen automatically."
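One way to make this concrete during development is to write the assumed role division down explicitly for each party and compare the two. The sketch below is our own illustration, not a method from the paper, and the task names are invented:

```python
# Each party's assumed role division, written out explicitly so that
# mismatches can be detected and discussed during development.
human_model = {
    "monitor_environment": "human",
    "keep_within_lane": "system",
    "handle_crossing_traffic": "system",  # the driver assumes the car copes
}
system_design_model = {
    "monitor_environment": "human",
    "keep_within_lane": "system",
    "handle_crossing_traffic": "human",   # the designers assume the driver copes
}

mismatches = {
    task for task in human_model
    if human_model[task] != system_design_model.get(task)
}
print(mismatches)  # {'handle_crossing_traffic'}: exactly the kind of gap
                   # behind accidents like the one described above
```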

Houben: "This is the point where all those different disciplines we talked about earlier come together. It is precisely where different scientific disciplines can start to reinforce each other."

What are the last two features needed to operationalise the concept of 'meaningful human control'?

Abbink: "The third feature is that the responsibility assigned to a human being must be proportional to that human being's ability and authority to actually control the system. We believe that you can only be held responsible as a human being if you also have a practical and realistic ability to intervene. With this in mind, you cannot hold the driver of a semi-autonomous car responsible for an accident when they do not have enough time to regain control of the car from the autopilot.
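A crude way to reason about this proportionality is to compare the time available with the time a human realistically needs to regain control. The figure used below is an assumption for illustration only; empirical takeover-time estimates vary widely:

```python
# Assumed for illustration only; empirical takeover-time estimates vary widely.
HUMAN_TAKEOVER_TIME_S = 3.0

def responsibility_is_proportional(time_available_s: float) -> bool:
    # A human can only fairly be held responsible for intervening if the
    # time available exceeds the time a human needs to regain control.
    return time_available_s >= HUMAN_TAKEOVER_TIME_S

# A hazard that appears with only 1.5 seconds of warning leaves no
# realistic ability to intervene:
print(responsibility_is_proportional(1.5))  # False
```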

Fourth and finally, the actions of the AI system should always be traceable to at least one person who is also aware of their moral responsibility."
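In software terms, this fourth feature could amount to refusing to record a consequential system action without at least one named responsible party. Again a sketch of our own, with invented names, rather than anything prescribed by the paper:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ActionRecord:
    action: str
    timestamp: datetime
    responsible: list[str]  # named persons or organisations, never empty

def log_action(action: str, responsible: list[str]) -> ActionRecord:
    if not responsible:
        # If no one can be named, the system is not under meaningful human
        # control for this action, and should not be passed off as such.
        raise ValueError(f"no responsible party recorded for: {action}")
    return ActionRecord(action, datetime.now(timezone.utc), responsible)

record = log_action("reject_candidate", responsible=["hr_officer_jansen"])
```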

I imagine that AI software in particular can be so complex that linking to one person is very difficult…

Abbink: "Absolutely. That is why we specify 'at least one', and in practice this often means organisations and groups. Things can become diffuse with software that passes through many hands, spreads quickly and is modified quickly. Our point is that all the organisations and people involved should be aware of their responsibility during design or implementation. If they are not, you have to recognise that the system is no longer under meaningful control. You cannot pretend that it is, or pass it off as such to the end user."

What other results have you achieved?

Abbink: "We have published many papers in the fields of both mobility and recruitment algorithms. For instance, we used simulation studies to show how people assign responsibility to automatic systems when driving. These show that even in situations where the driver could not realistically have intervened, people still say the driver is responsible. It seems that people are unaware of the impact of automated driving on anyone's ability to intervene when necessary. There is a gap between how ordinary people see responsibility and what is realistic in practice."

What can be done about this?

Abbink: "For cars, you can inform people better about the challenges that automation poses for drivers, and you can adapt driver training. With recruitment algorithms, we have explored how you can design systems in such a way that the people who use them, for example within human resources, better understand the basis used for decisions. They are then equipped to reject the advice."
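As an illustration of that design idea (our own sketch; the fields and weights are invented), a recruitment system could return its advice together with the basis for it, and require an explicit, logged human decision:

```python
# The system's advice, returned together with the basis used for it, so the
# recruiter can see what drove the recommendation.
advice = {
    "candidate": "A-1042",
    "recommendation": "invite_to_interview",
    "basis": {  # invented feature weights, for illustration
        "skills_match": 0.45,
        "years_experience": 0.35,
        "education_level": 0.20,
    },
}

def human_decision(advice: dict, accept: bool, reason: str) -> dict:
    # The final decision is always an explicit, recorded human act;
    # the recruiter is equipped to reject the advice.
    return {
        "candidate": advice["candidate"],
        "decision": advice["recommendation"] if accept else "advice_rejected",
        "decided_by": "human_recruiter",
        "reason": reason,
    }

print(human_decision(advice, accept=False,
                     reason="basis overweights formal education"))
```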

What do you see as the scientific challenges for the future?

Abbink: "The four features we have described are a first important step towards fully operationalising the concept of 'meaningful human control'. We put these alongside research by colleague Simeon Calvert from Civil Engineering and Geosciences, who has been working on the structured conditions needed to operationalise the concept for automation in transport systems – not only on the road but also on water and in the air.

The big challenge we as a community have not yet got around to is really translating all of these features and conditions into practical system design. Ideally, you want to experience the impact of more or less meaningful control – and that has not yet happened, either in Delft or worldwide. In the coming years, we want to use concrete experiments to show empirically that if you change the process parameters across manual work, technology, algorithms and even within an organisation, the experience of human responsibility changes in this way or that. I call this 'where the rubber hits the road'. It's what we as researchers are going to work on now."

Houben: “David has described the fundamentals, and at the same time, we would also like to know how the concept of 'meaningful human control' should be shaped and further developed in specific Delft application domains, such as energy, care and mobility. We already see that our research of the past few years is starting to trickle down to Delft’s 24 AI labs. Some of the postdocs who laid the foundations for the research into 'meaningful human control' have moved on to other positions in those labs."

How does Delft's work on meaningful human control compare with this field elsewhere in the world?

Abbink: "I think our contribution is in trying to stimulate international cooperation to achieve sharper debate, better research and more collaboration. We have achieved this concretely with international colleagues through our initiative to write the first handbook on meaningful human control. It brings together perspectives from ethics and philosophy, law and governance, and design and engineering, among others. People from different backgrounds and application areas have discussed and reviewed each other's papers to arrive at this book. The handbook will be published in December this year, and in spring 2024 we are organising a symposium to bring together even more international researchers around the theme. Through all our activities, we are collectively thinking more holistically about meaningful control.

I feel there is a movement towards more debate and consensus outside our network as well, but we are not there yet. Some people still find the concept insufficiently operationalised, while others think responsibility should simply be settled legally. But I am confident that our typically Delft approach – integrating different perspectives and translating concepts into practical applications – will gain more ground."

More interviews in this series

This interview is part of a series. The other interviews in the series are with Geert-Jan Houben and Jeroen van den Hoven about developing and maintaining values in AI and in a digital society, with Geert-Jan Houben and Alessandro Bozzon about AI and design, and with Arie van Deursen about AI and software engineering.

More about Meaningful Human Control

TU Delft is establishing a Centre for Meaningful Human Control over AI. Its main goal is to provide a robust understanding, connections, and success stories that can support shaping the future of AI development in a beneficial and responsible manner. Expected launch in Spring 2024.

More about AI at TU Delft

At Delft University of Technology, we believe that AI technology is vital to create a more sustainable, safer and healthier future. We research, design and engineer AI technology and study its application in society. AI technology plays a key role in each of our eight faculties and is an integral part of the education of our students. Through AI education, research and innovation we create impact for a better society. Visit our website to find out what is happening in research, education and innovation in AI, Data & Digitalisation: www.tudelft.nl/ai