Project 6

How can humans remain in control of artificial intelligence (AI)-based systems designed to have autonomous capabilities? Such systems are increasingly ubiquitous, creating benefits, but also undesirable situations in which moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address such responsibility gaps and to mitigate them by establishing conditions that enable a proper attribution of responsibility to humans (e.g., users, designers and developers, manufacturers, legislators). However, while there is some agreement on the need for some form of human control over AI systems, views diverge, and often conflict, on what makes human control meaningful. In this project, we address fundamental questions about the concept of meaningful human control from a multidisciplinary perspective.

Activities:

  • Actionable properties for AI systems under meaningful human control.

In this paper, we address the gap between philosophical theory and engineering practice by identifying four actionable properties that AI-based systems must have to be under meaningful human control. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which it ought to operate. Second, the humans and AI agents within the system should have appropriate and mutually compatible representations. Third, the responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties are necessary for AI systems under meaningful human control, and we suggest possible directions for incorporating them into practice. We illustrate the properties with two use cases: automated vehicles and AI-based hiring. We believe these four properties will help practically minded professionals take concrete steps toward designing and engineering AI systems that facilitate meaningful human control and responsibility.

  • Epistemic requirements for meaningful human control. Team members: Stefan Buijsman, Herman Veluwenkamp. 

An important precondition for meaningful human control is that humans and AI agents have compatible representations, and that humans are aware when they should take additional actions to reach the desired outcome. This project investigates the epistemic requirements that enable meaningful human control: what information can, and should, we give users so that they are in control? In this context we look at questions of explainability, at information that can help users calibrate their trust in intelligent systems, and at information that can alert users when interventions are necessary.
On the information that can help users determine when to intervene, see Buijsman, S., & Veluwenkamp, H. (2022). Spotting when algorithms are wrong.

  • Multidisciplinary Research Handbook on Meaningful Human Control over Artificial Intelligence Systems. Editorial team: Jeroen van den Hoven, David Abbink, Filippo Santoni de Sio, Luciano Cavalcante Siebert, Daniele Amoroso, and Giulio Mecacci. Expected: 2023.

This handbook represents the first encompassing overview of the concept of meaningful human control, incorporating three main disciplinary perspectives: (i) philosophy and ethics, (ii) law and governance, and (iii) design and engineering. In doing so, we do not claim to unify the debate or, even less, to defend or endorse one particular notion of meaningful human control. Rather, we aim to combine bottom-up insights from many different perspectives in a single multidisciplinary handbook. Since different application scenarios entail different requirements for control, and present at least partially different problems and context-specific approaches, we ask specialists from each of the three fields to share their perspective on five cases: (i) automated intelligent mobility, (ii) recommender systems and AI-supported deliberation, (iii) cure and care robotics, (iv) autonomous warfare, and (v) emerging applications and artificial general intelligence.