Uncertainty as a driving force to meaningful human control over autonomous systems

Autonomous systems must adapt their behavior and operation to the particular circumstances they encounter. With advances in machine learning, sensors, and connectivity technologies, the range of applications for autonomous systems is broadening, and their implications raise growing concerns in society. We need to ensure that such systems remain under meaningful human control, i.e. that humans, not computers and their algorithms, are ultimately in control of, and thus morally responsible for, relevant actions.

Human-machine interaction and compliance with societal and ethical norms may be limited by a poor understanding and modeling of human (moral) reasoning processes and ethics. Incorporating human-centric moral reasoning, together with the ability to handle uncertainty at the (meta)reasoning level, into autonomous systems could counterbalance unforeseen and undesirable shortcomings of Artificial Intelligence (AI) and robotics.

In this project, we aim to incorporate uncertainty (e.g. normative uncertainty, context-dependency, and bounded rationality) into autonomous systems to achieve meaningful human control. Building on an extensive body of knowledge on moral responsibility and ethical theories, we will propose methods and metrics to support the design and engineering of accountable and trustworthy autonomous systems with some degree of “conscience”. We will apply state-of-the-art machine learning and agent-based simulation, using features inspired by human cognition and sociological processes.
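To make the notion of normative uncertainty concrete, the following is a minimal, hypothetical Python sketch, not the project's actual method: an agent holds credences (degrees of belief) over several ethical theories, each theory scores the candidate actions differently, and the agent selects the action with the highest credence-weighted moral score. All theory names, credences, action labels, and scores below are illustrative assumptions.

# Hypothetical illustration of decision-making under normative uncertainty.
# Credences: the agent's degrees of belief in each ethical theory (sum to 1).
credences = {"utilitarian": 0.5, "deontological": 0.3, "virtue": 0.2}

# Each theory's (made-up) evaluation of the candidate actions.
scores = {
    "utilitarian":   {"proceed": 0.9, "yield_to_human": 0.6, "stop": 0.2},
    "deontological": {"proceed": 0.1, "yield_to_human": 0.9, "stop": 0.7},
    "virtue":        {"proceed": 0.4, "yield_to_human": 0.8, "stop": 0.5},
}

def expected_moral_value(action: str) -> float:
    # Credence-weighted score of an action across the ethical theories.
    return sum(credences[t] * scores[t][action] for t in credences)

actions = ["proceed", "yield_to_human", "stop"]
best = max(actions, key=expected_moral_value)
print(best, {a: round(expected_moral_value(a), 2) for a in actions})

In this toy example the theories disagree, and the credence-weighted comparison favors deferring to the human operator, which is one simple way such reasoning could connect to meaningful human control; richer treatments would also model context-dependency and bounded rationality.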

Master's graduation project: Moral Uncertainty for Autonomous Systems. We are looking for candidates!

More information here.