Today’s engineers create systems that are increasingly equipped with artificial intelligence technologies. Autonomous behavior of cars, robots, and decision support algorithms is becoming a reality. Our vision is that scientists should not only research the technology that makes intelligent autonomy possible, but also act upon the responsibility to ensure that the design, engineering, and use of such systems embrace human values and meaningful human control.
Meaningful human control is particularly important when systems fail or conflict with the normative foundations of society, social conventions, and human acceptability. We believe these challenges demand a multidisciplinary effort, bringing together researchers across a wide range of fields. Our aim is to answer the question of how to build autonomous intelligent systems that collaborate with humans towards societal and economic prosperity and the sustainable development of our planet. Our goals are to:
- Understand the implications of meaningful human control for the science, design, and engineering of autonomous intelligent systems
- Build, test, break, and learn from systems under meaningful human control in practice
- Develop educational programs on the use of meaningful human control in autonomous intelligent systems