Responsible Intelligent agents & Intelligent human/agent-agent interaction
Our mission is to understand and engineer collaboration between humans & agents. We develop theories, models and methods of interactive intelligence by combining methods from AI and the social sciences.
About the group
The Interactive Intelligence (II) section focuses on socially interactive, intelligent agents. We research the intelligence that underlies, and co-evolves during, the repeated interactions of human and technology “agents” who cooperate to achieve a joint goal. Our research program aims for synergy and social interaction between humans and technology, to empower humans in their social context. The technological challenges we face arise from the need to integrate Artificial Intelligence, Cognitive Engineering, and the behavioural sciences. In particular, the challenge is to develop socially aware agents that co-adapt and co-learn over time through interaction with humans. Social awareness implies context-awareness: the knowledge to interpret the physical situation in social terms, and the knowledge to behave in a distinctive, individual way that is personalized towards those the agent interacts with. In this manner, we endeavour to develop interactive agent technology that empowers individuals and groups to deal with societal and personal challenges such as the growing need for sustained self-management in healthy ageing, safety, and life-long education.
Behaviour modelling, Mental models/ToM, Responsible AI/XAI, Intelligent agents, Multi-modal perception systems, Knowledge representation x ML, Socio-cognitive engineering, Humane/Hybrid AI, Long-term interaction, Situated awareness.
We are part of the Department of Intelligent Systems.
04 October 2022
Drivers of partially automated vehicles are blamed for crashes that they cannot reasonably avoid
People seem to hold the human driver primarily responsible when their partially automated vehicle crashes. But is this reasonable? Researchers Niek Beckers, Luciano Cavalcante Siebert, Merijn Bruijnes, Catholijn Jonker & David Abbink from the AiTech initiative investigated the apparent mismatch between the public’s attribution of blame and findings from the human factors literature regarding humans’ ability to remain vigilant during partially automated driving.