Transparent & Traceable AI in Human-AI Teamwork
The AI*MAN Lab studies and aims to improve many aspects of human-AI teamwork. Consider, for example, a search-and-rescue team in which autonomous drones and human rescuers collaborate to map an unknown search area, localise victims, and share search-and-rescue tasks efficiently. Alternatively, consider a social robot that interacts creatively and in a human-like way, assisting people with conditions such as autism or dementia. Such a robot would be able to understand what a human might think or feel, and explain itself to the user in return.
Experience shows that substantial performance improvements are possible in industry, education, healthcare and many other domains when AI and humans work together, combining human instincts for effective decisions in unfamiliar situations with fast, logical AI decision-making. In a human-AI team, decisions made by either the human or the AI agent may seem illogical to the other party if only individual goals are considered, so a mutual understanding of each other's decision-making processes and actions is critical in pursuit of common team goals.
Part of our research therefore involves modelling the way humans think and building logical models that help AI agents understand their human teammates. We develop AI agents that can make decisions beneficial to the entire team and make these decisions transparent to humans.
The AI*MAN Lab is part of the TU Delft AI Labs programme.
Ongoing Master Projects
- MDP-Based Control of Socially Assistive Robots (AE)
- Mathematical Modelling of Theory of Mind (AE)
- Optimization-based control of search-and-rescue robots (AE)
- Explainable AI and Human-AI Teaming (EEMCS)
- Meaningful human control in human-agent teamwork (EEMCS)