Quantifying the user’s trust in intelligent systems

Humans and robots can make excellent teams: robots can augment or compensate for limited human abilities, whereas humans excel at adapting to unknown situations. We therefore envision a future in which robots do not replace us, but collaborate closely with us. My goal is to investigate, through computational modelling, how control can be shared between physically interacting humans and robots based on their mutual trust in each other’s capabilities. How does physical collaboration change when one partner becomes unreliable or loses confidence in its own actions, for example due to system failure or operation in unknown circumstances? And how can we shift control between human and robot accordingly?
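One common way to formalise this kind of trust-based control sharing is linear arbitration, where the executed command blends the human's and the robot's commands with a weight derived from trust. The sketch below is purely illustrative and not from the text: the function names (`blend_command`, `update_trust`), the learning rate, and the simple exponential trust update are all assumptions.

```python
def blend_command(u_human: float, u_robot: float, trust: float) -> float:
    """Linear arbitration: trust in [0, 1] shifts control authority
    from the human (trust = 0) toward the robot (trust = 1)."""
    trust = min(max(trust, 0.0), 1.0)  # clamp to a valid blending weight
    return (1.0 - trust) * u_human + trust * u_robot


def update_trust(trust: float, robot_reliable: bool, rate: float = 0.1) -> float:
    """Hypothetical trust dynamics: move trust toward 1 after reliable
    robot behaviour and toward 0 after a failure."""
    target = 1.0 if robot_reliable else 0.0
    return trust + rate * (target - trust)


# Example: repeated robot failures erode trust, shifting control
# authority back toward the human's command.
trust = 0.8
for _ in range(5):
    trust = update_trust(trust, robot_reliable=False)
u = blend_command(u_human=1.0, u_robot=-1.0, trust=trust)
```

In this toy model, after five failures the blended command `u` moves toward the human's input; richer formulations would estimate trust from interaction data rather than a binary reliability flag.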