Output

2022

  • Siebert, L. C., Lupetti, M. L., Aizenberg, E., Beckers, N., Zgonnikov, A., Veluwenkamp, H., Abbink, D., Giaccardi, E., Houben, G., Jonker, C. M., Van den Hoven, J., Forster, D., Lagendijk, R. L. (2022). Meaningful human control: actionable properties for AI system development. AI and Ethics.
    Bridging the gap between philosophical theory and design & engineering practice by identifying four actionable properties for human-AI systems under meaningful human control.

2021

  • Peschl, M., Zgonnikov, A., Oliehoek, F. A., & Siebert, L. C. (2021). MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning. arXiv preprint arXiv:2201.00012. (accepted for AAMAS 2022)
     
  • Liscio, E., van der Meer, M., Siebert, L. C., Jonker, C. M., Mouter, N., & Murukannaiah, P. K. (2021). Axies: Identifying and Evaluating Context-Specific Values. In Proceedings of the 20th International Conference on Autonomous Agents and Multi Agent Systems (AAMAS), pp. 799-808.
    Identifying context-specific values supports the engineering of intelligent agents that can elicit human values and take value-aligned actions. The proposed hybrid (human and AI) methodology simplifies the abstract task of value identification into a guided value annotation process, performed by human annotators supported by natural language processing.
     
  • Siebert L.C., Mercuur R., Dignum V., van den Hoven J., Jonker C. (2021) Improving Confidence in the Estimation of Values and Norms. In: Aler Tubella A., Cranefield S., Frantz C., Meneguzzi F., Vasconcelos W. (eds) Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XIII. COIN 2017, COINE 2020. Lecture Notes in Computer Science, vol 12298. Springer, Cham. doi.org/10.1007/978-3-030-72376-7_6
    This work proposes a method to estimate a person's relative preferences over a given set of values in the context of the ultimatum game. Further, it proposes two methods to reduce ambiguity in profiling the value preferences of these agents via interaction.

2020

  • Aizenberg, E., & Van den Hoven, J. (2020). Designing for Human Rights in AI. Big Data & Society 7(2).
    Bridging socio-technical gaps by proactively engaging societal stakeholders to translate values embodied by human rights into context-dependent design choices through a structured, inclusive, and transparent process.
     
  • Van der Waa, J., Van Diggelen, J., Siebert, L. C., Neerincx, M., & Jonker, C. (2020). Allocation of Moral Decision-Making in Human-Agent Teams: A Pattern Approach. In Proceedings of the International Conference on Human-Computer Interaction 2020 (HCII 2020). In: Harris D., Li WC. (eds) Engineering Psychology and Cognitive Ergonomics. Cognition and Design. Lecture Notes in Computer Science, vol 12187. Springer, Cham.
    In the foreseeable future, agents must collaborate with humans to ensure that humans remain in control of, and thus responsible for, morally sensitive decisions. This work uses the concept of team design patterns to allocate moral decisions as a team task (agent and human), depending on properties such as moral sensitivity, available information, and time criticality.
     
  • Wang, X., Siebert, L. C., & Van den Hoven, J. (2020). Autonomous Shipping Systems: Designing for Safety, Control and Responsibility (extended abstract).  Proceedings of the 18th International Conference on Ethical and Social Impacts of ICT (ETHICOMP), p. 187.
    This work discusses Value-Sensitive Design as an approach to designing autonomous shipping systems such that the systems remain responsive to human operators' intentions and values.
     
  • Boyce, W. P., Lindsay, A., Zgonnikov, A., Rañó, I., & Wong-Lin, K. (2020). Optimality and limitations of audio-visual integration for cognitive systems. Frontiers in Robotics and AI, 7:94. doi: 10.3389/frobt.2020.00094
     
  • Zgonnikov, A., Abbink, D., & Markkula, G. (2020). Should I stay or should I go? Evidence accumulation drives decision making in human drivers. PsyArXiv preprint.
     
  • Melman, T., Beckers, N., & Abbink, D. (2020). Mitigating undesirable emergent behavior arising between driver and semi-automated vehicle. arXiv preprint arXiv:2006.16572, accepted as a spotlight talk at the RSS 2020 Workshop on Emergent Behaviors in Human-Robot Systems.

Ongoing

  • MSc project EEMCS by Shreyan Biswas on XAI - Jie Yang, Stefan Buijsman
  • MSc project IDE on opening up space for self-representation in the job application process - Evgeni, Lianne Simonse
  • MSc project Cognitive Robotics on designing for co-adaptation in physical human-robot collaboration - Niek, David

 

2020

  • Supervision of BSc seminar group on ‘Socio-technical approaches to responsible and inclusive AI design’ @EEMCS - Evgeni, Inald

  • Supervision of MSc project on “Empowering Academic Graduate Job Search” by Jeroen ter Haar Romenij @IDE - Elisa, Evgeni, Alessandro Bozzon
  • Supervision of internship (student from IMT Mines Ales, France) on “Ethical decision making for autonomous systems considering moral uncertainty” @EEMCS - Luciano, Catholijn
  • Supervision of MSc student on ‘Impact of control authority on the drivers’ perceived responsibility: a haptic shared control driving study’ @Cognitive Robotics - Niek, David

2019

  • Supervision of BSc seminar group on ‘Explaining and fooling deep learning models’ @EEMCS - Evgeni, Inald
  • Supervision of a group of 5 BSc students working on a literature survey on the “Ethical implications of the use of machine learning models in autonomous vehicles” - @EEMCS - Luciano

Public Outreach

2019

  • Beta Balie Cinema. Discussion session after screening of the movie HER - Luciano, Arkady.
  • Beta Balie Cinema. Discussion session after screening of documentary “More human than human” - Luciano.