- Siebert, L. C., Lupetti, M. L., Aizenberg, E., Beckers, N., Zgonnikov, A., Veluwenkamp, H., Abbink, D., Giaccardi, E., Houben, G., Jonker, C. M., Van den Hoven, J., Forster, D., & Lagendijk, R. L. (2022). Meaningful human control: actionable properties for AI system development. AI and Ethics.
Bridging the gap between philosophical theory and design & engineering practice by identifying four actionable properties for human-AI systems under meaningful human control.
- Peschl, M., Zgonnikov, A., Oliehoek, F. A., & Siebert, L. C. (2021). MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning. arXiv preprint arXiv:2201.00012. (accepted for AAMAS 2022)
- Liscio, E., van der Meer, M., Siebert, L. C., Jonker, C. M., Mouter, N., & Murukannaiah, P. K. (2021). Axies: Identifying and Evaluating Context-Specific Values. In Proceedings of the 20th International Conference on Autonomous Agents and Multi Agent Systems (AAMAS), pp. 799-808.
Identifying context-specific values supports the engineering of intelligent agents that can elicit human values and take value-aligned actions. The proposed hybrid (human and AI) methodology simplifies the abstract task of value identification into a guided value annotation process, in which human annotators are supported by natural language processing.
- Siebert L.C., Mercuur R., Dignum V., van den Hoven J., Jonker C. (2021) Improving Confidence in the Estimation of Values and Norms. In: Aler Tubella A., Cranefield S., Frantz C., Meneguzzi F., Vasconcelos W. (eds) Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XIII. COIN 2017, COINE 2020. Lecture Notes in Computer Science, vol 12298. Springer, Cham. doi.org/10.1007/978-3-030-72376-7_6
This work proposes a method to estimate a person’s relative preferences over a given set of values in the context of the ultimatum game. Further, it proposes two methods to reduce ambiguity in profiling the value preferences of these agents via interaction.
- Aizenberg, E., & Van den Hoven, J. (2020). Designing for Human Rights in AI. Big Data & Society 7(2).
Bridging socio-technical gaps by proactively engaging societal stakeholders to translate values embodied by human rights into context-dependent design choices through a structured, inclusive, and transparent process.
- Van der Waa, J., Van Diggelen J., Siebert, L. C., Neerincx, M., Jonker, C. (2020) Allocation of Moral Decision-Making in Human-Agent Teams: A Pattern Approach. Proceedings of the International Conference on Human-Computer Interaction 2020 (HCII 2020). In: Harris D., Li WC. (eds) Engineering Psychology and Cognitive Ergonomics. Cognition and Design. Lecture Notes in Computer Science, vol 12187. Springer, Cham.
In the foreseeable future, agents must collaborate with humans to ensure that humans remain in control of, and thus responsible for, morally sensitive decisions. This work utilizes the concept of team design patterns to allocate moral decisions as a team task (agent and human), depending on properties such as moral sensitivity, available information, and time criticality.
- Wang, X., Siebert, L. C., & Van den Hoven, J. (2020). Autonomous Shipping Systems: Designing for Safety, Control and Responsibility (extended abstract). Proceedings of the 18th International Conference on Ethical and Social Impacts of ICT (ETHICOMP), p. 187.
This work discusses Value-Sensitive Design as an approach to designing autonomous shipping systems such that the systems remain responsive to human operators' intentions and values.
- Boyce, W.P., Lindsay, A., Zgonnikov, A., Rañó, I., & Wong-Lin, K. (2020). Optimality and limitations of audio-visual integration for cognitive systems. Frontiers in Robotics and AI. 7:94. doi: 10.3389/frobt.2020.00094
- Zgonnikov, A., Abbink, D., & Markkula, G. (2020). Should I stay or should I go? Evidence accumulation drives decision making in human drivers. PsyArXiv preprint.
- Melman, T., Beckers, N., & Abbink, D. (2020). Mitigating undesirable emergent behavior arising between driver and semi-automated vehicle. arXiv preprint arXiv:2006.16572, accepted as a spotlight talk at the RSS 2020 Workshop on Emergent Behaviors in Human-Robot Systems.
- MSc Project EEMCS student Shreyan Biswas on XAI, with Jie Yang, Stefan Buijsman
- MSc project IDE on opening up space for self-representation in the job application process - Evgeni, Lianne Simonse
- MSc project Cognitive Robotics on designing for co-adaptation in physical human-robot collaboration - Niek, David
- Supervision of BSc research project group on ‘Bridging socio-technical gaps for responsible and inclusive AI design’ @EEMCS - Evgeni, Inald
- Co-supervision of MSc project on “The Meaning in Hiring: The potential loss of self-representation in AI hiring video interview systems” by Derek van der Ploeg @IDE - Niya Stoimenova, Lianne Simonse, Evgeni
- MSc Project 3mE student Klaas Koerten “Dissipating phantom traffic jams with haptic shared control”, supervised by Arkady Zgonnikov and David Abbink
- MSc Project EEMCS Markus Peschl on “Aligning AI with Human Norms: Multi-Objective Deep Reinforcement Learning with Active Preference Elicitation” - Luciano, Arkady, Frans Oliehoek
- Coach to the BSc Software project “Please open this AI black-box! Creating interfaces to make the inner workings of machine learning algorithms clearer” - Luciano
- Supervision of BSc seminar group on ‘Socio-technical approaches to responsible and inclusive AI design’ @EEMCS - Evgeni, Inald
- Supervision of MSc project on “Empowering Academic Graduate Job Search” by Jeroen ter Haar Romenij @IDE - Elisa, Evgeni, Alessandro Bozzon
- Supervision of internship (student from IMT Mines Ales, France) on “Ethical decision making for autonomous systems considering moral uncertainty” @EEMCS - Luciano, Catholijn
- MSc project on ‘Impact of control authority on the drivers’ perceived responsibility: a haptic shared control driving study’ @Cognitive Robotics - Niek, David
- Supervision of BSc seminar group on ‘Explaining and fooling deep learning models’ @EEMCS - Evgeni, Inald
- Supervision of a group of 5 BSc students working on a literature survey on the “Ethical implications of the use of machine learning models in autonomous vehicles” - @EEMCS - Luciano
- Beta Balie Cinema. Discussion session after screening of the movie HER - Luciano, Arkady.
- Beta Balie Cinema. Discussion session after screening of documentary “More human than human” - Luciano.
- Monthly 3-page columns on philosophy and AI for Filosofie Magazine; the first two are now public: www.filosofie.nl/het-nieuwe-gezicht-van-ai/ and www.filosofie.nl/de-computer-zegt-nee/, with new columns released monthly after a short delay (and shared on social media) - Stefan
- NWA Facebook Live session on AI & Control, Sep 15 www.facebook.com/nationalewetenschapsagenda/videos/407345294241309 - Stefan
- Knack Magazine interview on Responsible AI, www.knack.be/nieuws/belgie/wiskundefilosoof-stefan-buijsman-auto-s-zullen-pas-over-20-a-30-jaar-zelfstandig-rijden/article-longread-1780793.html - Stefan
- Studium Generale Delft, Art & Tech, Sep 26 - Stefan
- TiU AI Forward Forum, Live session on Superintelligence and limitations of current AI, Sep 30 - Stefan
- Keynote Het Zoekend Hert, philosophical society in Belgium, on Philosophy and AI, Oct 3 - Stefan
- BNR Nieuwsradio, expert panel on facial recognition, www.bnr.nl/podcast/breekt/10456260/breekt-er-is-geen-houden-aan-gezichtsherkenning-wordt-uiteindelijk-de-norm - Stefan
- Brainpool, Swedish non-profit for teenagers, talk on AI and philosophy, October 23 - Stefan
- Talk on “Responsible and Ethical AI” in the X TU Delft Series on Ethics of Technology, May 27 - Luciano
- Invited talk at the Symposium of Accountability, Responsibility, and Transparency (A.R.T) in AI on “How to keep AI under meaningful human control?” - Luciano
- Presentation at the Converge AI, Energy and Sustainability think tank “Can AI support customer empowerment in active distribution grids?” - Luciano