AiTech Agora

The ancient Greek word Agora refers to a public open space used for assemblies and markets. It captures the informal nature of our weekly meetings: a place where knowledge and ideas are exchanged and engaging conversations take place.

  • Due to the COVID-19 pandemic, we will hold our meetings online for the foreseeable future
  • If you wish to join our upcoming meetings, please subscribe to the AiTech Agora mailing list here, or just send an email with a subscription request to aitech@tudelft.nl. You will then start receiving weekly invitations containing the link to the virtual meeting room and the password
  • If you haven't received the invitation through our mailing list and want to join a particular meeting at short notice, please email aitech@tudelft.nl 
  • Presentation slides from our past meetings are available here. We also record some of our online meetings; the recordings are available here

Next meeting

October 28, 13:00 – 14:00 CET

Ibo van de Poel: Embedding values in AI systems

Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I have recently proposed an account for determining when an AI system can be said to embody certain values (see https://link-springer-com.tudelft.idm.oclc.org/article/10.1007/s11023-020-09537-4). I will briefly outline the account and the main assumptions and desiderata on which it is based. I will then discuss specific challenges I see for embedding values in AI systems, as well as potential design strategies; these are topics on which I would very much welcome your views and input.

Upcoming meetings

October 28: Ibo van de Poel

October 28, 13:00 – 14:00 CET

Ibo van de Poel: Embedding values in AI systems

November 11: Jens Kober

November 11, 13:00 – 14:00 CET

Jens Kober: TBA