Workshop series “Issues in XAI #4”: Explainable AI: Between ethics and epistemology

23 May 2022 08:00 to 25 May 2022 17:00 - Location: Delft X – Building 37

We warmly invite you to participate in the fourth instalment of the interdisciplinary workshop series ‘Issues in Explainable AI’, dedicated to exploring the various facets of research on improving our understanding of AI systems.

This instalment takes place at TU Delft from 23 to 25 May and focuses on the interplay between the epistemic challenges of artificial intelligence and its ethical use. How can explainability support us in meeting the challenges of ethical use? And to what extent are there obligations for AI systems to be explainable? Presentations will explore, among other things, when explainability is required and how explainability relates to trust, accountability, and knowledge, and will consider case studies in health care. Furthermore, four keynotes will be given by Filippo Santoni de Sio, Federica Russo, Judith Simon and Otávio Bueno.

All interested participants are welcome, either in person or online. For more details, including the programme, visit the workshop website: https://juanmduran.net/xai4/

Call for Abstracts: Workshop series “Issues in XAI #4”: “Explainable AI: Between ethics and epistemology”

Faculty of Technology, Policy and Management – Delft University of Technology
23–25 May 2022

Submission deadline: 15 January 2022

The workshop focuses on the normative and epistemic aspects of explainable AI (XAI). The two are especially relevant to each other in this case: the goal of XAI is first and foremost epistemic, namely to provide knowledge or understanding of the inner workings of AI models, yet there are also relevant normative questions about transparency, responsibility, and accountability that interact with XAI. In this respect, the workshop aims at a synergy between epistemological concerns and non-epistemological ones (e.g., ethical, political, economic, societal).

On the one hand, the epistemic status of XAI tools can help inform their role as a solution to non-epistemological, normative questions. If current XAI tools fail to provide understanding of the inner workings of AI models, e.g. yielding only limited knowledge of the importance of input features, what role can they play in facilitating meaningful human control? To what extent can they support human agency and clarify questions of accountability? Being clearer on the epistemic status of users can yield more fine-grained answers to these philosophical questions.

On the other hand, the normative questions can further inform what the appropriate epistemic goals are for (not yet developed) XAI tools. If the normative questions turn out to require a specific epistemic status with respect to the model that is used, then this can support epistemological discussions on how to reach that status. What would an explanatory logic for XAI look like that meets the epistemic and non-epistemic standards required of it? How do normative dimensions of epistemic notions impact the epistemological debate on XAI?

This range of topics at the intersection of the epistemology and ethics of XAI is important and yet largely underdeveloped. With this workshop we hope to bring these two parts of philosophy closer together. While the workshop is not focused on one specific topic, there is a special interest in medical AI.

We invite submissions from all related academic fields, including philosophy of (computer) science, epistemology, political and moral philosophy, political theory, legal theory, and social theory. Possible questions/topics include:

  • The logic of scientific explanation for/in AI
  • The epistemic and moral goods expected from explaining AI (e.g., understanding, knowledge, moral justification)
  • Trustworthy AI: benefits and limits of transparency, accountability, explainability, and computational reliabilism
  • Which epistemic and non-epistemic values (social, economic, political, moral, etc.) are relevant for XAI, and to what extent do explanations in AI affect non-epistemic values?
  • Are responsibility, accountability and contestability possible without XAI?
  • What forms of backward- and forward-looking responsibility are tailored to XAI and notions of trustworthiness?
  • How can forms of epistemic injustice (hermeneutical, testimonial, and otherwise) be ameliorated? 

This list is non-exhaustive, and submissions on related topics are welcome.

If you are interested in participating in this expert workshop, please submit an anonymized abstract of no more than 500 words to EasyChair (https://easychair.org/my/conference?conf=eexai2021), along with an email including your name, title, and affiliation. Participants will be asked to give a presentation of their paper (25 min presentation + 20 min Q&A). The authors of all abstracts will be invited to submit a full paper to a special issue, possibly in Ethics and Information Technology.

The workshop will take place in person at Delft University of Technology, with online participation available for those who cannot travel.

Keynote Speakers

  • Filippo Santoni de Sio (TU Delft)
  • Federica Russo (University of Amsterdam)
  • Judith Simon (University of Hamburg)
  • Otávio Bueno (University of Miami)

Best student paper award

We will offer a best student paper award covering up to €500 towards attending the workshop (travel + hotel). Interested students must first submit an abstract and, upon acceptance, will be asked to submit their paper (ca. 5,000 words) for blind review by 15 March. The winner will be announced during the workshop. Students are also encouraged to submit a poster for exhibition; a round of discussion around the posters is planned.

Organization

The workshop is sponsored by the European Research Infrastructure SoBigData++, the European Network of Human-Centered Artificial Intelligence Humane-AI, the KNAW Evert Willem Beth Foundation, and the TU Delft TPM-AI Lab. This workshop is part of the workshop series “Issues in Explainable AI”.

Key dates

  • Abstract submission deadline: 15 January 2022
  • Notification to participants: 1 March 2022
  • Paper submission for the best student paper award: 15 March 2022
  • Workshop: 23–25 May 2022

If you have any questions regarding the workshop, please contact any of the organizers:

Juan M. Durán: j.m.duran@tudelft.nl
Stefan Buijsman: s.n.r.buijsman@tudelft.nl
Giorgia Pozzi: g.pozzi@tudelft.nl
Jonne Maas: j.j.c.maas@tudelft.nl