Collectives and AI systems: Agency and responsibility (workshop)

03 November 2022, 09:00 – 18:00 - Location: AI Lab

In this hybrid workshop, we explore the parallel between two cases of non-human agents that make an important difference to the world: collectives and AI systems. Are humans always responsible for the actions of these entities, or could these entities be held responsible themselves? Can these entities be intentional and moral agents?

This hybrid workshop (online and on-site), organized by the TU Delft Digital Ethics Centre, features two keynote talks:

Technological and collective responsibility gaps

Hein Duijf (LMU Munich)

Do responsibility gaps exist? That is, are there situations where a harmful outcome obtains, yet no individual can be held responsible for it? The widespread integration of machine learning algorithms, the cumulative effect of human actions on climate change, and the difficulty of assigning blame within organizations all suggest that allocating responsibility may be hard or even impossible. In this talk, I will explore the analogy between technological and collective responsibility gaps. I will present a diagnosis of the problem and discuss some viable responses to it.

Group Agency and Artificial Intelligence

Christian List (LMU Munich)

The aim of this exploratory paper is to discuss a sometimes recognized but still under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificial intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.