The Digital Ethics Centre works with various organizations on research to operationalize digital ethics in practice, including the following projects:
Interested in working with us as well?
AI ethics in Healthcare
Staff shortages and the constant drive to provide high-quality medical care are two of the most important reasons for the expected sharp increase in the application of artificial intelligence (AI) in healthcare in the coming years. By launching the first healthcare AI Ethics Lab, Erasmus MC and TU Delft put the focus on ethically responsible and clinically relevant AI that will positively impact both patient care and healthcare workers.
Ethics of Digital Experiments
Digital technologies are frequently tested in sandboxes, living labs and field labs. While this limits their impact on society during testing, ethical guidelines are still needed. Which values are important and which norms hold for digital experiments? How should field labs be governed? PhD student Joost Mollen conducts a four-year study into these questions, in collaboration with the Dutch province of Zuid-Holland.
Design for Responsibility in E-Gov AI ecosystems
The use of autonomous AI systems in (e-)government raises important questions about the distribution of responsibility. When an AI system is used to advise decision-making, how does this affect who is responsible? What if users of the system fail to meet the requirements to be responsible? PhD student Antonia Sattlegger of the TU Delft Digicampus works on these questions at the intersection of philosophy and public administration. This project is a collaboration with ICTU, an independent IT consultant and executor within the Dutch government.
Data & AI Ethics in the Social Security Sector
To administer social benefits, UWV (the Dutch Employee Insurance Agency) increasingly uses data-driven analyses. The digital tools developed for this are based on sensitive personal data, and their outcomes can have serious consequences for citizens who are already in a vulnerable position. It is therefore important that these digital technologies respect the central values of fairness, non-discrimination, privacy and human dignity. Eveline Jacobson and a new PhD student research the ethical use of digital technology in the social security sector.
Meaningful Human Control & Autonomous Weapon Systems
Artificial intelligence is increasingly used by the Dutch Ministry of Defence, both in weapon systems and in non-lethal technologies. How can meaningful control be maintained over these autonomous systems? What other values need to be guaranteed for responsible use of AI by the Ministry of Defence? A new PhD student at the Delft Digital Ethics Centre will engage with these issues.