Robots and Society

Robots and AI do more than what they were designed to do. The sociotechnical systems perspective helps to explore the ethical and governance dimensions of what seem like purely technological concerns, avoiding artificial gaps between them on the path to responsible and comprehensive engineering.

Dr. Olya Kudina

Course Contents

‘Robots and Society’ takes a critical look at the design, development, production, and implementation of robots and AI used in society. The course builds upon traditional ‘robot ethics’ and ‘ethics of technology’ concepts and methods to approach robots and AI not as neutral technologies but as complex socio-technical systems, in which the technological, ethical, and governance dimensions actively interact. How do facial recognition and predictive policing help to shape our understanding of fairness and justice? Is it still possible to make a free choice in the age of AI recommender systems? Do robots have rights? Can we ever replace people with robots, and if so, is it desirable? To navigate these and other questions, students will learn to identify the impact of robotic and AI technologies on human practices and to approach their ethical dimension critically. The course serves as a first step in uncovering societal issues related to robotics and AI, understanding how and why they are points of concern, and applying conceptual tools to mitigate and/or overcome such issues.

The course will equip students with the skills of critical reflection, ethical sensitivity, and deliberation about the ethical, societal, design, and governance issues at stake in these technologies. It should thus prepare students for common problems they will face in their current studies and in their future careers in technical or humanities disciplines. Given the broad backgrounds of the students participating in this course, abstract formulations will be avoided, and many topics will be presented as introductions aimed at creating awareness.

Detailed content:

Introduction to ethics of technology
• Technology as a socio-technical system
• Practice-based approach to values
• Technological mediation of morality
• Technology as a Social Experiment
• Value Change

Introduction to robot ethics
• Ethics of Human-Robot Interaction (HRI) (e.g. social script, anthropomorphism, gender, inclusivity, etc.)
• Ethics of Robot Applications (e.g. healthcare robots, Digital Voice Assistants, military drones, etc.)
• Ethics of Robot Design and Development (e.g. Design for Values, Value hierarchy, Ethics as an accompaniment model, etc.)

Introduction to ethics of AI
• AI as a Black box
• Bias in training data
• Principle of explainability
• Power and politics of AI
• Ethics of AI in the public sector (e.g. predictive policing, facial recognition, COVID-19 tracking apps, etc.)

Study Goals

Regarding ethics of technology, students:
• are aware of the range of views on the relationship between technology development and society, from technology being neutral, to technology being value-laden, to technology as a socio-technical system,
• are capable of detecting when certain assumptions are appealed to in a debate about the ethics of technology,
• are aware of the role these concepts and approaches play in the current debate surrounding robotics and AI,
• are capable of formulating a position on the ethical issues at stake in current applications of robotics and AI.

Regarding robot ethics, students:
• have a broad understanding of the various canonical forms of debate within the field,
• can assess the strength of an argument made in the field,
• are aware of multiple viewpoints in the field,
• are aware of how these questions apply to their current work.

Regarding ethics of AI, students:
• have a broad understanding of the various canonical forms of debate within the field,
• can identify salient ethical features of specific AI-based technologies,
• are aware of multiple viewpoints in the field of ethics and in the debates on regulation,
• are aware of how these questions apply to their current work.

Suggested literature

• Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers.

• Asaro, P. M. (2006). What should we want from a robot ethic? International Review of Information Ethics, 6(12), 9–16.

• Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30, 99–120.

• Van de Poel, I. (2013). Why new technologies should be conceived as social experiments. Ethics, Policy & Environment, 16(3), 352–355.

• Kudina, O., & Verbeek, P. P. (2019). Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy. Science, Technology, & Human Values, 44(2), 291–314.

• van Wynsberghe, A., & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), 719–735.

Technological Mediation of Morality - a talk by Olya Kudina at TEDxBucharest

Enrolling for this course

Interested in enrolling for this course? Head over to Brightspace and look for course code RO47008. Also, do not forget to register for the exam on time.