The rapid development of artificial intelligence brings its own set of new ethical challenges. In her PhD research, Andreia Martinho is trying to bring the ultra-theoretical world of ethics and the practice of artificial intelligence closer together.

Technological advances make our daily lives a lot easier in many ways. Artificial intelligence (AI) enables us to communicate with devices, carry out complex processes and travel from A to B more quickly. Yet there are still areas in which humans are superior to machines. One of these is moral reasoning.

The complex domain of AI Ethics

Martinho came to TU Delft in 2018 for her PhD on ethics and AI. “In recent years artificial intelligence has been developing at a rapid pace. The increasing role of AI in society means that ethical issues are becoming more and more important in order to prevent unwanted effects. You don't want AI systems, such as cars or robots, to perform actions that we as humans feel are undesirable.”

So how can that be ensured? “There are two main ways of looking at this: either you incorporate morality into the system, or you create a set of rules and regulations to make sure these systems are kept under control.”

Morality in artificial intelligence

In an article that Martinho recently published, she surveyed ethicists about artificial morality. She was surprised by the richness and diversity of viewpoints, which she categorised into five different groups. “Some ethicists consider that artificial morality is key to a better understanding of morality and that these systems will be better moral agents than humans, while others consider that they pose an existential threat.” She hopes that this sort of empirical approach forms a good basis for future debates.

According to Martinho, we are still a long way from knowing what Artificial Moral Agents (AMAs), i.e. systems that display moral reasoning, will actually look like. “We are still taking baby steps. And we also need to debate how far the autonomy of AI should reach and who is ultimately responsible for the actions of such a system.”

On the incorporation of morality into these systems, Martinho explains that there are several approaches and no consensus among researchers on which is best. In one of her projects she took the moral uncertainty approach. “Such an AMA would not so much determine what is right or wrong as make a choice based on a certain amount of moral reasoning, taking into account several moral theories.” Martinho further explains that, even if artificial morality is never realised, this type of research allows researchers to learn a lot about morality.
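To make the moral uncertainty approach concrete, here is a minimal sketch in Python of one common formalisation of the idea: maximising expected choiceworthiness. The agent holds credences (degrees of belief) in several moral theories, each theory scores the available actions, and the agent picks the action with the highest credence-weighted score. The theories, actions and numbers below are illustrative assumptions for the sake of the example, not Martinho's actual model.

```python
# Minimal sketch of decision making under moral uncertainty:
# the agent is unsure which moral theory is correct, so it weighs
# each theory's verdict by its credence in that theory and picks
# the action with the highest expected choiceworthiness.
# All theories, actions, and numbers here are illustrative.

# Credence (degree of belief) in each moral theory; sums to 1.
credences = {
    "utilitarian": 0.5,
    "deontological": 0.3,
    "virtue_ethics": 0.2,
}

# How choiceworthy each theory judges each action to be,
# on a scale assumed to be comparable across theories.
choiceworthiness = {
    "utilitarian":   {"brake": 0.6, "swerve": 0.9},
    "deontological": {"brake": 0.8, "swerve": 0.2},
    "virtue_ethics": {"brake": 0.7, "swerve": 0.5},
}

def expected_choiceworthiness(action: str) -> float:
    """Credence-weighted score of an action across all theories."""
    return sum(
        credences[theory] * scores[action]
        for theory, scores in choiceworthiness.items()
    )

actions = ["brake", "swerve"]
best = max(actions, key=expected_choiceworthiness)
print(best)  # with these numbers: "brake" (0.68 vs 0.61)
```

Note how this captures the idea Martinho describes: the agent never settles which theory is right or wrong, but lets all of them weigh in, in proportion to how plausible each is taken to be.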

Industry & academia need to work together towards ethical AI

When researching one particular AI system, the autonomous vehicle, Martinho was struck by the dominance of the trolley problem in the ethics literature. The trolley problem involves a situation in which casualties are unavoidable. For example, an autonomous vehicle (AV) is heading towards an older woman, but if it were to change direction it would hit a mother with a baby. What choice should the vehicle make?

“I understand that it is quite fascinating to see a theoretical thought experiment suddenly coming to life, but I thought that the ethics conversation should not be reduced solely to this thought experiment.” The focus on such theoretical ideas without a technology check increases the risk of speculative ethics and at the same time makes developers less engaged with ethics.

Involving AI industry

Martinho wanted to bring industry into the conversation. “When looking at the literature I thought there was an important piece that was missing in this complex puzzle: what is the industry’s take on the ethics of AVs?” She conducted an industry review of the reports of autonomous vehicle companies holding permits in California, and concluded that the trolley problem was never mentioned in these reports.

Martinho stresses that the trolley problem is not intended to be solved by industry, but is meant to increase awareness of the possibility of such extreme situations. “Many AI manufacturers find the trolley problem so hypothetical that they don’t take it seriously.”

“There is an important balance that needs to be struck: ethicists explore extreme situations, often stylized and overly simplistic theoretical constructions, in order to test moral intuitions, values, and so on. These theoretical constructions should not be taken at face value by industry, but they should not be dismissed either. Industry needs to handle risk management in extreme traffic situations, otherwise consumers will not trust this technology.”

Role of society and policymakers

Besides industry and science, society and policymakers also have a role to play in the development of ethics in technology, continues Martinho. “For manufacturers it is important to have clear and operational frameworks and guidelines on ethics in AI.”

These frameworks are in part formed by social debate on such questions as the extent to which technology should be allowed to intervene in our daily lives. Martinho: “It is then the task of the policymakers to translate these opinions into policy and legislation. So integrating ethics into AI is a true interplay between science, industry, society and governmental bodies.”

Love of complex issues

As a researcher, Martinho calls herself a problem solver. As far as she's concerned, a problem can never be too complex. So she doesn't see the diversity of opinions within ethics and the gulf between AI and ethics as a problem, but as an enrichment. “It is my job to bring structure and coherence to this. By breaking it up into small, manageable pieces, we enable governmental bodies and companies to put it into practice. And this is ultimately beneficial to society as a whole.”

Bridge builder

Martinho and her PhD project are part of the ERC-funded BEHAVE research programme, led by professor Caspar Chorus, which aims to develop and empirically test new models of moral decision making. “It is a privilege to work in BEHAVE. The interdisciplinarity and diversity of the group make both the research and the working environment extremely positive and interesting.” In the research group she feels she forms a link between the group members working on the theoretical morality projects and the mathematical modellers. “I do empirical research in AI ethics, so my day is often split between reading philosophical articles and trying to run something in Python, a programming language.”

Prior to coming to TU Delft, Martinho worked for a few years in the ethics department of a large healthcare company in the US. Rather than seeing her time in industry as a drawback, she brought this experience into her research. “Coming from industry, I knew my research skills were rusty compared to my colleagues’, but instead of letting that affect me, I decided to bring my skills and knowledge to this new chapter of my life.”

As a bridge builder, Martinho hopes that her PhD research will help to foster rapprochement and mutual understanding. “I hope that in the future the ethics conversations about AI will include both ethicists and developers. What we need now is to find common ground and help them find each other in the field.”