Manipulation and the ethics of generative AI

News - 24 February 2024 - Web editorial team

When Cambridge Analytica used Facebook data to target political campaigns, manipulation on social media was all over the news. However, there was hardly any philosophical discussion of it yet. Dr. Michael Klenk wanted to understand the concept of manipulation, realising it was becoming increasingly urgent to do something about it.

Dr. Michael Klenk focuses his work on the ethics of influence and the concept of manipulation. His goal is to understand positive influence and distinguish it from negative influence, which is crucial in the design of new (automated) technology and for our social well-being. He recently published an article on the ethics of generative AI and manipulation: Ethics of generative AI and manipulation: a design-oriented research agenda | Ethics and Information Technology (springer.com)

Interview

Authors: Beatriz Lafuente Alcazar & Helma Dokkum

 

What led you to research the concept of manipulation?

In 2019 I got a fellowship (a personal research grant) to study the ethics of social media. I wrote the proposal after the Cambridge Analytica scandal, in which Cambridge Analytica used Facebook data to target political campaigns. Manipulation on social media was all over the news, but there was hardly any philosophical discussion of it yet. I also felt personally affected, since I felt the pull of social media myself, sometimes getting distracted from work. But I did not simply want to jump on the bandwagon and blame social media for manipulation. I wanted, above all, to understand what makes something manipulation. I realised that there was no clear description of the concept of manipulation, and that it was becoming increasingly urgent to do something about it. With my fellowship, I went to Stanford and then to St Gallen to research this. It is now the main focus of my research, and I still find it endlessly fascinating. Manipulation is all around us, and given the speed of technological development today, it is an urgent topic. Addressing questions about the nature and ethics of manipulation fits perfectly with the Delft Digital Ethics Centre (DDEC) philosophical approach to technology, and with TU Delft's wider ambitions.

 

Manipulation sounds bad, but is it really so bad?

It is definitely a morally dubious form of social influence; it should always raise moral concerns. It can be justified by its positive effects: perhaps I need to manipulate you to do something good for you or for society. But it is never morally neutral. Ideally, I would not need to manipulate you to do good.

 

Could you give an example?

Let’s look at a health application in which someone is manipulated into making good choices, such as sticking to their exercise plan or eating healthier. When using manipulative influence, we should critically examine the need for it, because preferably we reach the desired effects without manipulative means. We can also look at nudging, which is a form of manipulation. At a cafeteria, for example, people tend to pick food that is close to them and easily accessible. The idea is to use that knowledge and put the healthy food closer to people, for example at the entrance of the cafeteria, to induce them to eat healthier. In a sense this exploits an irrational tendency, because the location of the food should not really matter to your food choice. That may make it a form of manipulation, but at the same time it may be justifiable because it is good for people. Manipulation is often very effective. If we look at this situation from a design perspective, we should consider both the effectiveness of the influence AND the ethical questions and concerns.

 

To what extent does your research also influence your own life?

I believe I have become more sensitive to recognising what I think is manipulative influence. Look, for example, at User Experience design: one of the most influential and popular books in the field outlines a way to create ‘habit-forming products’, which is, in effect, a way to create addictive products. The book is used widely and enthusiastically while essentially being a manual for manipulative design. I have become very sceptical, realising that the book does not address the ethical questions about product design that go beyond the effectiveness of an influence.

 

Sounds like there is work to be done. Do you have specific plans to take this further into the public debate?

Last year I contributed to a report for the European Commission on meaningful and ethical communication. The EU wants to understand the boundaries of communicating with its citizens in ethical ways. My contribution concerned the boundaries of manipulation and where influence becomes a problem. Furthermore, I am working on a blog series, and I am an advisor on algorithmic systems used by the municipality of Rotterdam.

I believe it is important that we, on the one hand, continue doing conceptual research (there are still avenues to explore) and, on the other hand, start working on regulation and on educating designers and engineers. However, if we only work on regulations and guidelines without thinking about the philosophical foundations, many questions remain unanswered.

 

And how does the publication help this research area and the debate?

I do not have answers to every question about manipulation, but my recent publication at least gives some answers and outlines the important questions regarding the manipulative potential of generative AI. Research on manipulation will be crucial in helping us answer how to deal with generative AI. Through the lens of manipulation, we can understand where the limits are.

 

Manipulation has always been there. Could you further elaborate on these research questions and how they are different from earlier work on manipulation?

Technology brings our attention to phenomena that were always there, but now these factors are magnified. Generative AI increases the potential for successful manipulation on a massive scale, whereas manipulation was previously mostly a matter of one-to-one interaction. Some of the questions are old, conceptual questions, but some of the questions about the design of the technology will be very specific. I used the Design for Values methodology to address the questions (conceptual, empirical, technical) relevant to design. I am exploring the interactions between these three types of questions by iterating through the types of questions and the design stages. In the article, I mostly address conceptual questions, since we are really at the beginning of understanding manipulation, but my aspiration is to also cover the design-specific questions.

 

Could you give an example of these interactions?

We are doing a project with Erasmus University in which we are building an AI assistant for young couples who have decided to have a child, focusing on lifestyle interventions such as healthy eating and exercise. I am supervising a PhD student who is also part of the project. The aim is to recommend healthy lifestyle choices. For that, we are interviewing the target group about their views on manipulation. We are doing empirical investigation that serves as input for the design, but also for better understanding the concept itself. I hope the results will ultimately benefit my philosophical work as well.

 

How do you unify the view on manipulation of different stakeholders when designing for values?

That is a fundamental challenge for non-manipulation, and for designing for values in general. In designing for values, we believe that people’s perspectives are important, but there is a limit. What do we do if people don’t see the problems of manipulation? What if we manipulate people into thinking that manipulation is not a problem? Those are open questions, but it seems clear that we should also be backed up by philosophical theories (expert input). These theories provide insight into how to approach the problem and how to balance the approaches.

 

Now that you have a clear research agenda - what is next?

We are starting a small research project in which we will deliver mental health support for people with low socioeconomic status using generative AI. Together with a team of researchers within Convergence Healthy Start, we will be working on an ethically aligned lifestyle AI assistant. A crucial focus for us is harnessing the power of generative AI while steering clear of manipulation.

Another thing I am working on is a book on the ethical concept of manipulation. I hope to have the manuscript ready this year, and I also hope to publish a paper in which I answer the questions I posed in the article. I also want to connect more with psychology to answer some of the empirical questions. And lastly, I am planning to offer a summer school on manipulation via the OZSW here in Delft, with an interesting line-up of (international) speakers.

 

About Michael Klenk

 

Dr. Michael Klenk is an assistant professor in ethics and the philosophy of technology at TU Delft. His research covers foundational topics concerning the nature of morality, moral change, and moral knowledge. He takes an interdisciplinary approach to these questions and draws on metaethics, epistemology, anthropology, and moral psychology in pursuit of answers. He now focuses on the ethics of influence and the concept of manipulation. His goal is to understand positive influence and distinguish it from negative influence, which is crucial in the design of new (automated) technology and for our social well-being.

Before coming to Delft, he completed his PhD at Utrecht University. Before coming to Utrecht, he worked as a management consultant in Munich. Please have a look at his website for a full list of publications and presentations.

His article was recently published: Ethics of generative AI and manipulation: a design-oriented research agenda | Ethics and Information Technology (springer.com)