Developing and maintaining values in a digital society

News - 12 October 2022 - Webredactie Communication

How can we ensure that the fundamental values we consider important in the Netherlands and Europe, such as privacy, transparency and democracy, are safeguarded in a digital society? How can we make a truly positive contribution to society using AI? A conversation between Geert-Jan Houben, Pro Vice-Rector for AI, Data and Digitalisation and leader of the AI Initiative, and Professor Jeroen van den Hoven (TPM), leader of the Digital Ethics Centre. "For twenty years already, we in Delft have been taking the lead in combining ethics and engineering." "You want your innovation to contribute to the type of society we want to live in."

Jeroen van den Hoven leads the TU Delft Digital Ethics Centre and chairs committees such as UWV's Data Ethics Committee and that of the cooperating banks applying AI to combat money laundering. Van den Hoven: "It's important to emphasise that those committees are multidisciplinary in composition. We always work together with data scientists, as well as lawyers and domain experts." Geert-Jan Houben adds: "It's not about 'solutions' either. We don't have a ready-made answer. The idea is that everybody thinks about things together and that the scientist creates a link to practical situations. With AI technology, the assumption is sometimes that it's a technology you can simply deliver to society in a box, as it were. Ready-made. But creating AI is not one-way traffic. You can only get a better insight into what does and doesn't work if you test it and, in that sense, get your hands dirty."

They do not deny that this way of working also involves risks. Van den Hoven: "So you want to set up a process so that, if something does go wrong at some point, you can prove that you didn't start thinking about it just the other day, but that you took all the relevant considerations and legislation into account. And that you can make adjustments in time." Houben: "That's precisely the characteristic of the responsible engineer that we want to educate here in Delft. All students at Bachelor's, Master's or PhD level are therefore given real-life challenges to deal with. Delivering a suitable solution in the field of AI not only means creating a piece of technology that works, but also organising the environment in such a way that the solution you deliver is suitable."

The possibilities with AI are enormous. The huge amount of data we obtain through smart devices can lead to improvements in healthcare and better treatments. The energy transition is pretty much only possible with the help of AI (think of the ESP lab or the Wind AI Lab). But there is a dark side: think of filter bubbles, Big Tech, or what Silicon Valley is doing with data. What do you see as the most important task?

Jeroen van den Hoven: "There's absolutely no doubt that we're now entering a digital age. It's also being called a period of digital reconstruction, with everything being overhauled, automated and digitised.
We have to make sure that the values that we consider important in the Netherlands, such as privacy, democracy, accountability and transparency keep their place in this new digital society in a way that is satisfactory. We certainly don't want everyone getting involved with AI. Like in Silicon Valley where there's not that much interest in privacy or democracy. And you can hardly blame them, because they have companies to run and they need to keep an eye on value for their shareholders, except if politicians decide otherwise."

GJH: "Developing and maintaining that dignity in digital technology, in the Netherlands and in Europe, is something we scientists want and need to make a definite contribution to. We need to make sure that we enable our own country and our own continent to make choices responsibly, without being dictated to by suppliers that have different values, or shareholders."

JvdH: "It is precisely this issue that is currently top of mind in Brussels. The crisis in Ukraine and the resulting energy crisis have made people realise that our European values, such as democracy, privacy and human rights, do not apply outside of Europe in the same way as here. If we want to protect our way of life and the European approach to society in the 21st century, we must therefore make sure that the things we make, the infrastructure, the software we run is up to scratch. That's massively important. We cannot keep on blindly importing this technology, such as AI technology, from Silicon Valley or China or Russia because then we will lose our European values. There's no guarantee that they are respected elsewhere.
We must be able to demonstrate that the fundamental values we are known for in the rest of the world are safeguarded in a digital society. That was also the subject of our STOA study for the European Parliament, in which we looked at the convergence of technologies like 5G, big data, the Internet of Things, AI and digital twins, and what's needed to make all these things a success. Is the AI Act enough? Is the privacy legislation enshrined in the GDPR enough? Can we successfully perpetuate the European model? To do so, you need engineers and technicians who understand the context they're working in. Who understand that each component you build is always part of a much larger story."

Developing and maintaining those values in digital technology, in the Netherlands and in Europe, is something we scientists want and need to make a definite contribution to.

Geert-Jan Houben

So does that happen here at TU Delft?

GJH: "Yes, it is precisely in that way that we can make an impact and create a better society. As a university we can make the difference by the fact that we've trained our engineers in such a way that they can work in context."

JvdH: "I would go as far as to say that, for twenty years already, we in Delft have been taking the lead in the field of combining ethics and engineering. We don't just have a nice story to tell about values, but we also know what it means to build or design something." I even think we're one of the global pioneers in this field. There are lots of people in the world who invent and design fantastic things and you also have people who can tell wonderful stories and understand how important certain values are. Here we bring that all together."

GJH: "That was also the basis for the TU Delft AI Labs. That's where people come together who are involved in fundamental engineering and people who are involved in the context. That's how you create a typical Delft engineer."

JvdH: "There's a word for this and it's 'comprehensive engineering'. In other words a combination of arts, science and social sciences. To develop self-driving cars you have to be able to create technology, but also know something about human psychology, sensors, smart infrastructure, insurance, MOTs, etc. People's typical response to AI is to say something like: 'I've also heard something about algorithms…' – but the algorithm is only a very small part of a large socio-technical system and that socio-technical system is used in a certain application, in a certain context and for a certain sector in society. There it contributes to public values – or not. Whether you pay the guards well, so to speak, is as important for security as the algorithm itself. Complicated systems are only as strong as their weakest link. And everything works together, including in systems that use AI. That's how you have to think. And that's how we want to teach our engineers to think."

We train engineers and technicians who understand the context they are working in. Who understand that each component you build is always part of a much larger story.

Jeroen van den Hoven

Is this difficult for students?

JvdH: "No, it's surprising how quickly the new generation picks it all up – precisely because they know very well what the criticism is of big tech and they know what a filter bubble is. Just like students understand that there's a problem with the climate and that we really have to do something about it. The generation that is now prepared to do things better, the leaders and engineers of the future, we have to give them the right instruments to turn it into a reality. Responsible innovation with AI means that you resolve a major social problem without creating new problems and without exacerbating old ones, while still complying with legislation, as well as fundamental ethical principles. And that your innovation contributes to the type of society we want to live in."

GJH: "We are well on the way to achieving this in our teaching here as well. We're building all kinds of bridges to other disciplines and contexts. As a result people are becoming inspired and informed. It's part of our vision."