Should we be afraid of self-learning algorithms?

Self-learning algorithms determine internet search results, choose which messages you see from friends on Facebook, and may eventually even decide which drugs you will be prescribed and your punishment if you commit a crime. So, should we be afraid?

Two researchers from Delft who work in this field answer the question. Virginia Dignum, associate professor in the Faculty of Technology, Policy and Management (TPM) and one of the 52 members of the High-Level Expert Group on Artificial Intelligence, is positive: she expects that legal regulation will manage and control artificial intelligence. The Expert Group advises the European Commission on policy regarding artificial intelligence.

Ibo van de Poel, Antoni van Leeuwenhoek Professor in the Ethics of Technology and head of the department of Values, Technology & Innovation at TPM, appears more concerned. ‘In the United States, an offender can be given a sentence that has been partly set by a self-learning algorithm for which only a company knows the code.’

YES/NO

‘No, we really don't need to be afraid of algorithms. They’re artefacts. You don't need to be afraid of hammers either. What you should be concerned about are the people who use hammers to do bad things. We can also make and use chemical weapons, but there are all kinds of regulations in place to stop us actually doing this. There's always a human strategy behind using algorithms.

I see that there is more and more awareness of the way that algorithms can be used, and of the need for regulation. I think that we’ll also see many more legal frameworks determining what we can and cannot do with algorithms. I’m optimistic.

Virginia Dignum: “There's always a human strategy behind using algorithms.” Photo © Virginia Dignum

Self-learning algorithms are sometimes called a black box because people don't fully understand how they make their predictions. I would partly agree with this. The underlying process is simple mathematics – regressions – and we understand these. But a self-learning algorithm can involve thousands or millions of factors. It is the sheer volume that makes it so difficult for people to follow the process.
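To make the point about scale concrete, here is a minimal sketch – not from the interview, assuming Python with NumPy and scikit-learn, with invented data and numbers – showing that the same regression mathematics is easy to read with three factors and effectively unreadable with ten thousand.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fit_and_report(n_features: int) -> None:
    # Synthetic dataset: 1,000 samples, n_features factors, binary outcome.
    X = rng.normal(size=(1000, n_features))
    true_weights = rng.normal(size=n_features)
    y = (X @ true_weights + rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # The "explanation" of the model is simply its list of coefficients.
    # With 3 factors a person can read it; with 10,000 nobody realistically can.
    print(f"{n_features} factors -> {model.coef_.size} coefficients to interpret")

fit_and_report(3)        # easy to inspect by hand
fit_and_report(10_000)   # same mathematics, but far too many numbers to follow
```

Nothing in such a model is hidden in principle; the opacity comes purely from the number of coefficients a person would have to trace to follow an individual prediction.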

“There's always a human strategy behind using algorithms.”

“I’m not scared that algorithms will soon make all our decisions. But I expect we’ll see a lot of very strange experiments. Algorithms probably won’t have the final say about who can and who can’t get a bank loan, for example. They are there to provide support. It’s up to us to decide what to use them for and what not to. We mustn’t blindly follow algorithms simply because they can predict things.”

YES/NO

Ibo van de Poel: “I think we need more transparency about how algorithms work.” Photo © Sam Rentmeester 

‘There are certainly causes for concern when you consider self-learning algorithms. One of the problems is that unintentional biases – a sort of prejudice – are sometimes built in. Take, for instance, the facial recognition algorithms used at airports to pick out suspicious individuals from the crowd. If they were only trained to recognise white faces – which does actually happen – they could lack the ability to distinguish between black faces. This may mean that this software, which links faces to a database, would raise the alarm for black people more often.
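The mechanism described here – a system trained on one group misfiring on another – can be sketched with a toy classifier. Everything below is a synthetic assumption for illustration (invented groups, invented feature values), not data about any real airport system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(center_neg, center_pos, n=500):
    # 0 = ordinary traveller, 1 = person on the watch list
    X_neg = rng.normal(loc=center_neg, scale=1.0, size=(n, 2))
    X_pos = rng.normal(loc=center_pos, scale=1.0, size=(n, 2))
    return np.vstack([X_neg, X_pos]), np.array([0] * n + [1] * n)

# Group A: the only group represented in the training data.
Xa, ya = make_group(center_neg=[0, 0], center_pos=[4, 4])
# Group B: never seen during training; its ordinary faces happen to lie
# closer to where group A's watch-list faces lie in feature space.
Xb, yb = make_group(center_neg=[3, 3], center_pos=[6, 6])

model = LogisticRegression(max_iter=1000).fit(Xa, ya)  # trained on group A only

def false_alarm_rate(X, y):
    # Fraction of people NOT on the watch list who are flagged anyway.
    flagged = model.predict(X) == 1
    return flagged[y == 0].mean()

print("false alarms, group A:", false_alarm_rate(Xa, ya))  # low
print("false alarms, group B:", false_alarm_rate(Xb, yb))  # much higher
```

Because the model only ever saw group A during training, its decision boundary cuts through group B's ordinary faces, and the false alarm rate for group B comes out far higher.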

In the United States, they make more use of self-learning algorithms than we do here. They use the COMPAS algorithm, for example, to predict the risk of recidivism in offenders. This risk plays a role when deciding on the sentence. But this software is also rumoured to show a racial bias. 

"I think we need more transparency about how algorithms work."

We don't know exactly how the programme works, as it's the intellectual property of a company that refuses to publish the code. So it's entirely possible that you could be given a sentence in the USA without ever knowing what reasoning it was based on.

I’ve heard from people working in the Ministry of Justice and Security that this would never be possible in the Netherlands because you always have the right to ask for the reasoning behind your sentence. 

Self-learning algorithms are often a sort of black box. You can of course go through the code (if it isn't kept secret), but the exact details of how it works are often incomprehensible. Is this what we really want? I think we need more transparency about how algorithms work. The problem with this is that demanding full transparency will have an adverse effect on the self-learning capacity of the algorithm. This is something that needs to be weighed up very carefully indeed.’