‘Businesses aren’t advertising their use of AI’
How do professionals see the future of artificial intelligence and the role that TU Delft can play? We asked two alumni.
‘People will still have to make the decisions’
‘The RTI lab is a collaboration between the police, TNO and the HSD (Hague Security Delta, ed.). We support security organisations in keeping real-time intelligence up to date, and experiment with new methods and techniques, including AI. We always take account of different perspectives: what does AI mean for people and processes, what are the technical issues, and which legal and ethical questions arise? The police always want to use the latest technology, so we experiment to learn more about the next step.

Take imaging, for example. The Netherlands is full of security cameras. Can you use AI to detect deviant patterns in all the image material they provide? There’s another new technique that could be used in the control room. When every second counts, you need to understand quickly what's going on. But what if someone is speaking a different language? Could a computer interpret the situation swiftly through its ability to learn from previous calls and understand many different languages?
Marielle den Hengst
Project leader (representing the police) of the Real Time Intelligence Lab (RTI lab)
Degree in computer science (EEMCS), PhD at TPM
There are numerous examples of how AI can assist us, mainly because it can base itself on enormous data sets. But AI is also a challenge, particularly for the police. Take algorithms that can calculate the risk of a criminal career. How can you make a system like this competent and transparent, without bias (prejudices, ed.)? How can you expose the reasoning of a system and prove that it isn't biased? It's for challenges like this that we need institutions like TU Delft. This is a big step, for the police too. We're an organisation that likes doing things, while a university works towards long-term knowledge development. Luckily, the police know that knowledge and innovation are essential and both worlds are gradually converging.’
‘AI is a black box’
AI specialist at Capgemini consultancy agency
Degree programme: Industrial Design (IDE)
‘I focus on knowledge acquisition and knowledge use within organisations. AI is a way of extracting knowledge from data. At Capgemini, we mainly use practical AI solutions that work now: self-learning online product recommendations, fraud analysis of documents or processing damage reports.

In the end, trust will determine whether AI is accepted or not. People and businesses want to know what a system does, but AI is a black box. Social media like Facebook and security systems are full of intelligent algorithms. China uses automatic facial recognition for its population. What happens if there's a false recognition, or if the system starts selecting on the basis of gender or skin colour? It's impossible to retrace why a self-driving car made a particular decision among all those thousands of lines of code, partly because the pieces of code influence each other. AI doesn't know enough about the world around it to place decisions in a context. AI is ‘autistic’. This is reinforced if biases slip into the systems. The ethics governing AI are highly complex, particularly because AI systems are self-learning and change their behaviour over time.
For now, the consequences are usually easy to gauge, as most businesses use AI for small processes and sub-systems. But this will change in the future. Is it up to me, as an ordinary employee, to say that I think an expensive, self-learning system was wrong in rejecting an application? I'm more afraid that the opposite will happen: that we'll put blind trust in artificial intelligence, as we do with other technology. People will think: if the system says so, it must be right. AI is being accepted faster than society wants. Our current discussions are too late. We need ethical, empathic artificial intelligence. Universities in Europe must take ownership of this debate at an interdisciplinary level. AI isn't hype; it is already present in many systems. Businesses aren't advertising this fact, for fear of negative publicity if things go wrong.’
(Photos © Sam Rentmeester)