Smart systems provide support and also protect against prejudices
Michel van Eeten
Human needs and rights are perhaps most strongly expressed in peace, justice and security. So, how do we ensure that we take these into account in AI systems? That’s precisely what Prof. Inald Lagendijk and Prof. Michel van Eeten are working on.
The Dutch government is funding artificial intelligence to the tune of €300 million from the National Growth Fund. “That’s a good thing. We’re eager to explore fundamental issues facing human-oriented AI on a larger scale,” says Prof. Inald Lagendijk, professor of Computing-based society and a member of the Dutch AI Coalition strategy team.
Smart systems help people to make decisions and improve processes that involve large quantities of data. Designers are accustomed to developing systems to improve efficiency, but the use of AI for government decisions or cybersecurity must also meet social, ethical and legal requirements concerning such issues as integrity and human control. Can AI make fair decisions that remain controllable and accountable? A company that uses AI for recruitment can efficiently screen thousands of applicants, but is the algorithm driving that process free of bias in favour of male or white applicants? In introducing risk profiling, has the government taken account of potential negative consequences for the public? Lagendijk: “Preferences and prejudices are part of society. TU Delft research aims to develop smart systems that protect us against those prejudices while also supporting processes around rights and security.”
To explore this, Lagendijk is researching AI design processes. How do you design AI so that it improves the performance of the larger system of which it is a part? The cornerstone is TU Delft’s systems approach: value by design. Justice by design, security by design, human control by design. “Slowly but surely, we are discovering how to implement legal, ethical and social aspects in technology.”
Oceans of data
Michel van Eeten is professor in Governance of cybersecurity. His research focuses on the relationship between behaviour and technology: what do humans need in order to improve AI innovations around cybersecurity?
Take the logging of system events, for example – it creates oceans of data. Many organisations use software that sorts events based on if/then rules devised by humans: is it suspect or not? Humans then assess the suspect data. “You can teach AI what the rules are and have it discover new rules that humans cannot see. That’s more efficient and provides more insight. But that’s just the technology.”
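The rule-based triage described here can be sketched in a few lines. This is a minimal illustration only; the rules, field names and log format below are hypothetical, not taken from any real security product.

```python
# Minimal sketch of human-written if/then rules for sorting log events
# into "suspect" and "not suspect". Rules and log format are hypothetical.

def is_suspect(event: dict) -> bool:
    """Apply simple hand-written if/then rules to one log event."""
    # Rule 1: repeated failed logins suggest a brute-force attempt.
    if event.get("type") == "login_failed" and event.get("attempts", 0) >= 5:
        return True
    # Rule 2: file access outside office hours (07:00-19:00) is unusual.
    if event.get("type") == "file_access" and not 7 <= event.get("hour", 12) <= 19:
        return True
    return False

events = [
    {"type": "login_failed", "attempts": 8, "hour": 3},
    {"type": "file_access", "hour": 14},
    {"type": "file_access", "hour": 2},
]

# Only the flagged events go to a human analyst for review.
suspect = [e for e in events if is_suspect(e)]
```

The limitation Van Eeten points to is visible in the sketch: the software can only flag patterns a human already thought of and wrote down, whereas a machine-learning system could surface rules no analyst has articulated.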
Stronger cybersecurity is also about people, primarily ‘listening first and only then modelling’. Plenty of smart systems have been developed that did not do what users thought they would, as Van Eeten is well aware. “That creates a false sense of security.” The cause is a knowledge gap. Organisational and behavioural experts, for example, do not know which values and rules of behaviour to make explicit. Van Eeten: “They often do not even realise that they have them. At the same time, engineers do not understand the complexity of behaviour and organisations.” TU Delft is unique in bringing together engineering and social disciplines. “We find out what users actually need, rather than what they literally ask for. This results in innovation.”
TU Delft is at the forefront of this socio-technological innovation and also involves Erasmus University and Leiden University for input on ethics and social issues. For real-world data, it also collaborates with the Municipality of The Hague and various ministries, where AI teaching and research take place.
Increase in scale
Both Van Eeten’s and Lagendijk’s work is currently done in relatively small projects. They hope that the National Growth Fund will facilitate large-scale partnerships with the parties that introduce and use AI. Van Eeten: “It took statistics about a century to eradicate its biggest pitfalls, such as bias and incorrect conclusions. With AI, we’re still at the start. Research on a larger scale matters: AI’s effect on security and justice, and therefore on human lives, can also be large-scale.”
Lagendijk is eager to see TU Delft continue to do fundamental as well as applied research, in order to provide direction to government and businesses. “And we intend to demonstrate that you not only need to take society’s values, needs and standards on board, but that you can actually do so, thanks to TU Delft’s systems approach.”