Contestable AI gives citizens more control over smart technology

News - 30 August 2023

Op-ed by Kars Alfrink

The use of AI is on the rise across society, including by governments. For example, AI is used in the scan cars that drive around cities and use machine vision to monitor public space. AI increases efficiency, but can we, as citizens, trust its decisions? With Contestable AI, we design decision-making systems that allow citizens to express disagreement throughout their lifecycle. This is a way to ensure that government AI is worthy of people’s trust.

I’ve always been interested in how technology shapes urban spaces, often called ‘smart cities.’ One topical example is the so-called ‘scan cars’ driving around cities like Amsterdam, Delft, and Rotterdam. Scan cars look for vehicles: Are they parked correctly? Do they have the proper permit? Now, cities are considering using these cars for monitoring other aspects of public space. Are there unlicensed trash containers about? Is there litter lying around? And what about traffic signs and street lights? Does the municipality have everything adequately mapped?

Scan cars have many functions that help the municipality work more efficiently. To carry out these tasks, the vehicles rely on artificial intelligence (AI) techniques such as image recognition.

However, does everyone feel comfortable with the use of AI? Some might fear being excessively controlled, while others worry they have no defense when a system penalizes them. 

It is essential to explain and justify the workings of AI systems to tackle these concerns. But is transparency enough? What good is an explanation when you disagree with the AI?

And what about deeper concerns: How are AI systems built? Who gets to make the decisions that shape their development? What happens when a system turns out to be causing harm?

From transparency to contestability

Accountability is more important than ever for governments. Trust in local government has been declining in the Netherlands for some time. Forty percent of Dutch people say they have little to very little confidence in their municipality. 

Scan cars are not beyond reproach either. Remember the fuss about scanning people during the coronavirus pandemic in Rotterdam? During lockdown, people were obliged to keep a 1.5-meter distance from each other in public spaces. The Rotterdam police used scan cars to check this emergency rule. In December 2022, the police were reprimanded for this practice because it violated privacy law.

An opportunity to challenge

I am fascinated by how people appropriate technology for their own ends and how, as a designer, I can actively contribute to these practices. With Contestable AI, we design decision-making systems that are open and responsive to disputes throughout their entire lifecycle, not only during the development of the AI system but also after its rollout.

The first pillar of Contestable AI concerns development: contestability begins while a system is still being built. There need to be opportunities for citizens to exchange opinions and ideas about the system, its use, and its purpose. In addition, the chosen technology should always be reversible and adaptable, because it is impossible to foresee every consequence beforehand. After all, society is constantly changing.

The second pillar of Contestable AI is for citizens to be able to challenge AI decisions. The data and procedure on which a decision was based – such as in the case of a fine – must be made available in an understandable and accessible way. What is more, citizens need to be provided with means to speak out and to engage with officials on an equal footing.

An interesting case is Amsterdam. The capital's scan cars check whether parking has been paid for and, at the same time, run extra data checks – such as whether a vehicle is listed as stolen. Occasionally, a driver is wrongly fined. If a citizen believes a fine is incorrect, they can go to a website, review the scan car's images, and then decide whether or not to challenge the penalty. This was also a learning process for Amsterdam, at least partly spurred on by its vocal citizens.
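
What might such a contestable decision look like in software? Below is a minimal sketch in Python, under the assumption of a simple record-keeping service; the names (DecisionRecord, Appeal, file_appeal, the example URL) are hypothetical illustrations of the idea, not a description of Amsterdam's actual system. The point is that the decision travels together with its evidence, its data sources, a plain-language explanation, and a route to object.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DecisionRecord:
    """One automated decision, packaged so the person affected can inspect and contest it."""
    decision_id: str
    issued_at: datetime
    outcome: str               # e.g. "parking fine issued"
    evidence: list             # references to the scan-car images the decision relied on
    data_sources: list         # registers consulted, e.g. parking payments, stolen-vehicle list
    explanation: str           # plain-language account of how the outcome was reached
    appeal_deadline: datetime  # until when the decision can be challenged
    review_url: str            # where the citizen can view the images and object

@dataclass
class Appeal:
    """A citizen's objection, tied to the decision it contests."""
    decision_id: str
    submitted_at: datetime
    grounds: str               # the citizen's reasons, in their own words
    status: str = "received"   # updated once a human official reviews the case

def file_appeal(record: DecisionRecord, grounds: str) -> Appeal:
    """Register an objection; review by a human official happens downstream."""
    if datetime.now() > record.appeal_deadline:
        raise ValueError("The appeal window for this decision has closed.")
    return Appeal(decision_id=record.decision_id,
                  submitted_at=datetime.now(),
                  grounds=grounds)

# Hypothetical example: a wrongly issued parking fine and the citizen's objection to it.
record = DecisionRecord(
    decision_id="2023-08-0001",
    issued_at=datetime.now(),
    outcome="parking fine issued",
    evidence=["scan-image-123.jpg"],
    data_sources=["parking payment register"],
    explanation="No payment was found for this licence plate at the scanned location.",
    appeal_deadline=datetime.now() + timedelta(weeks=6),
    review_url="https://example.org/review/2023-08-0001",
)
appeal = file_appeal(record, grounds="Payment was made via a parking app; receipt attached.")
print(appeal.status)  # "received"
```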

Designing AI for a complex world 

We, as designers, have to take responsibility for the consequences of the systems we help build. Ethical challenges linked to AI cannot be resolved with purely technical solutions. We need to think about both technology and society at the same time. Design has a lot to offer for tackling these challenges. It combines knowledge from engineering and social science. And crucially, designers are great at creatively exploring novel solutions, even when a situation is incredibly complex.

As citizens, we want a government with a human face. To get there, governments must engage citizens in decisions about AI before, during, and after implementing new systems. This is not easy, but it is a crucial basis for trustworthy government AI.

Watch the concept video about Contestable AI: