Arie van Deursen
The Netherlands, much like the rest of the world, is struggling with the gradual easing of the measures taken during the so-called intelligent lockdown, in order to eventually return to a society that is as close to normal as possible. This transitional phase requires that the spread of the coronavirus be brought under control. But the way the virus spreads is closely linked to our behaviour, meaning that with every step we take to normalise society a little further, we risk further spread of the virus.
In order to get a grip on the virus, collecting data on people’s behaviour is one way forward. After all, the more we know about our behaviour, the more closely we can follow the virus, and the better we can intervene to prevent its spread. Collecting data with a focus on specific measures – such as opening schools or universities, intensifying the use of public transport, allowing large events – means we can monitor the effects of the measures taken more accurately. This makes the COVID-19 transitional phase inherently ‘data hungry’.
The much-discussed corona app is an example of this hunger for data. Such an app has the potential to be an important tool in contact research: if you collect data about the proximity of mobile phones, measured via Bluetooth, and store it during the incubation period, you can warn people that they have recently been in contact with someone who now appears to be infected. There are currently about 25 different corona apps in circulation worldwide.
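The core mechanism described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not the design of any actual corona app: the class and identifier names are invented, and real apps use rotating cryptographic identifiers and more elaborate matching. The sketch only shows the essential idea of storing nearby-phone sightings locally for the length of the incubation period and checking them against identifiers reported as infected.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative retention window; real apps tie this to the incubation period.
RETENTION = timedelta(days=14)

@dataclass
class Encounter:
    other_id: str        # identifier broadcast via Bluetooth by the other phone
    timestamp: datetime  # when the phones were near each other

class ContactLog:
    """Local log of nearby phones, pruned after the retention window."""

    def __init__(self) -> None:
        self.encounters: list[Encounter] = []

    def record(self, other_id: str, when: datetime) -> None:
        self.encounters.append(Encounter(other_id, when))

    def prune(self, now: datetime) -> None:
        # Forget everything older than the retention window.
        cutoff = now - RETENTION
        self.encounters = [e for e in self.encounters if e.timestamp >= cutoff]

    def should_warn(self, infected_ids: set[str], now: datetime) -> bool:
        # Warn if any recent encounter matches a reported infection.
        self.prune(now)
        return any(e.other_id in infected_ids for e in self.encounters)
```

Note that in this decentralised sketch the log never leaves the phone; only the set of identifiers reported as infected is distributed, which is one of the design choices the article returns to below.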
However, all is not as it seems with these apps. They raise fundamental questions – questions, moreover, that will be equally relevant to all the upcoming digital technology meant to support the transitional phase. First of all, there is the question of accuracy. Estimating proximity based on Bluetooth signals carries a considerable margin of error. Do you have your phone in your pocket, or are you holding it up to take a picture? What does this half-metre difference mean for accuracy? What are the consequences if the app warns you incorrectly, or misses a contact moment? Unless we have a greater level of understanding about their accuracy, apps could create a false sense of safety or excessive anxiety, which can lead to too little or too much testing.
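The sensitivity behind this margin of error can be made concrete with the standard log-distance path-loss model often used to turn a Bluetooth signal strength (RSSI) into a distance estimate. The parameter values below are illustrative assumptions, not measurements from any specific phone: real devices differ in transmit power, antenna placement, and environment.

```python
def estimate_distance(rssi_dbm: float,
                      tx_power_dbm: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in metres from an RSSI reading.

    Uses the log-distance path-loss model:
        rssi = tx_power - 10 * n * log10(distance)
    where tx_power_dbm is the expected RSSI at 1 metre and n is the
    path-loss exponent (roughly 2 in free space). Both values are
    assumed here for illustration.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

With these assumptions, a reading of -69 dBm maps to roughly 3.2 metres, while -75 dBm maps to roughly 6.3 metres: a few decibels of attenuation, of the order caused by a phone sitting in a pocket instead of in the hand, about doubles the estimated distance. This is exactly why the pocket-versus-picture question above matters.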
A related question is how the data is actually used. Who will have access to which data? What rights can you derive from the data? Is it conceivable for instance, that only those who have the app installed are allowed to go to work or visit the supermarket? Will your employer get access to the data? Here too, usefulness and accuracy are closely linked.
Use and abuse are two sides of the same coin. Can the data be used for criminal purposes? Can data about physical location or contamination for example be misused for burglary, fraud or extortion? Can the data be used in combination with data collected from previous data leaks? Can government bodies (e.g. police, tax authorities), employers or tech companies try to use this data for purposes other than the coronavirus transitional phase? Various measures are conceivable to prevent abuse, such as limiting the duration of the app’s availability, storing data locally instead of centrally, and of course state-of-the-art security. Unfortunately, these measures are difficult to implement in practice. It is also questionable whether COVID-19 will ever pass; maybe we will be deploying digital technology in perpetuity for COVID-21 or even COVID-25.
Another question concerns the international context. Which digital solutions do our neighbouring countries use? How can apps help make international travel possible again? What data is shared with which countries? Can we learn from other countries? Can our technology help prevent a catastrophe in poorer countries? How much confidence do we have in technology imposed on us by other, large countries (US, China), by government or by industry (Apple, Google)?
Once the issues regarding accuracy, use and abuse, and international context have been resolved, the next question concerns who will be able to benefit from the digital solution. An app for Apple and Android phones can be used by many, but not all, people. Will the elderly still be able to use public transport without an app? And what if you are a poor student who bought a Huawei phone whose Android version is no longer supported by Google? Can contact tracing, even via Bluetooth, be done with the help of a perhaps less smart, but cheaper device?
Transparent decision making and technology
None of these questions is easy to answer. They require a clear understanding of the technical, legal, international, medical, social and ethical aspects of the technology to be used.
A democracy does not shy away from these questions, but embraces them. This requires a clear overview of all the considerations to be made, each with its own pros and cons. This requires open and transparent decision-making, as well as open and transparent technology.
Fortunately, software engineering has a means of allowing technology to be open: open source. What is important here is that open source is not treated as an afterthought, where the source code is made available only at the end of the project, but as a starting point. All considerations, design documents, requirements, design decisions, test procedures, etc. should be shared at the earliest opportunity. The corona contact tracing app is just the start: much more coronaware awaits us. It can make an important positive contribution to the 1.5 metre society and the route back to normality. This means that people should be able to trust coronaware, which requires that it is developed in an open and transparent way, right from the word go.
Prof. dr. Arie van Deursen
Arie van Deursen is a professor of software engineering, heads the software engineering department, and is scientific director of AI for Fintech Research.
He conducts research in the fields of software testing, software architecture and open source software development.