Self-driving cars will have to be ultra safe before they can become socially acceptable. A fatal accident caused by a technical fault in a self-driving car is considered at least 4.5 times worse than a fatal accident due to human error. These are among the findings from research carried out by Bart Overakker, who graduates from the Faculty of TPM at TU Delft on Thursday 12 October.
It is generally accepted that self-driving cars are just around the corner. In the long term, they are expected to increase traffic safety, improve access and reduce the impact on the environment. But they are not yet perfect; technical faults can arise. What's more, in principle it's possible to hack a self-driving car. Both situations can lead to accidents, which may even be fatal. So the introduction of self-driving vehicles is not only a technical question, but also a social and ethical problem.
‘Strangely enough, very little research has been conducted into this aspect’, says Bart Overakker. ‘The issue has been considered at the general philosophical level, and some empirical research has been carried out. But this research focuses purely on the driver and not on other potential parties. I wanted to measure the level of social acceptance for self-driving cars, focusing on the main question: do people consider fatal accidents caused by self-driving cars to be worse than fatal accidents resulting from human error?’
With backing from the Netherlands Institute for Transport Policy Analysis (KiM), Bart studied a group of 510 people. He did not ask them directly for their opinions, but asked them to make choices in a range of hypothetical future scenarios, with variations in aspects such as the degree of automation, journey time, emissions and traffic fatalities. The weighting of each aspect in the eyes of the respondents was gauged using statistical analysis.
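To illustrate the idea behind this kind of stated-choice analysis, here is a minimal sketch assuming a simple binary-logit model — not the study's actual method or data. Simulated respondents choose between two hypothetical transport scenarios, and fitting a logit model to their choices recovers the relative weight of each attribute; all coefficients and data below are synthetic and purely illustrative.

```python
# A minimal sketch of stated-choice analysis with a binary logit model.
# All data and coefficients are synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" weights used to simulate respondents:
# journey time (tens of minutes), fatalities from human error,
# fatalities from a technical fault. The fault weight is set to 4.5x
# the human-error weight, mimicking the kind of ratio the research reports.
true_beta = np.array([-0.1, -0.2, -0.9])

n_tasks = 20000
# Attribute differences between scenario A and scenario B for each choice task
x = rng.uniform(-2, 2, size=(n_tasks, 3))

# Logit choice rule: P(choose A) = sigmoid(beta . x)
p_a = 1.0 / (1.0 + np.exp(-x @ true_beta))
y = (rng.uniform(size=n_tasks) < p_a).astype(float)

# Recover the weights by gradient ascent on the log-likelihood
beta = np.zeros(3)
for _ in range(2000):
    p_hat = 1.0 / (1.0 + np.exp(-x @ beta))
    beta += 0.5 * (x.T @ (y - p_hat)) / n_tasks

# The ratio of coefficients expresses how much worse respondents
# consider one fatality type than another
ratio = beta[2] / beta[1]
print(f"estimated weights: {np.round(beta, 2)}, fault/human-error ratio: {ratio:.1f}")
```

With enough simulated choice tasks, the estimated fault-to-human-error ratio comes out near the simulated value of 4.5; the actual study would have used a richer discrete-choice design with more attributes and respondents.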
‘The general conclusion is that 65% of people think that a system of self-driving vehicles is preferable to the current transport system. People do, however, want the system to allow for some form of human intervention and control.’
‘So generally speaking, social acceptance of self-driving cars is high. But this is not the case when it comes to accepting fatal accidents. A fatal accident caused by a technical fault in a self-driving car is considered at least 4.5 times worse than a fatal accident due to human error. Intentional abuse, i.e. a fatal accident caused by hacking, is considered 6 times worse.’
So self-driving cars must be ultra safe before they can become socially acceptable. ‘We have a paradox on our hands. Self-driving cars potentially have huge advantages, including much better safety. But a lack of social acceptance for this very safety aspect is bound to slow down the process of introduction. Just a few accidents, like the one with a self-driving Tesla in Florida last year, can seriously hamper the progress of self-driving cars.’
‘I should stress that these are only the first results of research in this specific area’, says Bart Overakker. ‘They still need to be endorsed by further research. In addition, we would expect the degree of social acceptance to vary over time and, for example, according to the country.’
According to Overakker, policy makers should be working actively alongside technology companies to help decide which technologies should be made available to consumers. ‘Policy makers can specify exactly which criteria designs and safety features must satisfy. Take a company like TomTom, for example, which must make absolutely sure that their products cannot be hacked.’
Bart Overakker (student, Faculty of TPM, Master's track: Complex Systems Engineering and Management), +31 643808372, email@example.com
Carola Poleij (press officer, TU Delft), +31 152787538, firstname.lastname@example.org
Picture: Autonomous Trap 001 by artist James Bridle. 'If a self-driving car is designed to read the road, what happens when the language of the road is abused by those with nefarious intent?'