AiTech paper on Meaningful Human Control published in AI and Ethics
In a paper recently published in Nature Scientific Reports, researchers from the AiTech initiative investigated the apparent mismatch between the public’s attribution of blame and findings from the human factors literature regarding humans’ ability to remain vigilant in partially automated driving. Participants in the experiment blamed the driver primarily for crashes, even though they recognized the driver’s decreased ability to avoid them.
The imbalance between the human-factors challenges that automation poses to driver ability and the participants’ responsibility attributions reveals a culpability gap. In this culpability gap, responsibility is not reasonably distributed over the involved human agents: the driver receives most of the blame, yet this may be unreasonable given their diminished ability to change the outcome.
The findings of this work have implications for public discourse. Judging by the participants’ arguments, most of them did not consider the aforementioned human-centered challenges of automated driving when attributing responsibility. This could indicate that people are unaware of these effects of automation, which could lead to ‘unwitting omissions’: drivers who do not realize how automated driving impairs their ability to perform the required driving tasks should the need arise, yet who are still held responsible by their peers. Providing public information about the driver-centered challenges associated with automated driving could help close this gap, as could driver training.