Who gets blamed when an accident happens? The AI system or the human relying on it? The nascent field of experimental AI-ethics has found strong evidence that AI systems are judged to be as responsible as humans when they make traffic decisions independently or as co-actors alongside humans (Awad et al., 2019; Franklin et al., 2021; Moglia et al., 2021; Nyholm & Smids, 2016; Wischert-Zielke et al., 2020). Fully autonomous medical AI systems share responsibility with the clinician supervising them (McManus & Rutchick, 2019; O’Sullivan et al., 2019). But what happens when AI serves merely as an enhanced detection device, most closely resembling a simple instrument or tool? Would such merely instrumental use of AI leave the technology off the responsibility hook, or is the involvement of some form of intelligence sufficient to trigger attributions of responsibility?
The results of this first experiment established that human participants do attribute shared responsibility to the AI-system, even though in debriefing they predominantly described it as a tool. In a follow-up, we conducted a critical control experiment showing that when the AI label was removed from the vignettes, the same scenarios did not evoke any sharing of responsibility between the mechanical tool and the human agent in charge.
The comparison of these conditions shows that even the most basic AI-system introduces a sharing of responsibility with its human user, in stark contrast to non-AI-powered tools. This finding is all the more surprising because, when asked, people did recognise AI as a tool. Attributing responsibility to AI and reducing human responsibility also do not depend on how the AI-technology communicates with the user – i.e. via voice or haptic signals.