ABSTRACT
Autonomous weapons are systems that, once activated, can identify, select, and engage targets on their own. Scharre (2018. Army of None: Autonomous Weapons and the Future of War. New York: Norton) defines autonomy along three dimensions: the tasks being automated, the relationship with the human user, and the sophistication of the machine's decision-making process. Building on this definition, this article provides an overview of the systematic biases that may arise in each of these three dimensions. Before autonomous systems are deployed in war, it is important to understand how various biases can affect the functioning of these machines and how such biases can be compensated for.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Notes on contributors
Teresa Limata
Teresa Limata is a Ph.D. candidate in Neuroscience at the Department of Psychology, University of Turin, Italy. Her research explores how the human body can influence cognitive processes such as perception, memory, and communication. Beyond her academic pursuits, she has a keen interest in military-related topics, bridging psychology, philosophy, and neuroscience within military contexts. This interest shaped her Master's thesis in criminological and forensic psychology, which focused on the enhancement of cognitive and physical capabilities in the military context and for which she graduated with honors.