The Court of Public Opinion

Algorithmic Aversion? Experimental Evidence on the Elasticity of Public Attitudes to “Killer Robots”

Abstract

Lethal autonomous weapon systems present a prominent yet controversial military innovation. While previous studies have indicated that the deployment of “killer robots” would face considerable public opposition, our understanding of the elasticity of these attitudes, contingent on different factors, remains limited. In this article, we aim to explore the sensitivity of public attitudes to three specific factors: concerns about the accident-prone nature of the technology, concerns about responsibility attribution for adverse outcomes, and concerns about the inherently undignified nature of automated killing. Our survey experiment with a large sample of Americans reveals that public attitudes toward autonomous weapons are significantly contingent on beliefs about their error-proneness relative to human-operated systems. Additionally, we find limited evidence that individuals concerned about human dignity violations are more likely to oppose “killer robots.” These findings hold significance for current policy debates about the international regulation of autonomous weapons.

The ability of states to leverage technological advances for military purposes constitutes one of the key components of power in international relations, with artificial intelligence, machine learning, and automation leading the way in global military innovation.Footnote1 One of the most prominent—and controversial—applications of these technologies nowadays concerns the development of “Lethal Autonomous Weapon Systems” (LAWS), sometimes dubbed “killer robots.” If deployed on the battlefield, these weapon systems could select and engage targets with unprecedented speed and accuracy, without direct human oversight.Footnote2 LAWS raise serious legal and ethical concerns, and international debates are already underway about possible limitations or even an outright ban on their use.Footnote3

As in the case of earlier humanitarian disarmament initiatives, the proponents of a ban on LAWS highlight the public opposition to these weapon systems as one of the key arguments why their use should be prohibited altogether. These claims, promoted by like-minded states and non-governmental organizations (NGOs), are partially supported by evidence from public opinion surveys suggesting that the real-world employment of LAWS would be met with significant disapproval by ordinary citizens.Footnote4 However, previous research has shown that much depends on the context. For example, Michael C. Horowitz found that public opposition to LAWS weakens when individuals are presented with scenarios where “killer robots” provide greater military utility than alternative options.Footnote5 Our knowledge of the factors that affect public attitudes to LAWS is, nevertheless, still limited.

In this article, we address this gap by investigating the role of three factors that are central to the international debate on whether or not to regulate the use of LAWS. The first is the consequentialist concern that the autonomous technology is particularly accident-prone; the second is the legal concern that machines cannot be held responsible for striking the wrong targets; and the third is the moral concern that delegating decision-making powers over life and death to robots violates human dignity. While our primary goal is to investigate the public’s sensitivity to certain factors surrounding the use of “killer robots,” rather than explaining the all-else-equal aversion to LAWS per se, the results of our study also provide some hints about mechanisms underlying these attitudes.

To test whether and how these factors affect public support for LAWS, we conducted a survey experiment with 999 U.S. citizens. We randomly assigned the participants to one of five versions of a hypothetical scenario describing a UN-mandated counterinsurgency operation, where the commander decided whether to deploy a remote-controlled or autonomous drone to eliminate the insurgent threat. Our experimental treatments varied in terms of the risk of target misidentification associated with each drone option and responsibility attribution for potential civilian fatalities. To measure the support for LAWS, we asked the participants to indicate their preference for either the remote-controlled or autonomous drone. In a follow-up survey with the same participants, we examined their sensitivity to violations of human dignity.

To gain a deeper understanding of the relationship between our three factors and support for LAWS, we conducted two additional surveys with separate samples of U.S. citizens. In these surveys, we inquired about our participants’ risk estimates and their perceptions of the differences between remote-controlled and autonomous drones in various aspects of their use.

Our findings demonstrate that although there is a substantial baseline aversion to LAWS among the public, these attitudes also exhibit a significant degree of elasticity. Most importantly, we find empirical evidence that public support for LAWS is contingent on their error-proneness relative to human-operated systems. When the public is presented with information indicating that the risk of target misidentification is even slightly lower for the autonomous drone compared to the remote-controlled one, there is a rapid and significant shift in favor of using these weapon systems. In contrast, our findings do not provide empirical support for the proposition that the explicit mentioning of command responsibility can alleviate opposition to LAWS. Additionally, we find limited empirical support for the proposition that concerns about human dignity violations increase public opposition to these weapon systems. Overall, among the three factors examined in our study, the consequentialist concern about the accident-prone nature of “killer robots” has the strongest association with the attitudes of Americans.

Our findings contribute to the growing scholarly literature on the international efforts to regulate autonomous weapons by probing the public’s sensitivity to some of the frequently cited pro- and anti-LAWS arguments.Footnote6 Moreover, we contribute to the recent wave of Security Studies literature on public attitudes toward the use of force by examining the factors affecting the public support for particular means of warfare.Footnote7 Finally, our study has significant implications for current policy debates about the international regulation of LAWS. The elasticity of public attitudes toward “killer robots” demonstrated here raises concerns about the long-term sustainability of public support for potential limitations or prohibitions on these systems.Footnote8 If LAWS eventually prove to be more reliable in target discrimination than human-operated systems, we could potentially observe an increasing public demand for their use.

We proceed as follows. First, we present a brief overview of the debates concerning the use of LAWS in warfare. Second, we formulate our theoretical expectations related to the three central concerns about these weapon systems. Third, we introduce our experimental design. Fourth, we present and discuss our empirical findings. We conclude by laying out the implications of our study and discussing avenues for future research.

The Advent of LAWS

Emerging technologies such as artificial intelligence and autonomous machines have significant influence over military weaponry and the character of contemporary warfare.Footnote9 While the debate about the “revolutionary” effects of these technologies is still ongoing, the growing investment in autonomous military technologies has already become a reality.Footnote10 The forerunners in this new era of military-technological competition are primarily great powers such as the United States, China, and Russia, but also smaller, technologically advanced countries such as Israel, Singapore, and South Korea.Footnote11

LAWS are clearly the most notable—and controversial—direction in this area of military innovation. In simple terms, LAWS can be defined as weapon systems that, once launched, select and engage targets without further human input.Footnote12 It is precisely this autonomy in targeting that distinguishes LAWS from other weapon systems, including remote-controlled drones, which may incorporate autonomy in functions such as navigation, landing, or refueling but leave decision-making power over target selection and engagement with humans.Footnote13 An example of such a system is the Israeli loitering munition “Harpy” that, once launched, detects and dive-bombs enemy radar signatures without further human input. When it finds a target that meets the preprogrammed parameters, the persons responsible for its launch can no longer override its actions.Footnote14

The development of LAWS presents us with potential benefits as well as challenges. On the one hand, some believe that weapon autonomy promises advantages such as a speed-based edge in combat, reduced staffing requirements, or reduced reliance on communication links. Like remote-controlled drones, their use would also reduce the risks faced by human soldiers.Footnote15 On the other hand, some believe that the machine-like speed of decision-making implies that militaries will exercise less control over the way LAWS operate on the battlefield, which exacerbates the risks of accidents and unintended escalation.Footnote16 Moreover, the unpredictable nature of complex autonomous systems would pose challenges to ensuring compliance with international humanitarian law (IHL).Footnote17 Finally, their use could be deemed dehumanizing, because—as inanimate machines—LAWS will never truly understand the value of human life, and the significance of taking it away.Footnote18

International discussions about these and other challenges have been occurring at the UN Convention on Certain Conventional Weapons (CCW) since 2013. In 2016, States Parties to the CCW established what is known as the Group of Governmental Experts (GGE) to formulate recommendations on how to address the LAWS issue. However, a growing polarization between states interested in exploring the benefits of the technology and those concerned with its humanitarian impact has prevented any substantive progress on the issue.Footnote19 Undeterred by the failure of the GGE LAWS process, NGO campaigners gathered under the auspices of the “Campaign to Stop Killer Robots” are now looking to explore the possibility of moving the issue to a different venue, where willing states could agree to prohibitions on the development and use of LAWS.Footnote20

The utility of autonomous weapons for political and military purposes will, to an extent, depend on their public acceptance. This aspect of the discussion is especially relevant for proponents of the ban, who leverage the negative public attitudes expressed in opinion polls on “killer robots” as a compelling reason for prohibiting the technology.Footnote21 In this view, the use of “killer robots” despite public opposition would violate the Martens Clause in the 1977 Additional Protocols to the Geneva Conventions, which prohibits the use of means and methods of warfare contrary to the “dictates of public conscience.” While the interpretations of the clause differ, considerations of public conscience have driven international negotiations on prohibiting other weapon systems in the past.Footnote22 Investigating the public attitudes to LAWS and the factors that affect these attitudes is, therefore, pertinent with respect to ongoing international regulatory efforts.

Algorithmic Aversion?

In many fields, from diagnosing complex diseases to legal advice, algorithms already outperform human decision-makers.Footnote23 Yet, we observe that the public often rejects algorithmic decision-making in favor of human expertise even when the latter is objectively inferior. To date, researchers have identified various factors that influence this aversion, including seeing an algorithm err, the complexity of the task, or the type of task performed.Footnote24 People’s reasoning for rejecting algorithms seems to vary across domains. For instance, in the field of medical diagnosis, consumers prefer the advice of human doctors to that of algorithms because they believe that the latter cannot account for their “unique characteristics and circumstances.”Footnote25

Existing surveys indicate a similar resistance toward autonomous weapons. For example, Charli Carpenter found that approximately 55% of adult U.S. citizens opposed the use of LAWS.Footnote26 The polling company Ipsos conducted several cross-national surveys on behalf of Human Rights Watch, which showed that about 61% of respondents worldwide oppose “killer robots.”Footnote27 Potential aversion is also indicated by recent surveys of AI and machine-learning researchers and local officials in the United States.Footnote28

Although all of these findings suggest a strong public aversion to LAWS, they also have limitations that prevent us from drawing clear conclusions. Notably, these surveys often ask respondents about their views on LAWS without providing further context, so the responses may reflect general pacifist attitudes rather than a genuine aversion to autonomous weapons per se. Arguably, more compelling evidence on public attitudes to “killer robots” comes from the small number of survey experiments that explore the influence of factors such as military effectiveness, responsibility attribution, and sci-fi literacy.Footnote29 The study by Michael C. Horowitz, in particular, demonstrates that much of the opposition to these weapon systems depends on context. When the public is presented with scenarios where “killer robots” offer superior military utility compared to alternative options, the opposition weakens substantially.Footnote30 However, our knowledge of the specific factors that affect these attitudes is still limited.

In the following subsections, we discuss three such potential factors that are at the core of the international debate on regulating LAWS. The first is the consequentialist concern that the autonomous technology is particularly accident-prone; the second is the legal concern that machines cannot be held responsible for striking the wrong target; and the third is the moral concern that delegating decision-making powers over life and death to robots violates human dignity. While not exhaustive, this list represents some of the most frequently cited arguments in favor of regulating LAWS. Investigating whether and how factors such as the risk of an error, responsibility attribution, and considerations of human dignity affect public attitudes to “killer robots,” thus, holds particular policy relevance.

Like other authors, we distinguish between contingent and non-contingent concerns.Footnote31 Contingent concerns revolve around the limitations of current-generation technology, including the inability of LAWS to properly discriminate between lawful and unlawful targets. Non-contingent concerns are independent of technological advancements and encompass arguments highlighting the inherent immorality of automated killing. Certain concerns, such as responsibility attribution, can be classified as contingent or non-contingent depending on whether they are regarded as an issue of strict liability or principled justice. If public attitudes are primarily driven by contingent concerns, it is possible that attitudes may change as the technology evolves. In the case of non-contingent concerns, attitudes are less likely to shift regardless of technological progress.Footnote32

Accident-Proneness

According to Paul Scharre, humans exhibit two basic intuitions about autonomous machines.Footnote33 Some harbor a utopian intuition about autonomous systems as a reliable, safe, and precise alternative to human-operated systems. Others hold an apocalyptic intuition about “robots run amok,” which holds that autonomous systems are prone to spiraling out of control and making disastrous mistakes. The prevalence of the latter belief—based on a distrust of the predictability and reliability of autonomous systems and compounded by the public’s exposure to “robocalyptic” imaginaries in popular culture—may, therefore, drive the corresponding aversion to the use of LAWS.Footnote34

A recent public opinion survey showed that 42% of respondents opposed to LAWS were worried that these systems would be subject to technical malfunctions.Footnote35 Such worries are not completely unfounded. Militaries worldwide already struggle with highly automated systems, as illustrated by several fratricidal incidents in which the Patriot missile defense system misidentified friendly aircraft as enemy missiles.Footnote36 On a more general level, machine learning systems have repeatedly shown a propensity for making unforeseen and counterintuitive mistakes.Footnote37 Unlike humans, algorithms might not have a sufficient capacity to understand nuance and context. They are trained to make clear verdicts under specific conditions, and they may operate incorrectly for long periods of time without changing their course of action because they lack the necessary situational awareness.Footnote38

These technical limitations have potentially far-reaching implications for upholding basic ethical standards on the battlefield. From the perspective of IHL, ensuring compliance with the principle of distinction presents perhaps the most daunting challenge for LAWS. Many authors contend that today’s autonomous systems would not be able to comply with this principle, which protects civilians, non-combatants, and combatants hors de combat from being subject to the use of military force.Footnote39 So far, scientists have developed no technology that can distinguish between lawful and unlawful targets as well as human judgment can. These inadequacies are particularly troubling when put in the context of a modern-day battlefield environment, which is typically populated by both combatants and civilians, and where belligerents deliberately obfuscate their legal status.Footnote40

Many of these challenges are, nevertheless, contingent on the state of technology. In principle, technological advances may eventually enable LAWS to discriminate between lawful and unlawful targets at least as well as, if not more reliably than, human-operated systems.Footnote41 In the future, autonomous weapons could even prove more discriminating than humans because they do not have to protect themselves in cases of low certainty of target identification, they can be equipped with a broad range of sensors for battlefield observation and process the input far faster than a human could, or because they could be programmed without the emotions that often cloud the judgment of ordinary soldiers.Footnote42

Irrespective of whether LAWS will eventually prove more reliable at target discrimination, it is plausible that the public opposition to their use is partially influenced by prior beliefs about their error-proneness relative to humans. Evidence from the field of experimental psychology further suggests that individuals are much more tolerant of errors made by humans than of those made by machines.Footnote43 In the case of LAWS, such considerations should be particularly salient, because an error might result in target misidentification leading to innocent deaths, which is a pertinent concern of the public vis-à-vis the use of military force.Footnote44 If we assume that public attitudes to “killer robots” are influenced by preexisting beliefs about the accident-prone nature of the technology, we would expect that presenting the public with scenarios where the use of autonomous systems carries a lower risk of target misidentification compared to human-operated systems would increase the support for LAWS.

Hypothesis 1:

Public support for LAWS will increase as their risk of target misidentification decreases relative to human-operated systems.

Responsibility Gaps

Another objection to LAWS is that their use would lead to distinct “responsibility gaps.”Footnote45 In this view, the employment of “killer robots” on the battlefield could make it exceedingly difficult, if not impossible, to establish legal and moral responsibility for potential adverse outcomes such as fatal accidents or outright war crimes. It would be unfair to hold a human in the decision-making chain responsible for outcomes they could not control or foresee, yet we simultaneously could not hold the machine itself accountable because it lacks the moral agency required for such responsibility.Footnote46 The emergence of responsibility gaps presents a potential challenge to adherence to IHL, and some scholars even suggest that if the nature of LAWS makes it impossible to identify or hold individuals accountable, it is morally impermissible to use them in war.Footnote47

Other scholars have contested the above proposition on legal and moral grounds. On the legal side, some argue that the fact that no one has control over the system’s post-launch targeting decisions does not take away the responsibility for its actions from humans. For instance, the programmers who decided how to program the system or the commanders who decided when to deploy it would be held accountable for any atrocities caused by LAWS if they acted in disregard of IHL.Footnote48 On the moral side, some experts question whether the ability to hold someone accountable for battlefield deaths is a plausible constraint on just war.Footnote49 Marcus Schulzke observes that militaries already operate through shared responsibility, where “[t]he structure of the military hierarchy ensures that actions by autonomous human soldiers are constrained by the decisions of civilian and military leadership higher up the chain of command.”Footnote50 This “command responsibility” would arguably be equally applicable to LAWS. Programmers and commanders would be accountable to the extent that they failed to do what is necessary to prevent harm.Footnote51

Existing surveys suggest that responsibility could be a concern for the broader public. The results of the 2020 Ipsos survey indicate that approximately 53% of those who opposed the development and use of LAWS agreed that such systems would be unaccountable.Footnote52 We aim to test the responsibility gaps proposition by looking at whether and how public attitudes change when we place the onus of potential responsibility explicitly on the military leadership, as opposed to leaving this to the public’s imagination. If the public believes that LAWS are unaccountable, as suggested by previous surveys, it is plausible that the opposition to LAWS could be partly driven by fears that the parties involved in their development and use could escape accountability for adverse outcomes. If we assume that public attitudes to “killer robots” are influenced by such beliefs, we would expect that presenting the public with a scenario in which the military leadership explicitly assumes such responsibility would increase the support for LAWS.

Hypothesis 2:

Public support for LAWS will increase with the knowledge that there are accountable military officers in the chain of command.

Human Dignity

One non-contingent issue that cannot be addressed by means of technological progress is the idea that using autonomous machines to kill humans would constitute an affront to human dignity.Footnote53 In this view, death by algorithm implies treating humans as mere targets, or data points, rather than complete and unique human beings.Footnote54 Consequently, LAWS are believed to violate the fundamental right to human dignity, which prohibits the treatment of humans as mere objects.Footnote55

Whereas concerns about target discrimination typically focus on the outcome of the deployment and use of LAWS, the argument about undignified killing focuses precisely on the process.Footnote56 According to Frank Sauer, “…being killed as the result of algorithmic decision-making matters for the person dying because a machine taking a human life has no conception of what its action means…”Footnote57 A human soldier could deliberate and decide not to engage a target, but in the case of LAWS, there would be no possibility for the victim to appeal to the humanity of the attacker—the outcome would already be predetermined by the narrow nature of algorithmic decision-making.Footnote58 The absence of such deliberation would thereby make targeting decisions inherently unethical.Footnote59

The introduction of the “human dignity” argument to the LAWS literature has not been without controversy, and many experts view it as a contested and ambiguous concept.Footnote60 Critics typically counter with consequentialist arguments: Concerns about indignity are ultimately outweighed by the promise of improved military effectiveness and reduced risks of target misidentification compared to human-operated systems.Footnote61 While it is true that machines cannot comprehend the value of human life, it may be irrelevant to the victim whether they are killed by a human or a machine.Footnote62 Moreover, the conduct of human soldiers already falls short of the ethical standards invoked in anti-LAWS arguments. Some experts also stress that the concept is too vague and that there is little reflection in the literature on the interpretation and meaning of the term.Footnote63

While the logical coherence of the human dignity argument is disputed, it underscores the role of moral instincts as one of the potential drivers of the public aversion to “killer robots.” The 2020 Ipsos survey provided some empirical evidence for the centrality of these instincts in public attitudes. Approximately 66% of respondents who opposed LAWS expressed the belief that delegating lethal decision-making to machines would “cross a moral line.”Footnote64 If we assume that moral instincts play a role in shaping public attitudes to “killer robots,” we would expect to observe a negative correlation between individuals’ sensitivity to the infringement of human dignity and their support for the use of these weapon systems.

Hypothesis 3:

Individuals who are more concerned about the violations of human dignity are less likely to support LAWS.

Experimental Design

To test these hypotheses, we designed an original survey experiment with vignettes describing a hypothetical UN-mandated multinational counterinsurgency operation.Footnote65 We divided the vignette into three parts. In the first part, we informed the participants that their country joined a multinational task force to bring an end to a Boko Haram insurgency in Nigeria and asked them about their views on the importance of counterinsurgency operations to U.S. national security. In the second part, we communicated that the task force received intelligence about a suspected insurgent training camp located in a nearby village. The commander in charge was deciding whether to use a remote-controlled or autonomous drone to eliminate the threat. We then asked how much they approved of countries conducting drone strikes abroad.Footnote66

In the third part, we outlined the difference between the commander’s options as follows: “A remote-controlled drone is an aircraft controlled remotely by a human pilot who decides which target to hit with the missile,” while “an autonomous drone is an aircraft controlled by a computer program that, once activated, decides which target to hit with the missile without further human input.” The latter description emphasizes the autonomy that LAWS display in the critical functions of target selection and engagement.Footnote67

Moreover, we informed our participants that while striking the insurgents from afar would keep the UN troops out of harm’s way, civilians on the ground may still be at risk of injury or death as they may be wrongly identified as targets. In the remaining part of the survey, we experimentally varied the risk of target misidentification associated with each weapon option and the explicit accountability of military officers involved in the strike. Respondents were randomly assigned to one of the five conditions: (1) control, (2) “equal risk,” (3) “equal risk + responsibility,” (4) “unequal risk,” and (5) “highly unequal risk.” The difference between our treatments is outlined in Table 1.

Table 1. Design of the main experiment.

The control group did not include any additional information. In the four remaining groups, we noted that the data from previous operations showed that using one of the weapon options would result in either an equal or a greater risk of misidentifying civilians as insurgents. In the “equal risk + responsibility” group, we further informed the participants that there were military officers in the chain of command whose responsibility was to minimize such risks and who would be held accountable if the strike resulted in the unlawful killing of civilians. Throughout our treatments, we deliberately chose more conservative risk percentages to ensure the plausibility of the scenario. We made this decision because launching a military strike with a higher probability of civilian fatalities could be perceived as inherently indiscriminate. The values are partially informed by data from the Bureau of Investigative Journalism on U.S. drone strikes, which suggests that approximately one in eight fatalities is a civilian fatality.Footnote68
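For concreteness, the sketch below illustrates the random assignment step described above, assuming simple, unstratified randomization of the full sample into the five vignette conditions. The condition labels follow the paper, but the variable names and procedure are our own illustration in Python and do not reproduce the authors' implementation; the exact risk percentages shown to each group (summarized in Table 1) are omitted here.

```python
# A minimal sketch of random assignment to the five vignette conditions,
# assuming simple unstratified randomization; condition labels follow the
# paper, but variable names and the procedure are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed so the sketch is reproducible

conditions = [
    "control",
    "equal_risk",
    "equal_risk_responsibility",
    "unequal_risk",
    "highly_unequal_risk",
]

n_participants = 999  # sample size reported in the paper
participants = pd.DataFrame({"participant_id": range(1, n_participants + 1)})

# Each participant sees exactly one version of the hypothetical scenario.
participants["condition"] = rng.choice(conditions, size=n_participants)

print(participants["condition"].value_counts())
```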

After the participants read all three parts of the vignette, we asked whether they preferred the commander to conduct the strike using the remote-controlled or autonomous drone.Footnote69 Consistent with previous research investigating public attitudes to nuclear weapons use, our analysis relies on the binary preference variable.Footnote70

Furthermore, we included an open-ended question to investigate the reasoning behind our participants’ drone preferences, along with a manipulation check that asked about the percentage risk mentioned in the third part of the fictional scenario.Footnote71 The participants’ responses to the open-ended question, in particular, offer additional insights into the potential mechanisms underlying the aversion to LAWS when all else is equal. The survey questionnaire also included a battery of pre-treatment questions on age, gender, income, education, and political orientation. Following Michael C. Horowitz, we also inquired about our participants’ attitudes toward robots (a 6-point scale from “very positive” to “very negative”). These variables serve as controls in our regression models.Footnote72 Lastly, after filling out their responses, the participants read a short debrief to counteract the potential conditioning effects of the experiment.Footnote73

We fielded the survey through the online polling platform Prolific to a sample of 999 U.S. adult citizens between June 7 and June 9, 2022.Footnote74 Surveying the American public has a distinct policy relevance, considering the United States’ leading role in developing these technologies. In order to increase the representativeness of our sample, we used quotas for gender and party identification (Republican, Democrat, and Independent), since the Prolific platform tends to attract participants who are disproportionately male, younger, more liberal, and more educated.Footnote75 Despite implementing these quotas, our sample differs somewhat from the U.S. population in terms of educational attainment and party identification.Footnote76

The experimental design allowed us to test hypothesis 1 by examining the variations in public attitudes across the control, “equal risk,” “unequal risk,” and “highly unequal risk” groups. Similarly, we tested hypothesis 2 through the experimental design by examining the variations in public attitudes across the “equal risk” and “equal risk + responsibility” conditions. However, for hypothesis 3, we opted for a correlational design. This decision was motivated by the assumption that the concern about human dignity is non-contingent or inherent to autonomous technology, and, therefore, independent of the presented context, except for the type of weapon used.

To test hypothesis 3, we administered a follow-up survey to the same sample of U.S. participants after one month. Here, we asked the participants three questions related to human dignity. The first two questions asked how much they agreed with the statements that “even terrorists should be treated with dignity” and that “only humans should be allowed to kill other humans” (a 6-point scale from “strongly disagree” to “strongly agree”). We then presented the participants with another hypothetical scenario describing a situation conceptually analogous to that in the main experiment. Our respondents read that their government was considering the introduction of a new execution method in the prison system which would involve the use of an autonomous execution module.Footnote77 The logic of this example is similar to the use of LAWS in that a machine decides about human life. After they read the scenario, the respondents indicated how much they agreed with several randomized statements, including the statement that the new method would “present a more serious violation of prisoners’ dignity.” We used these three questions to create the “human dignity concern” variable (1 = minimum concern to 6 = maximum concern) by obtaining a simple average of the three items.
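The index construction described above amounts to a simple unweighted average of the three 6-point items. The sketch below shows one way this could be computed; the column names and example responses are hypothetical, and only the averaging step mirrors the text.

```python
# Illustrative construction of the "human dignity concern" index as the
# simple average of the three 6-point follow-up items; column names and
# example responses are hypothetical.
import pandas as pd

followup = pd.DataFrame({
    "dignity_terrorists": [5, 2, 6],   # "even terrorists should be treated with dignity"
    "only_humans_kill":   [6, 1, 4],   # "only humans should be allowed to kill other humans"
    "execution_dignity":  [6, 3, 5],   # autonomous execution module violates prisoners' dignity
})

# Unweighted mean of the three items, ranging from 1 (minimum concern)
# to 6 (maximum concern).
items = ["dignity_terrorists", "only_humans_kill", "execution_dignity"]
followup["human_dignity_concern"] = followup[items].mean(axis=1)

print(followup)
```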

In addition to the main experiment and the follow-up on human dignity, we conducted two supplementary surveys with different samples of respondents, utilizing the same polling platform and quotas. In a first supplementary survey, involving 300 U.S. citizens, we presented participants with the same hypothetical scenario as our control group in the main experiment. They were asked to provide their estimates of the risk of target misidentification associated with each drone option, and subsequently, we inquired about their preference for either the remote-controlled or autonomous drone. The survey results provide us with a deeper understanding of the public’s baseline beliefs about the accident-prone nature of the technology.Footnote78

In a second supplementary survey, involving 1,037 U.S. citizens, we randomly assigned the respondents to one of three conditions identical to the control, “equal risk,” or “equal risk + responsibility” conditions from the main experiment. We then asked them about the perceived differences between remote-controlled and autonomous drones in several aspects of their hypothetical use: legal accountability, moral responsibility, military effectiveness, force restraint, costs, ethicality, and human dignity. The respondents rated the remote-controlled and autonomous drones using a 5-point scale, indicating whether one is better, worse, or equal to the other in these aspects. We asked half of the participants whether they preferred the remote-controlled or autonomous option before answering the questions about perceived differences, and the other half after answering those questions. We randomized the sequence of questions in these parts of the survey. The results of this survey allowed us to conduct additional robustness checks and investigate other potential factors affecting public attitudes toward LAWS.Footnote79

Empirical Findings

Accident-Proneness

First, we examined the participants’ preferences for the use of LAWS across treatments. Figure 1 indicates that the support for LAWS, measured by the preference for autonomous drones over remote-controlled ones, correlates with their precision relative to remote-controlled systems (i.e., non-LAWS). Only 7% of participants in the control group preferred the autonomous drone, suggesting a remarkably high baseline aversion to LAWS when the risk of target misidentification is not explicitly stated. As the risk of target misidentification increases for the remote-controlled drone relative to the autonomous one across the experimental groups, the proportion of participants preferring autonomous drones increases significantly.

Figure 1. “LAWS preference” by the experimental group.

Note: 95% CIs. N = 810. Lower N is due to the exclusion of our “equal risk + responsibility” treatment.


To test hypothesis 1, we conducted a series of logistic regressions and investigated whether providing our respondents with additional information on the risk of target misidentification had a statistically significant impact on “LAWS preference.” The results, depicted in Figure 2, reveal that respondents in the “equal risk” group were significantly more likely to prefer the autonomous drone over the remote-controlled drone than those in the control group (OR = 2.011, p = 0.034). This finding suggests that at least some participants may believe that the risk of misidentifying civilians is higher for the autonomous drone when not stated otherwise.Footnote80
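For readers who want to see the analysis in concrete form, the following sketch fits a logistic regression of the binary “LAWS preference” on a treatment indicator and reports exponentiated coefficients as odds ratios, in the spirit of the comparison above. The data are simulated placeholders, and the statsmodels specification is our own illustration rather than the authors’ actual code or software.

```python
# Sketch of the treatment comparison: a logistic regression of the binary
# "LAWS preference" (1 = autonomous drone) on a condition indicator, with
# exponentiated coefficients reported as odds ratios. The data below are
# simulated placeholders and will not reproduce the reported estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 418  # combined size of the control and "equal risk" groups in the paper

df = pd.DataFrame({"condition": rng.choice(["control", "equal_risk"], size=n)})

# Placeholder outcome: a low baseline preference for the autonomous drone,
# somewhat higher in the "equal risk" group (illustrative values only).
p = np.where(df["condition"] == "control", 0.07, 0.13)
df["laws_preference"] = rng.binomial(1, p)

model = smf.logit(
    "laws_preference ~ C(condition, Treatment(reference='control'))",
    data=df,
).fit(disp=False)

print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```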

Figure 2. Comparison of experimental treatments.

Note: Results of the logistic regression. 95% CIs. “Equal risk – Control” N = 418; “Unequal risk – Equal risk” N = 390; “Highly unequal risk – Unequal risk” N = 392. See Appendix 7 for full results and robustness checks.


Most importantly, we find that when the risk of target misidentification is modestly higher for the remote-controlled drone option, such as in the “unequal risk” treatment, our participants are much more likely to prefer the autonomous drone, even when compared to the “equal risk” treatment (OR = 10.24, p < 0.001). This finding suggests that public support for LAWS is significantly contingent on the knowledge of the relative risk. However, informing the participants that using the autonomous drone will entail a substantially lower risk of target misidentification did not completely mitigate the aversion to their use. Our participants in the “highly unequal risk” group were not significantly more likely to prefer LAWS than those in the “unequal risk” group (OR = 1.241, p = 0.308). In Appendix 7, we show that these findings hold when subjected to various robustness checks.

Our priming exercise regarding the relative risk of target misidentification highlights the considerable degree of elasticity of public attitudes to “killer robots.” However, these findings do not definitively indicate that concerns about the accident-prone nature of the technology are the sole or primary cause of the aversion to LAWS. To explore this possibility further, we turn to our participants’ responses to the open-ended question, in which we inquired about the reasoning behind their choice. We find that the majority of participants in the control group who preferred the remote-controlled drone option expressed such concerns. For instance, they frequently mentioned that having a human in the loop for target selection and engagement would be less likely to result in an indiscriminate attack:

“I think there may be less error because a real person could probably better identify innocent civilians vs the ‘bad guys’.”

“I am more comfortable giving control of life and death situations to a trained human being rather than an autonomous program. I am concerned the autonomous drone will not be able to distinguish between enemies and civilians.”

“Humans, in this situation, have better ability to understand and set the target than a robot.”

Additionally, a considerable number of respondents voiced a general lack of trust in the reliability and appropriate level of sophistication of the technology when it came to making targeting decisions. These responses indicate that some participants perceive such systems as inherently more prone to accidents compared to human-operated systems:

“I don’t believe AI is advanced enough to properly distinguish between targets yet.”

“[I chose the remote-controlled drone because] at least there’s a human making the ultimate decision in identifying targets, which I trust more than an algorithm.”

“A remote-controlled drone makes it less likely for the drone to perform unexpected actions.”

To gain a deeper understanding of the public’s baseline perceptions regarding the relative risk of target misidentification, we analyzed the results of our survey on risk estimates, fielded to a different sample of 300 U.S. citizens. We presented the participants with the same scenario as our control group in the main experiment and asked them to estimate the risk of target misidentification for each of the two types of drones. The results of a paired t-test reveal that our participants systematically estimated the risk to be higher for the autonomous drone (around 55%) than for the remote-controlled drone (around 37%).Footnote81

After obtaining the risk estimates from our participants, we proceeded to inquire about their drone preference. We calculated a “risk estimate difference” measure by subtracting the risk estimate for the remote-controlled drone from the risk estimate for the autonomous drone. We used this measure as a predictor of “LAWS preference.” The results of the logistic regression reveal a statistically significant and positive association between the “risk estimate difference” and the dependent variable.Footnote82 Thus, participants who estimated the risk to be lower for the autonomous drone relative to the remote-controlled drone were significantly more likely to prefer LAWS. However, as evident from Figure 3, participants who believed the risk to be roughly equal were still more likely to prefer the remote-controlled option.
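The two steps described in this and the preceding paragraph, a paired t-test on the risk estimates and a logistic regression of drone preference on the “risk estimate difference,” can be sketched as follows. The simulated estimates and the placeholder outcome are illustrative only and will not reproduce the reported results; only the analysis steps follow the text.

```python
# Sketch of the risk-estimate analysis: a paired t-test on the two risk
# estimates and a logistic regression of drone preference on the "risk
# estimate difference" (autonomous minus remote-controlled). The simulated
# estimates and placeholder outcome are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(3)
n = 300  # sample size of the first supplementary survey

est_autonomous = np.clip(rng.normal(55, 20, n), 0, 100)  # estimated risk (%), autonomous drone
est_remote = np.clip(rng.normal(37, 20, n), 0, 100)      # estimated risk (%), remote-controlled drone

# Paired t-test: do respondents rate the autonomous drone as riskier?
t_stat, p_value = stats.ttest_rel(est_autonomous, est_remote)
print(t_stat, p_value)

survey = pd.DataFrame({
    "risk_estimate_difference": est_autonomous - est_remote,
    "laws_preference": rng.binomial(1, 0.3, n),  # placeholder outcome
})

logit = smf.logit("laws_preference ~ risk_estimate_difference", data=survey).fit(disp=False)
print(logit.params)
```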

Figure 3. Adjusted predictions of risk estimate difference.

Note: A plot of predictive margins based on the results of a logistic regression. 95% CIs. N = 300. See Appendix 8 for full results.


Overall, our study provides compelling evidence in support of hypothesis 1. The results show that presenting the public with scenarios in which autonomous drones are less prone to wrongly identifying civilians as targets than human-operated systems significantly increases the support for their use. Public attitudes toward LAWS, therefore, appear to be contingent upon knowledge of the relative risk of target misidentification. In addition, the frequent mention of concerns about the inadequate level of technological sophistication and the inferior ability of algorithms to distinguish between civilians and targets in the open-ended responses suggests that beliefs about the accident-prone nature of the technology could be one of the major drivers behind the baseline aversion to “killer robots.” This is further substantiated by the results of the survey on risk estimates, which reveal that the public consistently perceives the risk to be higher for the autonomous drone, and that these beliefs affect their preferences.

Responsibility Gaps

To test hypothesis 2, we initially examined the differences in preferences among our participants in the “equal risk” and “equal risk + responsibility” treatments from the main experiment. Figure 4 illustrates that the preference for using LAWS was roughly equal between the two groups. The results of the logistic regression of “LAWS preference” reveal that informing the participants in the “equal risk + responsibility” group about the presence of accountable military officers did not significantly increase their preference for the autonomous drone compared to the participants in the “equal risk” group.Footnote83

Figure 4. “LAWS preference” by the experimental group.

Note: Error bars represent 95% CIs. N = 391. Lower N is due to the exclusion of the control group and our “unequal risk” and “highly unequal risk” treatments.


This null finding should not be interpreted as evidence that responsibility did not matter to our participants at all. For instance, it is possible that informing the participants about the presence of accountable military officers in the chain of command simply did not do enough to mitigate such concerns. Nevertheless, responses to the open-ended question rarely mentioned responsibility concerns.Footnote84

The lack of a statistically significant effect of our responsibility prime and the infrequent mention of concerns about responsibility attribution in write-in responses may be due to prior beliefs that someone will be held accountable even when LAWS are used. Conversely, the public may believe it would be equally difficult to hold someone accountable whether a remote-controlled or an autonomous drone were used. To explore this possibility, we examine the results of our survey on the perceived differences between remote-controlled and autonomous drones. Figure 5 reveals that most respondents (57%) found it more challenging to attribute legal accountability for civilian fatalities caused by autonomous drones compared to remote-controlled drones. Similarly, most respondents (55%) found it more difficult to assign moral responsibility for civilian fatalities caused by autonomous drones compared to remote-controlled ones.Footnote86

Figure 5. Perceived differences between remote-controlled and autonomous drones.Footnote85

Note: N = 1,037. The figure presents the aggregated results for all experimental treatments. See Appendix 14 for full results.


However, despite the public perception that responsibility attribution is more challenging for autonomous drones than remote-controlled ones, this did not significantly impact support for LAWS. Variables measuring a difference in legal accountability or moral responsibility between autonomous and remote-controlled drones proved statistically insignificant as predictors of “LAWS preference.”Footnote87

On balance, we found no evidence in support of hypothesis 2. Informing the participants about the presence of accountable military officers in the chain of command did not increase the support for LAWS. Furthermore, the infrequent mention of responsibility concerns in the responses to the open-ended question suggests that the issue of holding someone accountable for civilian fatalities caused by autonomous drones does not automatically come to mind for ordinary citizens. However, our null findings should not be interpreted as evidence that the issue of responsibility is completely irrelevant to the public. When asked whether it would be more difficult to hold someone legally accountable or morally responsible for civilian deaths caused by remote-controlled or autonomous drones, most people recognize that “killer robots” may pose greater challenges. Nevertheless, individuals who hold such beliefs are still not more or less likely to support the hypothetical use of LAWS than those who do not.

Human Dignity

Finally, to test hypothesis 3, we turn to the results of the follow-up survey, utilizing the “human dignity concern” measure as a predictor of “LAWS preference.” We were able to follow up with 836 out of 999 participants who took part in the main experiment.Footnote88 Figure 6 shows a coefficient plot for three logistic regression models. In Model 1, we use our variable “human dignity concern” as the predictor. In Model 2, we control for several socio-demographic variables and political orientation. In Model 3, we additionally control for “attitudes toward robots” and “approval of drone strikes.”

Figure 6. Logistic regression of “LAWS preference”.

Note: Results of the logistic regression. 95% CIs. Model 1 N = 836. Model 2 N = 826. Model 3 N = 826. Lower N is due to missing observations for certain demographic variables. See Appendix 11 for full results and robustness checks.


The results reveal that the “human dignity concern” variable attains a statistically significant and negative association with “LAWS preference” across all models.Footnote89 On average, respondents who scored higher on the “human dignity concern” measure were more likely to prefer the remote-controlled drone than the autonomous one.Footnote90

To further evaluate the relative significance of human dignity compared to other factors, we ran a logistic regression of “LAWS preference” with an interaction term between the “human dignity concern” variable and our experimental groups (see Appendix 12). Figure 7 indicates that the relationship between “human dignity concern” and “LAWS preference” is conditional on the experimental treatment.Footnote91 These results reveal an important tradeoff in our participants’ choice: When the risk of target misidentification is even modestly higher for the remote-controlled drone relative to the autonomous drone, on balance, individuals appear to be willing to sideline potential concerns about the inherently undignified nature of automated killing.
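The interaction specification described above can be sketched as follows: “human dignity concern” interacted with the experimental condition in a logistic regression, followed by adjusted predictions of the probability of preferring LAWS (analogous to the margins plotted in Figure 7). The data are simulated placeholders, and the implementation in statsmodels is our own illustration, not the authors’ code.

```python
# Sketch of the interaction specification: "human dignity concern"
# interacted with the experimental condition in a logistic regression,
# followed by adjusted predictions across the dignity scale within each
# condition (analogous to a margins plot). Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 679  # sample size after excluding the "equal risk + responsibility" group

df = pd.DataFrame({
    "human_dignity_concern": rng.uniform(1, 6, n),
    "condition": rng.choice(
        ["control", "equal_risk", "unequal_risk", "highly_unequal_risk"], size=n
    ),
    "laws_preference": rng.binomial(1, 0.3, n),  # placeholder outcome
})

model = smf.logit(
    "laws_preference ~ human_dignity_concern * C(condition, Treatment(reference='control'))",
    data=df,
).fit(disp=False)

# Predicted probability of preferring LAWS at each point of the dignity
# scale, separately for each experimental condition.
grid = pd.DataFrame(
    [(d, c) for d in range(1, 7) for c in df["condition"].unique()],
    columns=["human_dignity_concern", "condition"],
)
grid["predicted_probability"] = model.predict(grid)
print(grid)
```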

Figure 7. Adjusted predictions of human dignity concern by experimental group.

Note: Interaction plot based on the results of a logistic regression. 95% CIs. N = 679. Lower N is due to the exclusion of our “equal risk + responsibility” treatment for better interpretability. See Appendix 12 for full results.


Despite these findings, it is still possible that concerns about the inherent unethicality of “killer robots” drive the aversion to LAWS when all else is equal. To investigate this possibility, we examine the answers to our open-ended question in the control group. The write-in responses reveal that at least some of our participants preferred the commander to use a remote-controlled drone because they believed, as a matter of principle, that lethal decision-making should have a human origin. In some cases, the participants also expressed concerns about the absence of distinctly human qualities, such as compassion, in the process of algorithmic decision-making. Nevertheless, we found no clear evidence that participants directly connected their rationale to the concept of human dignity or the undignified nature of automated killing:

“If the drone is killing people then it needs to be done by human hands.”

“[…] With a computer doing it, they don’t feel emotion and wouldn’t care who the target was or wasn’t.”

“I would rather have a human being involved. Humans have compassion, computers don’t.”

“[The autonomous drone] is more likely to kill innocent civilians than a drone under the control of a human that can make the moral choice not to fire.”Footnote92

Overall, these responses reveal that while some participants were concerned about the unethicality of ceding decisions to kill to algorithms, they did not necessarily think of such concerns in terms of violations of human dignity per se. This claim is further supported by the results of our last survey on the perceived differences between remote-controlled and autonomous drones (see Figure 8). Although a significant number of respondents (41%) indicated that being killed by a computer program in the autonomous drone is less ethical than being killed by a human piloting the remote-controlled drone, the vast majority of respondents (71%) saw no difference between the two types of drones when it comes to human dignity.Footnote93 This finding suggests that our participants perceived the ethical aspects of LAWS use as disconnected from the issue of human dignity.

Figure 8. Perceived differences between remote-controlled and autonomous drones.

Note: N = 1,037. The figure presents the aggregated results for all experimental treatments. See Appendix 14 for full results.


Overall, our results provide only very limited evidence in support of hypothesis 3. While we found that individuals who were sensitive to violations of human dignity exhibited a greater opposition to LAWS use on average, these attitudes were still significantly contingent on different factors, particularly the risk of target misidentification. When presented with scenarios where using an autonomous drone carries a lower risk of hitting the wrong target compared to using a remote-controlled drone, their aversion to LAWS was less pronounced. Furthermore, the open-ended responses in the control group suggest that while the all-else-equal aversion to “killer robots” could be partially driven by concerns about the inherent immorality of automated killing, our participants did not think about these concerns in terms of human dignity violations per se. As is evident from our survey on the perceived differences between remote-controlled and autonomous drones, the vast majority of individuals believe that being killed by a remote-controlled or autonomous drone is equally undignified.

Other Considerations

In addition to legal accountability, moral responsibility, human dignity, and ethicality, our survey on the perceived differences also inquired about other potential concerns related to the use of LAWS: risk of target misidentification, military effectiveness, costs, and force restraint.Footnote94 Using a 5-point scale, respondents indicated whether autonomous drones are better, worse, or equal to remote-controlled drones in each aspect. To assess the relative importance of these factors, we conducted a logistic regression of “LAWS preference,” using the eight measures of perceived differences as predictors.Footnote95

The results of this analysis are shown in Figure 9. The findings indicate that our participants were more likely to prefer autonomous drones over remote-controlled drones when they believed that the autonomous option carried a lower likelihood of target misidentification and a higher likelihood of accomplishing mission objectives, and that killing carried out by a computer program would be more ethical than killing performed by a human operator.

Figure 9. Logistic regression of “LAWS preference.”

Note: Results of the logistic regression. 95% CIs. Model 1 N = 162. Model 2 and Model 3 N = 160. Lower N is due to the exclusion of our “equal risk” and “equal risk + responsibility” treatments and the exclusion of participants who received the preference question before answering the questions on the perceived differences. See Appendix 14 for full results.


These findings provide clues about possible mechanisms underlying the aversion to LAWS when all else is equal. Respondents opposed “killer robots” partially because they believed they would be more prone to making mistakes in target selection and engagement, less militarily effective, and less ethical than human-operated systems. While the absence of a statistically significant association for other variables does not necessarily mean that these factors do not matter to our participants, they certainly matter less, on average, when it comes to the choice between remote-controlled and autonomous drones in our scenario.

Conclusion

In this research article, we have addressed a question of both scholarly and policy interest: What factors affect the elasticity of public attitudes to lethal autonomous weapon systems (LAWS), or “killer robots” as they are commonly known? First, we found that while the baseline aversion to LAWS is remarkably high, public attitudes are also considerably elastic. The public opposition to these weapon systems is significantly contingent on their error-proneness relative to human-operated systems. Consequently, when the public is presented with scenarios in which the use of LAWS carries a lower risk of target misidentification compared to human-operated systems, we observe a rapid and significant shift in preference toward using “killer robots.” The frequent mention of concerns about the insufficient level of technological sophistication in open-ended responses, along with the results of the supplementary survey on risk estimates, further suggest that beliefs about the accident-prone nature of the technology could constitute one of the main mechanisms underlying the baseline aversion to LAWS.

Second, we found no evidence in support of the proposition that explicitly mentioning command responsibility can alleviate opposition to LAWS. Finally, we found limited evidence in support of the idea that non-contingent concerns about the undignified nature of automated killing increase public opposition to these systems. On average, respondents who scored higher on our “human dignity concern” measure were less likely to prefer LAWS. However, additional analysis reveals that many participants are willing to set these concerns aside when the risk of target misidentification is even modestly higher for the remote-controlled drone than for the autonomous drone. Overall, our findings indicate that among the three factors explored in this study, concerns related to the accident-prone nature of autonomous systems have the strongest association with the attitudes of the U.S. public toward the hypothetical use of LAWS.

Our study has distinct policy implications for current international efforts to impose limitations or prohibitions on “killer robots.” The significant elasticity of attitudes toward the military use of LAWS, demonstrated through our primes on the risk of target misidentification, implies that regulations justified primarily by public opposition may be unsustainable in the long run. If LAWS eventually prove more reliable at target discrimination than human-operated systems, public support for their use is likely to follow.

In our view, this does not automatically discount the value of regulatory measures. First, there is no guarantee that the technology will advance enough to outperform human decision-makers. Second, our findings regarding the public’s sensitivity to violations of human dignity, the write-in responses to the open-ended question, and the results of our survey on the perceived differences suggest that at least some part of the aversion appears to be driven by non-contingent concerns. As argued elsewhere, emphasizing the unpredictable and indiscriminate nature of “killer robots” may prove effective in mobilizing the public in support of regulatory measures in the short run.Footnote96 However, such a framing strategy would remain vulnerable to shifts in public opinion caused by technological change. Instead, an approach that combines arguments about the inherent immorality and the accident-prone nature of LAWS in a balanced fashion appears most advantageous for mobilizing broader segments of the public in support of regulation.

While illuminating with respect to some of the influential factors behind public attitudes to “killer robots,” our study is not without limitations. Our “human dignity concern” measure may not capture all aspects underlying the human dignity argument. Future studies could develop a more nuanced measure by incorporating other conceptually analogous situations. Furthermore, we have examined only the attitudes of the U.S. public, but attitudes to “killer robots” and the relative importance of the different factors affecting them may differ across countries. Lastly, the attitudes of specific groups, such as political elites and the military, may also differ substantially from those of the general public. These limitations present potentially intriguing avenues for future research.

Supplemental material

Supplemental material for this article is available online.

Acknowledgements

We would like to express our gratitude to the editorial team, the anonymous reviewers, Anna Nadibaidze, Doreen Horschig, Halvard Buhaug, Lucas Tamayo Ruiz, Neil Renic, the participants of our 2023 ISA Annual Convention panel, as well as the attendees of research seminars at the Peace Research Center Prague and Institute for Peace Research and Security Policy at the University of Hamburg, for their valuable comments and suggestions on earlier drafts of this manuscript. We also gratefully acknowledge funding from the Charles University’s program PRIMUS/22/HUM/005 (Experimental Lab for International Security Studies – ELISS).

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Data Availability Statement

The data and materials that support the findings of this study are available in the Harvard Dataverse at https://doi.org/10.7910/DVN/8PDOGJ.

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

Univerzita Karlova v Praze.

Notes on contributors

Ondřej Rosendorf

Ondřej Rosendorf is a doctoral student at the Faculty of Social Sciences, Charles University, and a Researcher at the Institute for Peace Research and Security Policy at the University of Hamburg.

Michal Smetana

Michal Smetana is an Associate Professor at the Faculty of Social Sciences, Charles University, and the Head of the Peace Research Center Prague.

Marek Vranka

Marek Vranka is an Assistant Professor at the Faculty of Social Sciences, Charles University, and a Researcher at the Peace Research Center Prague.

Notes

1 For a discussion of the role of emerging technologies for international politics, see Michael C. Horowitz, “Do Emerging Military Technologies Matter for International Politics?” Annual Review of Political Science 23, no. 1 (May 2020): 386. On specific technologies, see, for example, Michael C. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review 1, no. 3 (May 2018): 36–57; Kenneth Payne, “Artificial Intelligence: A Revolution in Strategic Affairs?” Survival 60, no. 5 (September 2018): 7–32; Benjamin M. Jensen, Christopher Whyte, and Scott Cuomo, “Algorithms at War: The Promise, Peril, and Limits of Artificial Intelligence,” International Studies Review 22, no. 3 (September 2020): 526–50; Avi Goldfarb and Jon R. Lindsay, “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” International Security 46, no. 3 (February 2022): 7–50.

2 Michael C. Horowitz, “When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability,” Journal of Strategic Studies 42, no. 6 (August 2019): 764–88.

3 See, for example, Elvira Rosert and Frank Sauer, “How (not) to Stop the Killer Robots: A Comparative Analysis of Humanitarian Disarmament Campaign Strategies,” Contemporary Security Policy 42, no. 1 (2021): 4–29; Ondrej Rosendorf, “Predictors of Support for a Ban on Killer Robots: Preventive Arms Control as an Anticipatory Response to Military Innovation,” Contemporary Security Policy 42, no. 1 (2021): 30–52.

4 Charli Carpenter, “How Do Americans Feel about Fully Autonomous Weapons?” Duck of Minerva, 19 June 2013, https://www.duckofminerva.com/2013/06/how-do-americans-feel-about-fully-autonomous-weapons.html; Kevin L. Young and Charli Carpenter, “Does Science Fiction Affect Political Fact? Yes and No: A Survey Experiment on ‘Killer Robots’,” International Studies Quarterly 62, no. 3 (August 2018): 562–76; Ipsos, “Global Survey Highlights Continued Opposition to Fully Autonomous Weapons,” 2 February 2021, https://www.ipsos.com/en-us/global-survey-highlights-continued-opposition-fully-autonomous-weapons; Ondrej Rosendorf, Michal Smetana, and Marek Vranka, “Autonomous Weapons and Ethical Judgments: Experimental Evidence on Attitudes toward the Military Use of ‘Killer Robots’,” Peace and Conflict 28, no. 2 (May 2022): 177–83.

5 Michael C. Horowitz, “Public Opinion and the Politics of the Killer Robots Debate,” Research & Politics 3, no. 1 (February 2016): 1–8.

6 See, for example, Ingvild Bode and Hendrik Huelss, “Autonomous Weapons Systems and Changing Norms in International Relations,” Review of International Studies 44, no. 3 (July 2018): 393–413; Rosert and Sauer, “How (not) to Stop the Killer Robots”; Rosendorf, “Predictors of Support for a Ban on Killer Robots”; Anna Nadibaidze, “Great Power Identity in Russia’s Position on Autonomous Weapons Systems,” Contemporary Security Policy 43, no. 3 (May 2022): 407–35.

7 See, for example, Daryl G. Press, Scott D. Sagan, and Benjamin A. Valentino, “Atomic Aversion: Experimental Evidence on Taboos, Traditions, and the Non-Use of Nuclear Weapons,” American Political Science Review 107, no. 1 (February 2013): 188–206; Scott D. Sagan and Benjamin A. Valentino, “Revisiting Hiroshima in Iran: What Americans Really Think about Using Nuclear Weapons and Killing Noncombatants,” International Security 42, no. 1 (July 2017): 41–79; Horowitz, “Public Opinion and the Politics of the Killer Robots Debate”; Janina Dill, Scott D. Sagan, and Benjamin A. Valentino, “Kettles of Hawks: Public Opinion on the Nuclear Taboo and Noncombatant Immunity in the United States, United Kingdom, France, and Israel,” Security Studies 31, no. 1 (February 2022): 1–31.

8 See also Horowitz, “Public Opinion and the Politics of the Killer Robots Debate”; Michael C. Horowitz and Sarah Maxey, “Morally Opposed? A Theory of Public Attitudes and Emerging Military Technologies,” unpublished manuscript, 28 May 2020, https://doi.org/10.2139/ssrn.3589503.

9 Horowitz, “Do Emerging Military Technologies Matter for International Politics?” 386.

10 Payne, “Artificial Intelligence,” 7–11. See also Jensen, Whyte, and Cuomo, “Algorithms at War”; Goldfarb and Lindsay, “Prediction and Judgment”; Antonio Calcara, Andrea Gilli, Mauro Gilli, Raffaele Marchetti, and Ivan Zaccagnini, “Why Drones Have not Revolutionized War: The Enduring Hider-Finder Competition in Air Warfare,” International Security 46, no. 4 (April 2022): 130–71.

11 See, for example, Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power”; Payne, “Artificial Intelligence”; Katarzyna Zysk, “Defence Innovation and the 4th Industrial Revolution in Russia,” Journal of Strategic Studies 44, no. 4 (December 2020): 543–71; Elsa B. Kania, “Artificial Intelligence in China’s Revolution in Military Affairs,” Journal of Strategic Studies 44, no. 4 (May 2021): 515–42.

12 ICRC, Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons (Versoix: ICRC, 2016), 31.

13 Vincent Boulanin and Maaike Verbruggen, Mapping the Development of Autonomy in Weapon Systems (Solna: SIPRI, 2017), 8; Paul Scharre and Michael C. Horowitz, “An Introduction to Autonomy in Weapon Systems,” Center for a New American Security, 13 February 2015, 7, https://www.cnas.org/publications/reports/an-introduction-to-autonomy-in-weapon-systems.

14 Michael C. Horowitz, “Why Words Matter: The Real World Consequences of Defining Autonomous Weapons Systems,” Temple International and Comparative Law Journal 30, no. 1 (Spring 2016): 90–91; Frank Sauer, “Stepping Back from the Brink: Why Multilateral Regulation of Autonomy in Weapons Systems Is Difficult, Yet Imperative and Feasible,” International Review of the Red Cross 102, no. 913 (April 2020): 240–41.

15 Ronald C. Arkin, “The Case for Ethical Autonomy in Unmanned Systems,” Journal of Military Ethics 9, no. 4 (December 2010): 333–34; Jürgen Altmann and Frank Sauer, “Autonomous Weapon Systems and Strategic Stability,” Survival 59, no. 5 (September 2017): 119; Horowitz, “When Speed Kills,” 769–70.

16 Altmann and Sauer, “Autonomous Weapon Systems and Strategic Stability,” 128–32; Horowitz, “When Speed Kills,” 781–83.

17 Noel E. Sharkey, “The Evitability of Autonomous Robot Warfare,” International Review of the Red Cross 94, no. 886 (June 2012): 787–99; ICRC, “ICRC Position on Autonomous Weapon Systems,” background paper, 12 May 2021, https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems.

18 Peter Asaro, “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making,” International Review of the Red Cross 94, no. 886 (June 2012): 708–09; Christof Heyns, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns,” Human Rights Council, 9 April 2013, https://digitallibrary.un.org/record/755741.

19 See, for example, Ingvild Bode, “Norm-Making and the Global South: Attempts to Regulate Lethal Autonomous Weapons Systems,” Global Policy 10, no. 3 (June 2019): 359–64; Rosert and Sauer, “How (not) to Stop the Killer Robots,” 18–21; Nadibaidze, “Great Power Identity in Russia’s Position on Autonomous Weapons Systems,” 416–18.

20 Charli Carpenter, “A Better Path to a Treaty Banning ‘Killer Robots’ Has Just Been Cleared,” World Politics Review, 7 January 2022, https://www.worldpoliticsreview.com/a-better-path-to-a-treaty-banning-ai-weapons-killer-robots/; Ousman Noor, “Russia Leads an Assault on Progress at UN Discussions, the CCW Has Failed,” Stop Killer Robots, 4 August 2022, https://www.stopkillerrobots.org/news/russia-leads-an-assault-on-progress-at-un-discussions-the-ccw-has-failed/.

21 Young and Carpenter, “Does Science Fiction Affect Political Fact?” 562; Horowitz and Maxey, “Morally Opposed?” 2; Rosendorf, Smetana, and Vranka, “Autonomous Weapons and Ethical Judgments,” 178.

22 Human Rights Watch & International Human Rights Clinic, “Losing Humanity: The Case against Killer Robots,” report, 19 November 2012, 35, https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots; UNIDIR, “The Weaponization of Increasingly Autonomous Technologies: Considering Ethics and Social Values,” UNIDIR Resources No. 3, 30 March 2015, 6, https://www.unidir.org/publication/weaponization-increasingly-autonomous-technologies-considering-ethics-and-social-values; ICRC, “Ethics and Autonomous Weapon Systems: An Ethical Basis for Human Control?” ICRC report, 3 April 2018, 5–6, https://www.icrc.org/en/document/ethics-and-autonomous-weapon-systems-ethical-basis-human-control.

23 Noah Castelo, Maarten W. Bos, and Donald R. Lehmann, “Task-Dependent Algorithm Aversion,” Journal of Marketing Research 56, no. 5 (July 2019): 809–25; William M. Grove, David H. Zald, Boyd S. Lebow, Beth E. Snitz, and Chad Nelson, “Clinical Versus Mechanical Prediction: A Meta-Analysis,” Psychological Assessment 12, no. 1 (March 2000): 19–30.

24 For a discussion of error, see Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey, “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General 144, no. 1 (2015): 114–26. For a discussion of complexity, see Eric Bogert, Aaron Schecter, and Richard T. Watson, “Humans Rely More on Algorithms than Social Influence as a Task Becomes More Difficult,” Scientific Reports 11, no. 1 (April 2021): 1–9. On the type of task, see Castelo, Bos, and Lehmann, “Task-Dependent Algorithm Aversion.”

25 Chiara Longoni, Andrea Bonezzi, and Carey K. Morewedge, “Resistance to Medical Artificial Intelligence,” Journal of Consumer Research 46, no. 4 (December 2019): 629.

26 Carpenter, “How Do Americans Feel about Fully Autonomous Weapons?”

27 Ipsos, “Global Survey Highlights Continued Opposition to Fully Autonomous Weapons.”

28 For insights into the attitudes of researchers, see Baobao Zhang, Markus Anderljung, Lauren Kahn, Noemi Dreksler, Michael C. Horowitz, and Allan Dafoe, “Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers,” Journal of Artificial Intelligence Research 71 (August 2021): 591–666. For insights into the attitudes of local officials, see Michael C. Horowitz and Lauren Kahn, “What Influences Attitudes about Artificial Intelligence Adoption: Evidence from U.S. Local Officials,” PLoS ONE 16, no. 10 (October 2021): 1–20.

29 For a discussion of military effectiveness, see Michael C. Horowitz, “Public Opinion and the Politics of the Killer Robots Debate.” For a discussion of responsibility, see James I. Walsh, “Political Accountability and Autonomous Weapons,” Research & Politics 2, no. 4 (October 2015): 1–6; Rosendorf, Smetana, and Vranka, “Autonomous Weapons and Ethical Judgments.” On sci-fi literacy, see Young and Carpenter, “Does Science Fiction Affect Political Fact?”

30 Michael C. Horowitz, “Public Opinion and the Politics of the Killer Robots Debate.”

31 Duncan Purves, Ryan Jenkins, and Bradley J. Strawser, “Autonomous Machines, Moral Judgment, and Acting for the Right Reasons,” Ethical Theory and Moral Practice 18, no. 4 (January 2015): 851–72; Horowitz and Maxey, “Morally Opposed?”

32 Horowitz and Maxey, “Morally Opposed?” 3.

33 Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton & Company, 2018), chapter 9.

34 For a discussion of predictability and reliability, see UNIDIR, “The Weaponization of Increasingly Autonomous Technologies: Concerns, Characteristics and Definitional Approaches,” UNIDIR Resources No. 6, 9 November 2017, https://www.unidir.org/publication/weaponization-increasingly-autonomous-technologies-concerns-characteristics-and; Heather M. Roff and David Danks, “‘Trust but Verify’: The Difficulty of Trusting Autonomous Weapons Systems,” Journal of Military Ethics 17, no. 1 (June 2018): 2–20. On “robocalyptic” imaginaries, see Young and Carpenter, “Does Science Fiction Affect Political Fact?”

35 Ipsos, “Global Survey Highlights Continued Opposition to Fully Autonomous Weapons.”

36 Robert R. Hoffman, Timothy M. Cullen, and John K. Hawley, “The Myths and Costs of Autonomous Weapon Systems,” Bulletin of the Atomic Scientists 72, no. 4 (June 2016): 248–49.

37 Daniele Amoroso and Guglielmo Tamburrini, “Toward a Normative Model of Meaningful Human Control over Weapons Systems,” Ethics & International Affairs 35, no. 2 (Summer 2021): 252–53.

38 Michael C. Horowitz, Lauren Kahn, and Laura Resnick Samotin, “A Force for the Future: A High-Reward, Low-Risk Approach to AI Military Innovation,” Foreign Affairs 101, no. 3 (April 2022): 162; Sauer, “Stepping Back from the Brink,” 249.

39 See, for example, Sharkey, “The Evitability of Autonomous Robot Warfare”; Heyns, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns”; Robert Sparrow, “Twenty Seconds to Comply: Autonomous Weapon Systems and the Recognition of Surrender,” International Law Studies 91 (October 2015): 699–728; Michael C. Horowitz, “The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons,” Daedalus 145, no. 4 (September 2016): 25–36; Elvira Rosert and Frank Sauer, “Prohibiting Autonomous Weapons: Put Human Dignity First,” Global Policy 10, no. 3 (July 2019): 370–75.

40 Marcus Schulzke, “Robots as Weapons in Just Wars,” Philosophy & Technology 24, no. 3 (April 2011): 300–01; Heyns, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns,” 13.

41 Garry Young, “On the Indignity of Killer Robots,” Ethics and Information Technology 23, no. 3 (April 2021): 473–74.

42 Ronald C. Arkin, “The Case for Ethical Autonomy in Unmanned Systems,” 333–34.

43 Dietvorst, Simmons, and Massey, “Algorithm Aversion.”

44 Janina Dill and Livia I. Schubiger, “Attitudes toward the Use of Force: Instrumental Imperatives, Moral Principles, and International Law,” American Journal of Political Science 65, no. 3 (June 2021): 612–33.

45 See, for example, Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology 6, no. 3 (September 2004): 175–83; Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (March 2007): 62–77.

46 Asaro, “On Banning Autonomous Weapon Systems,” 693; Daniele Amoroso and Benedetta Giordano, “Who Is to Blame for Autonomous Weapons Systems’ Misdoings?” in Use and Misuse of New Technologies: Contemporary Challenges in International and European Law, eds. Elena Carpanelli and Nicole Lazzerini (Cham: Springer, 2019), 213–15.

47 See, for example, Sparrow, “Killer Robots.”

48 Michael N. Schmitt, “Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics,” Harvard National Security Journal 4 (February 2013): 33–34.

49 Lode Lauwaert, “Artificial Intelligence and Responsibility,” AI & Society 36, no. 3 (January 2021): 1004; Isaac Taylor, “Who Is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military-Industrial Complex,” Journal of Applied Philosophy 38, no. 2 (May 2021): 322.

50 Marcus Schulzke, “Autonomous Weapons and Distributed Responsibility,” Philosophy & Technology 26, no. 2 (June 2013): 204.

51 Schulzke, “Autonomous Weapons and Distributed Responsibility,” 204, 211; Michael Robillard, “No Such Thing as Killer Robots,” Journal of Applied Philosophy 35, no. 4 (November 2018): 709.

52 Ipsos, “Global Survey Highlights Continued Opposition to Fully Autonomous Weapons.”

53 See, for example, Heyns, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns”; Christof Heyns, “Human Rights and the Use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement,” Human Rights Quarterly 38, no. 2 (May 2016): 350–78; Amanda Sharkey, “Autonomous Weapons Systems, Killer Robots and Human Dignity,” Ethics and Information Technology 21, no. 2 (June 2019): 75–87; Rosert and Sauer, “Prohibiting Autonomous Weapons.”

54 Heyns, “Human Rights and the Use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement,” 11.

55 Rosert and Sauer, “Prohibiting Autonomous Weapons,” 372.

56 ICRC, “ICRC Position on Autonomous Weapon Systems,” 8.

57 Sauer, “Stepping Back from the Brink,” 254–55.

58 Christof Heyns, “Autonomous Weapons in Armed Conflict and the Right to a Dignified Life: An African Perspective,” South African Journal on Human Rights 33, no. 1 (February 2017): 49.

59 Asaro, “On Banning Autonomous Weapon Systems,” 708–09; Heyns, “Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns,” 17.

60 Sharkey, “Autonomous Weapons Systems, Killer Robots and Human Dignity,” 79–80.

61 Schmitt, “Autonomous Weapon Systems and International Humanitarian Law”; ICRC, “Ethics and Autonomous Weapon Systems,” 11.

62 Dieter Birnbacher, “Are Autonomous Weapons Systems a Threat to Human Dignity?” in Autonomous Weapons Systems: Law, Ethics, Policy, eds. Nehal C. Bhuta, Susanne Beck, Robin Geiß, Hin-Yan Liu, and Claus Kreß (Cambridge: Cambridge University Press, 2016), 120; Horowitz, “The Ethics & Morality of Robotic Warfare,” 32.

63 Deane-Peter Baker, “The Awkwardness of the Dignity Objection to Autonomous Weapons,” Strategy Bridge, 6 December 2018, https://thestrategybridge.org/the-bridge/2018/12/6/the-awkwardness-of-the-dignity-objection-to-autonomous-weapons; Sharkey, “Autonomous Weapons Systems, Killer Robots and Human Dignity,” 79–80.

64 Ipsos, “Global Survey Highlights Continued Opposition to Fully Autonomous Weapons.”

65 See Appendix 1 for a full description of the scenario and all survey items. A detailed discussion of the ethical considerations associated with this study is provided in Appendix 2.

66 We acknowledge that asking about approval of drone strikes could make some respondents believe that supporting remote-controlled strikes over LAWS is the socially desirable answer. However, the type of drone in this question was left unspecified, and participants were informed about the commander’s choice between the remote-controlled and autonomous drone on the same page. Furthermore, evidence from existing meta-studies indicates that online surveys are less susceptible to social desirability bias than in-person or telephone interviews. See Marcella K. Jones, Liviana Calzavara, Dan Allman, Catherine A. Worthington, Mark Tyndall, and James Iveniuk, “A Comparison of Web and Telephone Responses From a National HIV and AIDS Survey,” JMIR Public Health and Surveillance 2, no. 2 (July 2016).

67 ICRC, Autonomous Weapon Systems, 31; Boulanin and Verbruggen, Mapping the Development of Autonomy in Weapon Systems, 8.

68 Bureau of Investigative Journalism, “Drone Warfare,” https://www.thebureauinvestigates.com/projects/drone-war.

69 Our respondents also had the option to choose “neither.” However, participants who selected this option were subsequently forced to choose one of the drone options. Appendix 15 contains the results of an alternative analysis using an ordinal dependent variable with the intermediate “neither” category.

70 Press, Sagan, and Valentino, “Atomic Aversion”; Brian C. Rathbun and Rachel Stein, “Greater Goods: Morality and Attitudes toward the Use of Nuclear Weapons,” Journal of Conflict Resolution 64, no. 5 (May 2020): 787–816; Dill, Sagan, and Valentino, “Kettles of Hawks”; Michal Smetana and Michal Onderco, “From Moscow With a Mushroom Cloud? Russian Public Attitudes to the Use of Nuclear Weapons in a Conflict With NATO,” Journal of Conflict Resolution 67, no. 2–3 (February–March 2023): 183–209.

71 Following Aronow, Baron, and Pinson, we did not exclude the participants who failed the manipulation check. See Peter M. Aronow, Jonathan Baron, and Lauren Pinson, “A Note on Dropping Experimental Subjects who Fail a Manipulation Check,” Political Analysis 27, no. 4 (May 2019): 572–89. The results of the analysis after excluding those participants are in Appendix 7, 9, and 11.

72 See supplemental materials for Horowitz, “Public Opinion and the Politics of the Killer Robots Debate.”

73 Charli Carpenter, Alexander H. Montgomery, and Alexandria Nylen, “Braking Bad? How Survey Experiments Prime Americans for War Crimes,” Perspectives on Politics 19, no. 3 (September 2021): 912–24.

74 On Prolific as a survey tool, see Eyal Peer, Laura Brandimarte, Sonam Samat, and Alessandro Acquisti, “Beyond the Turk: Alternative Platforms for Crowdsourcing Behavioral Research,” Journal of Experimental Social Psychology 70 (May 2017): 153–63; Stefan Palan and Christian Schitter, “Prolific.ac—A Subject Pool for Online Experiments,” Journal of Behavioral and Experimental Finance 17 (March 2018): 22–27.

75 See, for example, Peer, Brandimarte, Samat, and Acquisti, “Beyond the Turk.”

76 In our sample, 63% of participants held a university degree, compared to 48% reported by the United States Census Bureau in February 2022. See United States Census Bureau, “Census Bureau Releases New Educational Attainment Data,” 24 February 2022, https://www.census.gov/newsroom/press-releases/2022/educational-attainment.html. Additionally, the distribution of party affiliation in our sample was 33% Republicans, 34% Democrats, and 33% Independents, compared to the Gallup poll from February 2023, which reported 27% Republicans, 28% Democrats, and 44% Independents. See Gallup, “Party Affiliation,” 1–23 February 2023, https://news.gallup.com/poll/15370/party-affiliation.aspx. The results of the analysis with sampling weights appear in Appendix 7, 9, and 11. Appendix 6 provides the descriptive statistics of our sample.

77 See Appendix 1 for a full description of the analogous scenario and all survey items.

78 See Appendix 3 for all survey items.

79 See Appendix 4 for all survey items. The follow-up survey on perceived differences between remote-controlled and autonomous drones was preregistered using the Open Science Framework (see Appendix 5).

80 In Appendix 7, we show that these results are robust to the inclusion of controls and the use of different estimation techniques, but not to the exclusion of participants who failed the manipulation check or the use of sampling weights.

81 t(299) = –12.8, p < 0.001. See Appendix 8.

82 OR = 1.036, p < 0.001. See Appendix 8.

83 OR = 1.219, p = 0.484. In Appendix 9, we show that these findings are robust to the inclusion of controls, the exclusion of participants who failed the manipulation check, and the use of alternative estimation techniques and sampling weights. Since the experimental part of our additional survey on the perceived differences included the two conditions from the main experiment, we were able to repeat the analysis with a larger sample. In Appendix 10, we show that the null finding holds.

84 Results of a paired t-test indicate that there was no statistically significant difference between the measurement of legal accountability and moral responsibility (t(1036) = 0.669, p = 0.504).

85 Only 8 out of 216 respondents in the control group mentioned this type of concern.

86 Since the experimental part of this survey included the “equal risk + responsibility” treatment, we conducted an ordinal logistic regression to analyze the relationship between the perceived differences in legal accountability and moral responsibility, on the one hand, and the “equal risk + responsibility” treatment, on the other hand. The results indicate that the responsibility prime had no statistically significant effect on the perceived differences in legal accountability and moral responsibility (see Appendix 10).

87 OR = 0.991, p = 0.919 for legal accountability; OR = 0.979, p = 0.806 for moral responsibility. These findings hold even when accounting for other factors. See Appendix 10.

88 Older respondents were slightly more likely to participate in this second wave. See Appendix 17 for an overview of survey attrition.

89 p < 0.05 in Models 1 and 3, and p < 0.01 in Model 2.

90 In Appendix 11, we show that the results are robust to the exclusion of participants who failed the manipulation check, use of alternative estimation techniques, and inclusion of controls for our experimental treatments, but not to the use of sampling weights. This could be attributed to survey attrition, as there was a slight underrepresentation of younger participants.

91 The results, available in Appendix 12, reveal that the “human dignity concern” fails to reach statistical significance in the “unequal risk” and “highly unequal risk” groups.

92 Of particular note, neither the term “dignity” itself nor any references to the treatment of humans as mere objects were mentioned in the write-in responses to the open-ended question.

93 In Appendix 13 and 14, we further show that those who believed that being killed by LAWS is less ethical were more likely to prefer remote-controlled drones, whereas those who believed that being killed by LAWS is more undignified were neither more nor less likely to prefer them.

94 The risk of target misidentification pertains to the probability of mistakenly identifying civilians as targets. Military effectiveness relates to the likelihood of achieving mission objectives. Costs refer to the degree of expense incurred. Force restraint refers to the extent to which decision-makers will feel constrained in using military force in future situations.

95 This analysis focused solely on the respondents from the control group who answered the preference question after expressing their opinion on the perceived differences. To facilitate interpretation, we recoded the risk of target misidentification, costs, and ethicality such that higher values on all eight measures indicate a belief that autonomous drones are better in this aspect.

96 Rosert and Sauer, “Prohibiting Autonomous Weapons,” 372; Rosert and Sauer, “How (not) to Stop the Killer Robots,” 22.