Strategic Stability in the 21st Century

The Impact of AI on Strategic Stability is What States Make of It: Comparing US and Russian Discourses

Pages 47-67 | Received 04 Oct 2022, Accepted 17 Apr 2023, Published online: 26 Apr 2023

ABSTRACT

Military applications of artificial intelligence (AI) are said to impact strategic stability, broadly defined as the absence of incentives for armed conflict between nuclear powers. While previous research explores the potential implications of AI for nuclear deterrence based on technical characteristics, little attention has been dedicated to understanding how policymakers of nuclear powers conceive of AI technologies and their impacts. This paper argues that the relationship between AI and strategic stability is not only given through the technical nature of AI, but also constructed by policymakers’ beliefs about these technologies and other states’ intentions to use them. Adopting a constructivist perspective, we investigate how decision-makers from the United States and Russia talk about military AI by analyzing US and Russian official discourses from 2014–2023 and 2017–2023, respectively. We conclude that both sides have constructed a threat out of their perceived competitors’ AI capabilities, reflecting their broader perspectives of strategic stability, as well as the social context characterized by distrust and feelings of competition. Their discourses fuel a cycle of misperceptions which could be addressed via confidence-building measures. However, this competitive cycle is unlikely to improve due to ongoing tensions following the Russian invasion of Ukraine.

Introduction

Armed forces of many states are integrating artificial intelligence (AI) technologies into military command and weapon systems. A rapidly growing literature is exploring the implications of military AI for international security and warfare, including in the nuclear sphere. Among others, experts debate the impact of AI on strategic stability, broadly defined as “a state of affairs in which countries are confident that their adversaries would not be able to undermine their nuclear deterrent capability using nuclear, conventional, cyber or other unconventional means” (Boulanin Citation2019, 4). Some studies portray military AI as a threat to strategic stability, highlighting issues of uncertainty, risks of inadvertent escalation, and a potential shift in the nuclear balance (Hruby and Miller Citation2021; Johnson Citation2020; Rickli Citation2019). Others examine the benefits associated with military applications of AI, noting, for instance, potential improvements in target identification, early warning systems, or verification in arms control (Sokova Citation2020, 296; Cox and Williams Citation2021).

This article contributes to this debate by adopting a constructivist perspective where the impact of AI on strategic stability is “what states make of it” (Wendt Citation1992). The impact of technologies does not only depend on their technical characteristics, but also on policymakers’ perceptions of these technologies. Innovation, especially in the defense sphere, is not considered an independent or intervening variable which inevitably leads to a certain outcome in the global balance of power, as is often assumed in rationalist-based accounts (Drezner Citation2019, 287). Rather, a technology’s implications are formed via a social process and interactions between actors “that view the potential of a given technology differently, in a manner that corresponds to their beliefs, preferences, and vision of what that technology can do for them” (Adamsky Citation2010, 7). These perceptions of technological capabilities shape phenomena in international security such as nuclear deterrence. Deterrence is formed via social constructions, “ideas and knowledge”, including in relation to technology, and their interpretations (Lupovici Citation2010, 715–16). While material factors are not discarded, more attention should be paid to the social processes surrounding technological development and actors’ understandings of these changes.

We investigate how the impact of AI is shaped by policymakers’ understandings and conceptions of these technologies, as well as the implications of this social process for arms control. This is important at a time of increased confrontation and distrust between the states with the two largest nuclear arsenals: the United States and Russia. The United States possesses approximately 3,708 deployed and stored warheads (Kristensen and Korda Citation2023), while Russia has approximately 4,447 (Kristensen and Korda Citation2022a, Citation2022b).Footnote1 Together they hold most of the world’s nuclear warheads, raising the stakes of a potential conflict. US and Russian leaderships are visibly interested in integrating AI into the military domain, both in conventional and nuclear forces (Boulanin et al. Citation2020). Understanding how their officials perceive technological change and its relationship with strategic stability is pressing, as fears about nuclear escalation have intensified following Russia’s full-scale invasion of Ukraine and its continuous nuclear threats (Bollfrass and Herzog Citation2022). The potential consequences stemming from the development of military AI by the United States and Russia include misperceptions of each other’s capabilities and intentions, as well as inadvertent escalation. Considering the current soaring tensions, these are notable risks for international security.

The article argues that examining how political and military officials in the United States and Russia conceive of AI technologies allows for an in-depth understanding of the relationship between AI and strategic stability, beyond the analysis of the technologies themselves. It seeks to address the following questions: How do policymakers in the United States and Russia talk about AI and its impacts on strategic stability? What are the policy implications of these discourses? The analysis in the respective sections on US and Russian discourses is guided by the following sub-questions: How does the US/Russian leadership talk about AI technologies? Has the official discourse evolved? The paper is structured as follows. First, it reviews the concept of strategic stability and its historical development, discusses literature about AI and strategic stability, and explores the added value of a constructivist approach to understand how policymakers talk about this issue. The second and third sections present the analysis of US and Russian discourses. The fourth section compares these discourses and reflects on the implications of their competitive dynamics for arms control and global regulation, considering developments following Russia’s invasion of Ukraine.

Strategic Stability in a Historical Perspective

The definition of strategic stability remains debated (Rubin and Stulberg Citation2018). The term is often associated with the lack of incentives for the two Cold War-era major powers, the United States and the Soviet Union, to use nuclear weapons first (Acton Citation2013, 117). The prospect of nuclear retaliation in case of a pre-emptive nuclear attack ensured stability between the two superpowers: secure second-strike capabilities guaranteed that neither would strike first. Maintaining this balance helped avoid nuclear war and mutual destruction (Schelling Citation1958; Wohlstetter Citation1959). In 1990, both sides defined strategic stability as removing “incentives for a nuclear first strike” by “reducing the concentration of warheads on strategic delivery vehicles, and on giving priority to highly survivable systems” (US and USSR Citation1990). In the following decades, the United States and Russia have tried to build upon this agreed concept by engaging in bilateral arms control negotiations (Arbatov Citation2021).

In the post-Cold War period, numerous scholars have called for re-defining the concept of strategic stability to reflect the complexity of the twenty-first century security environment (Arbatov and Dvorkin Citation2011; Cimbala and Scouras Citation2002, chap. 2; Colby Citation2013; Garcia Citation2017; Karaganov and Suslov Citation2019). With technological developments in areas including but not limited to anti-satellite weapon systems, hypersonic missiles, cybersecurity, and biotechnology, experts have argued for broadening strategic stability in a way that “acknowledges that emerging technologies might increase the vulnerability of nuclear arsenals” and affect nuclear decision-making, strategic calculations, and conflict escalation (Favaro Citation2022; see also Acton Citation2017; Chyba Citation2020; Kosal Citation2020). The original definition has been said to be “no longer sufficient” to reflect the need to prevent “a military conflict between any nuclear weapon states” and to study how technologies affect “incentives to use nuclear weapons at any level” (Trenin Citation2019, 7). Strategic stability could therefore be better defined as a situation where no nuclear state has an incentive to use military force and change the status quo via conflict (Ford Citation2020, 1).

AI and Strategic Stability

AI is an umbrella term which can refer to “a wide set of computational techniques that allow computers and robots to solve complex, seemingly abstract problems that had previously yielded only to human cognition” (Boulanin Citation2019, 4). AI technologies are general-purpose and dual-use, as they can be applied for both peaceful and harmful purposes (Lupovici Citation2021). Their impact on strategic stability is predicted to be “both widespread and widely varying by application” (Chyba Citation2020, 161). All states in possession of nuclear weapons demonstrate a clear interest in modernizing their armed forces by integrating AI (Boulanin et al. Citation2020, chap. 3). Although many of these plans are kept secret, recent developments have generated debates about the potential opportunities and risks of military AI (Payne Citation2021; Scharre Citation2018).

AI technologies are associated with potential advantages and stabilizing effects in the nuclear sphere, particularly in arms control. Researchers and practitioners argue that AI can enhance, among others, data processing, object recognition, satellite or drone image analysis, video surveillance, tracking of movements, and sensor coordination (IAEA Citation2022, 77–78; Schörnig Citation2022). Advances in AI could improve early warning systems, monitoring of movements or facilities, as well as verification of arms control agreements, which “relies on the interpretation and evaluation of large sets of data” (Baldus Citation2022, 111). Natural language processing and large language models could refine translation and analysis of texts in diplomatic processes (Favaro Citation2023).

However, the militarization of AI also entails risks to strategic stability. The increasing speed of decision-making based on algorithms, automation bias, and over-trust in automated (often complex) systems could increase the risk of unintended errors (Johnson Citation2022; Rautenbach Citation2023). Experts point to previous failures of automated “dead hand” early warning systems, arguing that AI is “brittle” and could result in malfunctions which affect states’ confidence in their second-strike capabilities (Horowitz Citation2019; Johnson Citation2020; Kallenborn Citation2022). Studies also highlight risks associated with AI-enabled remotely controlled nuclear delivery platforms (Boulanin et al. Citation2020), spoofing and adversarial attacks, as well as the role of AI in cyber-attacks, including on nuclear facilities (Johnson Citation2019a; Sharikov Citation2018).

The integration of AI into conventional weapon systems and military command is predicted to create instability by leading to miscalculations, misperceptions and escalation which could change actors’ incentives to engage in military action (Altmann and Sauer Citation2017; Geist and Lohn Citation2018; Johnson Citation2019b; Kozyulin Citation2019). Overall, existing literature on military AI tends to portray these trends as a risk which “deteriorates” (Sauer Citation2020, 251) or “reduces” strategic stability by introducing new offensive threats, increasing unpredictability and the speed of warfare, as well as encouraging a competition for AI (Rickli Citation2019, 96–98). The United States and Russia possess the world’s largest nuclear arsenals and are both pursuing further integration of AI applications into their armed forces, including nuclear command, control, and communications (NC3), in a non-transparent way. The competitive dynamics of US and Russian discourses form a “race to the bottom” in military AI, increasing the risks of accidents and unintended decisions which could affect strategic stability (Scharre Citation2021).

What Do Policymakers Make of AI and Strategic Stability?

Most studies focus on technical characteristics of AI such as the bias or incompleteness of data used in algorithms, the latter’s unpredictability, or the vulnerability to hacking. Others have explored strategic and theoretical implications of technical developments, including in areas such as military strategy and decision-making, psychological impacts, escalation management, and the broader balance of power (Johnson Citation2022, 338; see also Johnson Citation2023; Payne Citation2021; Scharre Citation2023).

However, the literature lacks an in-depth understanding of major nuclear powers’ discourses about AI technologies. While examining technical features is important for understanding how military AI relates to international security, this article argues that decision-makers’ perceptions of these technologies and beliefs about their meaning should also be closely studied (Hymans Citation2006). We understand technological impact as a “deeply political phenomenon as technology becomes woven into the global social fabric and is adapted to context-specific needs” (Fritsch Citation2014, 117). We advocate for a constructivist approach to better understand the ongoing interaction between the development of AI technologies and strategic stability. Among the existing research on AI and strategic stability, few studies examine how official discourses frame AI and what these perceptions mean for strategic stability. This study addresses this gap by analyzing US and Russian policymakers’ statements and exploring the policy implications of these perceptions for arms control. It highlights that the beliefs and perceptions expressed by officials are as important to analyze as states’ technological capabilities.

As Tertrais points out, technology “has been rather neutral overall – neither favouring the attacker nor the defender, neither favouring stability or instability” (Tertrais Citation2022, 2). The impact of AI on strategic stability is not a linear development caused by AI itself or its characteristics. It is also formed by how governments decide to use technologies (Horowitz et al. Citation2018, 4) and how they interpret their opponents’ uses of these technologies within the global social context (Lupovici Citation2021). Like other technologies, AI “does not exist in vacuum” and is not a “game-changer” in itself: its impact on strategic stability “will likely be determined as much (or more so) by a state’s perception of its functionality as its actual capability” (Johnson Citation2019c, 44, italics as in original). It is important to understand “how adversaries perceive each other’s AI capabilities” (Wachs Citation2021, 17; see also Geist and Lohn Citation2018, 1). Nuclear deterrence is also formed through views of others’ nuclear capabilities and intentions, rather than the number of warheads per se. Nuclear stability “depends on not only technical-military factors but also state perceptions and beliefs” (Logan Citation2022, 175). These perceptions play a crucial role in guiding political decisions concerning AI developments, their integration into the nuclear sphere, shifts in military doctrines, as well as overall conventional and nuclear postures. The impact of military AI is therefore also “what states make of it” (Wendt Citation1992).

Methodology

This article analyzes how US and Russian policymakers perceive AI and its impact on strategic stability, defined more broadly as the absence of incentives among nuclear powers to engage in a military conflict (Cimbala and Scouras Citation2002). For this purpose, we have conducted an interpretive analysis of official discourses, conceptualized as “the space where human beings make sense of the material world, where they attach a meaning to the world and where representation of the world become manifest” (Holzscheiter Citation2014, 144; see also Epstein Citation2010; Hansen Citation2006). Discourse analysis allows us to engage with the ways in which “texts constitute the social world”, how actors construct and maintain a meaning of social reality via “specific artefacts of writing, speech or other representations from which we can infer broader elements of context, culture, politics and conflict” (Tatum Citation2018, 346). Following constructivist literature in international relations, which argues that “the objects of our knowledge are not independent of our interpretations and our language”, this analysis of US and Russian discourses assists in understanding how officials construct the social context through which strategic stability is formed (Adler Citation2013, 112–13).

We have collected documents, policies, and speeches delivered by the US and Russian leaderships. For the United States, the analysis explores the AI discourse from 2014, when the theme became more prominent in official strategies and statements, until February 2023. It considers the documents produced by the presidencies and their executive branches and the reports, policies and speeches developed by departments, committees, and policymakers.Footnote2 The respective section on Russia explores statements delivered by officials from 2017, when the topic of AI became more prominent in the official discourse, until February 2023.Footnote3 In both cases, the search for documents was conducted with the keyword “artificial intelligence”, with an emphasis on the following themes: the role of AI in the international system, military AI, as well as perceptions surrounding strategic stability.

The US Discourse

How Does the US Leadership Talk About AI Technologies?

The United States is the largest investor and leader in military AI innovation (Haner and Garcia Citation2019; Wyatt Citation2020). The US government is developing, testing, and fielding AI applications in intelligence, surveillance and reconnaissance, command and control, as well as weapon systems (Mori Citation2018; Kahn Citation2023). The importance of maintaining US leadership in military AI has been a major theme shaping its official discourse. Statements of all three presidencies examined highlight the need to pursue technological innovation to strengthen US power. Under the second Barack Obama presidency, officials began linking AI to global power dynamics, dedicating special attention to competitors’ advances in military technologies. In 2014, then Secretary of Defense Chuck Hagel warned about the potential impact of “disruptive technologies” on US military superiority. Hagel named Russia and China as potential competitors which have been “trying to close the technology gap”, emphasizing that these developments risk reducing the US military advantage and its “ability to project power” around the world (US Department of Defense Citation2014). In 2014 the US Department of Defense (DoD) also launched the Third Offset Strategy to sustain “American power projection over the coming decades” (Hagel Citation2014). In a 2016 speech on the Strategy, Deputy Secretary of Defense Robert Work expressed the need to increase “our margin of technological superiority that unquestionably has been eroding over the last 20 years”, and emphasized that advances in AI will be “the technological sauce of the Third Offset” (US Department of Defense Citation2016).

The US discourse has featured explicit references to perceived competing powers’, especially China’s and Russia’s, development of AI. In 2016, Work referred to both as competitors whose improved technological capabilities can lead to “crisis instability” and “undermine deterrence” (US Department of Defense Citation2016). Concerns about adversaries’ advances in military AI were also pivotal elements of the Donald Trump presidency’s documents. The 2017 National Security Strategy set out the goal of improving “understanding worldwide [science and technology] trends and how they are likely to influence – or undermine – American strategies and programs” (The White House Citation2017, 20). This included prioritizing emerging technologies such as AI to ensure the country’s “competitive advantage” and mitigate risks to national security deriving from perceived rivals’ uses of AI. Russia and China are named throughout the strategy as the main competitors challenging US leadership. Similarly, the 2020 National Strategy for Critical and Emerging Technologies highlights the necessity for the United States to lead in research and development (R&D) to advance its influence “in an era of great power competition” (President of the United States Citation2020, 1). The document calls for “meaningful action” to counter China’s efforts to “become the global leader” in science and technology and Russia’s “government-led … efforts on military and dual-use technologies, such as artificial intelligence” (1–2).

The Joe Biden presidency’s discourse has been reinforcing the importance of both developing AI and addressing rivals’ technological advances. Under the Biden administration, China has been framed as a more prominent competitor in AI than Russia. When advocating for Washington’s leading role in “the fierce strategic technology competition”, Secretary of State Antony Blinken focused on Beijing, stating, “we know China is determined to become the world’s technology leader. And they have a well-resourced and comprehensive plan to achieve those ambitions” (US Department of State Citation2021). Meanwhile, the 2021 Final Report of the National Security Commission on Artificial Intelligence (NSCAI) warns about US rivals “integrating AI concepts and platforms to challenge the United States’ decades-long technology advantage” (NSCAI Citation2021, 2).

The document highlights China as a major threat, stating that Beijing aims to “offset US conventional military superiority” by exploiting AI (23). At the same time, it calls for the United States to “embrace the AI competition” with China (3) and for the private sector to be engaged in it. In the Commission’s view, Chinese companies’ potential technological advantage would “create the digital foundation for a geopolitical challenge to the United States and its allies” (26). This call for private sector involvement in the global contest is a core theme of the US discourse. Private companies and their technological investments are perceived to play a crucial role in ensuring the country’s status of AI global power. The 2022 National Security Strategy further emphasizes the need for global partnerships with an “allied techno-industrial base” to exploit the advantages stemming from AI and “safeguard our shared security, prosperity and values” (The White House Citation2022, 33).

Has the Official Discourse Evolved?

In contrast to the Obama presidency, the Trump and Biden administrations have more explicitly expressed concern about the potential impact of opponents’ military AI on strategic stability. The DoD’s 2018 Nuclear Posture Review (NPR) considers “unanticipated technological breakthroughs in the application of existing technologies” a factor of uncertainty in the nuclear sphere (Office of the Secretary of Defense Citation2018, 14). Similarly, the NSCAI report calls for seeking clear commitments from China and Russia on their implementation of AI in NC3, to avoid ambiguity on the role of human control in the potential employment of nuclear weapons (Citation2021, 98). This discourse aligns with a conventional view of strategic stability, focused on developments in the nuclear sphere.

The Biden administration has reiterated US concerns on the implementation of AI in NC3, while emphasizing its commitment to the human element in nuclear and military decision-making. The revised 2022 NPR states:

In all cases, the United States will maintain a “human in the loop” for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.

(US Department of Defense Citation2022, 13)

The NPR connects this statement to the need for mitigating the risk of “unintended nuclear escalation” by maintaining “rigorous technical and procedural safeguards” (13). Similarly, the 2023 Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy mentions, “States should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment” (US Department of State Citation2023).

At the same time, under presidents Trump and Biden, the AI competition has become more than just a matter of military and nuclear power. AI has also been increasingly framed as a contest of values between authoritarian and democratic regimes. For instance, the DoD’s 2018 AI strategy, developed under the Trump presidency, states that China’s and Russia’s investments in military AI threaten to “destabilize the free and open international order” (US Department of Defense Citation2019, 5). The framing of the AI competition as a value contest reflects a widened conceptualization of strategic stability, in which authoritarian regimes’ uses of AI are perceived as threats to democratic countries, as well as to the global order (Hine and Floridi Citation2022). Statements depict autocracies’ AI developments as hostile actions against democratic regimes and threats to the US-led liberal international order. The DoD’s strategy argues that US “adversaries and competitors are aggressively working to define the future of these powerful technologies according to their interests, values, and societal models” and warns that their efforts can “challenge our values and traditions with respect to human rights and individual liberties” (17). Trump’s 2019 Executive Order 13859 further aligns with this narrative by stating,

Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.

(Executive Office of the President Citation2019, 3967)

The 2021 NSCAI report declares that “the AI competition is also a values competition”, adding,

China’s domestic use of AI is a chilling precedent for anyone around the world who cherishes individual liberty. Its employment of AI as a tool of repression and surveillance – at home and, increasingly, abroad – is a powerful counterpoint to how we believe AI should be used. (2)

The use of AI by authoritarian regimes, specifically China, is thus portrayed as a precursor of hostile actions aimed at subverting other countries’ systems of governance.

The Russian Discourse

How Does the Russian Leadership Talk About AI Technologies?

AI became a visible strategic priority for the Russian leadership in the mid-to-late 2010s. The Russian official discourse first and foremost presents AI development as an area of competition between great powers. As President Vladimir Putin argued in 2019,

If someone can secure a monopoly in the field of AI … they will become the ruler of the world. It is no coincidence that many developed countries of the world have already adopted their action plans for the development of such technologies. And we of course, must ensure technological sovereignty in the sphere of AI.

(BBC News Russian Citation2019)

Russia’s path towards AI leadership, however, has been hampered by challenges such as a lack of specialists and investments, as well as reliance on foreign hardware (Nadibaidze Citation2022a). While acknowledging that Russia is not a leading AI player, officials express certainty that the country is able to catch up with others (President of Russia Citation2020a). First Deputy Minister of Defense Ruslan Tsalikov said in 2021, “The Russian Federation possesses significant potential for becoming one of the international leaders in the development and use of AI technologies” (RIA Novosti Citation2021b).

In the Russian view, integrating AI into the armed forces is an opportunity to gain strategic advantage in a setting of “high-tech” warfare. AI technologies are depicted as key to overhaul both conventional and nuclear forces (Petrov Citation2021). Former Deputy Prime Minister Yury Borisov argued that the development of AI “is necessary for the creation of promising models of weapons, military and special equipment”, adding that it is “of great importance for the defense and security of the country” (Government of Russia Citation2021). State officials have been emphasizing the perceived advantages of AI for the precision and effectiveness of weapon systems, as well as command and control, data analysis, surveillance, and reconnaissance. Putin said,

Today, success in many areas directly depends on the accuracy and speed of decision-making. And in the military field, during combat operations, such speed can literally consist of minutes, or even seconds … This is why we need to develop systems to support decision-making of commanders at all levels, especially at the tactical level, [and] to introduce AI technologies into these systems.

(RIA Novosti Citation2021a)

Borisov also suggested that integration of AI into weapon systems is an inevitable trend, arguing, “The world is changing, armament is changing, the nature of conflicts is changing … We need high-precision weapons. High-speed, high-precision, high-effective, resistant to various influences, and which cannot be intercepted” (Arsenal Otechestva Citation2018). The head of the Ministry of Defense department on AI development, Vasily Yelistratov, said that “the war of the future is a war of machines”, declaring that Russian so-called “high-precision” weapon systems integrate AI (TASS Citation2022b; see also TASS Citation2023).

Russia also seeks to demonstrate and signal its self-attributed great power status in the international order. The Russian leadership and population closely associate the country’s display of military strength, especially nuclear weapons, with its historical great power status (Götz Citation2019, 819; Loukianova Fink and Oliker Citation2020). Defense Minister Sergey Shoigu said in 2021 that it was “necessary to ensure the introduction of AI technologies into weapon systems which determine the future image of the armed forces” (Petrov Citation2021). Official statements and state media have been promoting the Status-6 (Poseidon) nuclear-powered uncrewed underwater vehicle and the Burevestnik nuclear-powered cruise missile as symbols of its nuclear modernization and improved credibility (Zysk Citation2023).Footnote4

AI developments abroad are portrayed as security issues. During a meeting of the National Security Council in December 2020, Putin defined “neutralizing threats to [Russia’s] national security associated with the development of AI technologies for military purposes in the leading armed forces of the world” as a “very important, sensitive topic” (Izvestia Citation2020). Shoigu also stated,

We are carefully watching global trends not only [in] science and education development, we also watch technological development and what is implemented and being done in the armies of other countries. Of course, with the large-scale and rapid digitalization and programs related to AI, we are implementing corrections into our educational programs, we are preparing new specialists.

(Interfax Citation2021)

While China is seldom mentioned in a negative way, the United States and NATO, often called the “collective West”, are portrayed as sources of threat in the sphere of digital technologies. Moscow envisions AI as a “tool used to undermine Russia” and sees itself as a “target of the West” (Bendett Citation2023). Articles written by state officials in military-themed outlets portray the West as a malignant actor developing AI as a tool of foreign influence, while constraining Russia’s and China’s technological development (see Il’nitskyi and Losev Citation2019). This narrative has intensified following the sanctions imposed on Russia in 2022. Deputy Chairman of the National Security Council Dmitry Medvedev, for instance, claimed that the West is engaged in “hostile measures” such as cyberattacks, a technological blockade, and “luring [IT] specialists” from Russia (Interfax Citation2022).

Such discourses reflect the overall Russian perception of strategic stability, which highlights the importance of maintaining a general military, diplomatic, political, and technological balance between nuclear powers, especially Russia and the United States (Arbatov, Dvorkin, and Topychkanov Citation2017; Karaganov and Suslov Citation2019; Romashkina, Markov, and Stefanovich Citation2020). US military technological development is framed as a threat to both Russian security and strategic stability (Rogers, Korda, and Kristensen Citation2022, 361). Foreign Minister Sergey Lavrov named “American plans to take weapons into space” as a factor which affects “strategic stability no less than nuclear weapons” (RIA Novosti Citation2018).Footnote5 Russia’s 2020 Nuclear Doctrine singles out the deployment of high-precision non-nuclear and hypersonic weapons, as well as uncrewed combat aerial vehicles, as one of the main military threats both to Russia and to global nuclear deterrence (President of Russia Citation2020b, paras. 12–14).

Has the Official Discourse Evolved?

The themes of competition, modernization, great power status, and the securitization of US technological development have consistently featured in the Russian AI discourse. They have also become increasingly intertwined with the leadership’s belief that leadership in AI is a matter of sovereignty and survival. Russia faces economic difficulties, a lack of hardware needed for AI development, the cancellation of research cooperation with Western institutions, as well as a “brain drain” of IT specialists (Gorenburg et al. Citation2022). The leadership has pursued isolationist policies justified via the quest for technological sovereignty, especially since 2022 (Nadibaidze Citation2022b). Putin has placed AI at the core of this narrative, noting that “Russia’s place in the world, our sovereignty, security and viability depend on the results we achieve [in the sphere of AI]” (President of Russia Citation2022a). The Russian Armed Forces’ Concept in the Development and Use of Weapon Systems with AI Technologies, adopted in July 2022, also integrates technological sovereignty as a key principle. It states, “the necessary level of independence of the Russian Federation in the field of AI should be ensured, including through the predominant use of domestic AI technologies and AI-based solutions” (Russian Federation Citation2023, 2; see also Garant.ru Citation2019).

At the same time, Russia’s invasion of Ukraine has revealed a visible gap between its official discourse and its capabilities (Bode et al. Citation2023, 6). According to the available information, the Russian military’s applications of AI and autonomy on the battlefield in Ukraine remain minimal (Bode and Nadibaidze Citation2022). At the end of 2022, Putin called for a more active use of AI “at all levels of [military] decision-making”, explaining, “As experience shows, including that of recent months, the weapons systems that operate quickly and almost automatically are the most effective ones” (President of Russia Citation2022b). However, this is unlikely to be a realistic objective.

US and Russian Discourses: Implications for Arms Control

How Do Policymakers in the United States and Russia Talk About AI and Strategic Stability?

Officials in the United States and Russia associate AI with an ongoing global competition. Both sides prioritize the integration of AI into their armed forces. The United States emphasizes the importance of maintaining its technological leadership, directly linking AI to national security. AI advances in China, and to a lesser extent Russia, are perceived as threats to the US’ leading role and military advantage, as well as to the global promotion of liberal values. The Trump and Biden presidencies have more actively and explicitly framed the AI competition as both a strategic and a values contest. These changes reflect the US’ broader conceptualization of strategic stability, characterized not only as a politico-military issue centred on arms control and disarmament, but also as a geopolitical competition between value systems. It features distrust towards authoritarian regimes, with China in the lead, linking these states’ uses of technologies to the destabilization of global security.

In the Russian discourse, AI technologies are associated with the opportunity to modernize the armed forces and maintain Russia’s self-attributed great power status. The focus has been on catching up with the leaders, gaining strategic advantage, and framing the US’ development of AI and other technologies as a threat to Russian security. These trends have been reinforced since 2022, as the Russian leadership has been pursuing an isolationist course justified with the quest for technological sovereignty and openly stating that it considers itself at war with the West.Footnote6 Russian statements do not feature references to values, as Russia’s conception of strategic stability focuses on the broader politico-military equilibrium, especially between the United States and itself.

Following the constructivist perspective, security threats are “socially constructed” via discourses (Lipschutz Citation1995, 10). While materiality and technical characteristics matter, the interpretation of technologies and their impact is formed intersubjectively. This interpretation encourages similar responses, both in terms of discourses and policies (Lipschutz Citation1995). US and Russian discourses frame AI technologies as a threat, which in turn leads the other side, as well as other military AI developers such as China, to make similar moves and contribute to a “spiraling escalation” (Lupovici Citation2021, 263). Concerns surrounding AI are often not directly linked to technical characteristics, but to the ways that opponents are, would be, or could be using military AI. Officials in the United States and Russia display uncertainty about whether their rivals develop AI technologies for civilian or military purposes, and, if the latter, whether these are offensive or defensive capabilities.

For instance, the NSCAI report has securitized China’s and Russia’s ambiguity in integrating AI into NC3 (NSCAI Citation2021, 95). Meanwhile, President Putin called for “neutralizing threats” associated with AI developments abroad (Izvestia Citation2020). Their statements are also representative of the broader social context of distrust characterizing US-Russia arms control negotiations and diplomatic relations. Since the outbreak of Russia’s invasion of Ukraine, both states’ policymakers have reinforced these competitive dynamics, intertwining technological uncertainty with “intense political tensions” (Istomin Citation2023, 106). Such developments are not exclusive to the sphere of AI. As with other dual-use technologies, US and Russian official discourses on AI suggest racing dynamics, misperceptions of others’ capabilities, and a “cycle of securitization”, all of which are exacerbated by the absence of regulation (Carrozza, Marsh, and Reichberg Citation2022, 31).

What are the Policy Implications of These Discourses?

The absence of international arms control in military AI constitutes a lack of tools to manage the securitization process and the (mis)perceptions outlined in our discourse analysis. Global governance in the field of military AI remains challenging (Maas Citation2019). States participating in the global debate on autonomy in weapon systems at the United Nations (UN) Convention on Certain Conventional Weapons have different stances on military AI regulation (Canfil and Kania Citation2022). Previous arms control agreements have been associated with security interests such as preventing other actors from acquiring weapon technologies (Maurer Citation2018). However, the US and Russia are both skeptical towards international governance, displaying a preference for unregulated proliferation of military AI technologies. They are unlikely to change their positions anytime soon (Bode et al. Citation2023). The perceptions they hold of each other’s technological developments, combined with their broadened understandings of strategic stability, erode possibilities for arms control in this area, as they have for nuclear arms control (Bugos Citation2022).

Some lessons from nuclear arms control could be relevant for the prospects in the sphere of military AI. First, US officials consider strategic stability not as a merely military matter, but as a concept integrating perceived threats to democratic values from authoritarian regimes. Given this framing, authoritarian competitors’ AI development, whether civilian or military, is bound to be treated as a threat to national and global security (Johnson Citation2021, 352). Previous research suggests that democracies consider nuclear weapons a necessary means to maintain the international order and are more reluctant to engage in arms control with autocracies (Becker, Müller, and Wisotzki Citation2008, 847). A similar dynamic is noticeable in the US discourse on AI, specifically in relation to China. Second, as Allison and Herzog (Citation2020) have noted, the Trump administration’s over-emphasis on China and its insistence on trilateral talks as a condition for pursuing arms control undermined US-Russia strategic discussions. The US’ ongoing securitization of China’s AI development could likewise affect prospects for global governance in military AI (see Gibbons and Herzog Citation2022).

In the absence of international regulation and with low prospects for global governance in the near term, the militarization of AI continues to spread unchecked. This proliferation continues to fuel feelings of uncertainty and anxiety about others’ capabilities and intentions. While global governance is needed, one step forward would be increasing transparency concerning policies on military AI. All nuclear powers should be more open about their commitments to human control in the use of force, including in NC3. They should clarify their intentions by providing reassurances, not only to each other but also to international society, that a human will always make use-of-force decisions in an appropriate manner.

Addressing the uncertainties and competitive dynamics of military AI would also require confidence-building measures that “facilitate information-sharing and increase transparency between states developing AI” (Horowitz and Kahn Citation2021, 34). Such measures do not replace arms control or the legally binding regulations that are currently needed. However, by establishing some norms of appropriate behavior and ways of sharing information, they could increase predictability in the sphere of military AI and temper securitization discourses (Puscas Citation2022, 2). While such recommendations aim to increase transparency and predictability, we also recognize the difficulties of moving forward with such initiatives in the coming years. Russia’s invasion of Ukraine has underscored the difficulty of trusting officials in Moscow, obstructing potential diplomatic efforts in arms control and disarmament for years to come, including in the sphere of military AI (Bugos Citation2022).

Conclusion

This article highlighted the importance of examining how US and Russian policymakers talk about AI technologies. Technology is developed and used in a social context through which actors interpret its impact on strategic stability. The United States and Russia view AI both as an opportunity and as a threat to strategic stability through their perceptions of each other’s capabilities and intentions. Both states’ officials have been framing their perceived rivals’ AI capabilities as threats to strategic stability. Uncertainty and anxiety about AI developments have fuelled the perceptions that adversaries are using AI to accomplish their foreign policy goals (Istomin Citation2023; Johnson Citation2019b). For the United States, China and Russia are trying to revise the global order, while for Russia, the United States and the so-called “collective West” are preventing Russia from affirming its great power status. While our analysis focused on the United States and Russia as the possessors of the largest nuclear arsenals, future research could look at what other actors, including China as a key military AI developer, make of AI and strategic stability. Such studies would complement the extant literature on AI and deepen our understanding of how technological change intersects with the concept of strategic stability in the twenty-first century.

The risks of misunderstandings and escalation have increased not only due to the technical characteristics of AI, but also via the fuelling of discourses about the perceived threats of AI advances abroad. With the ongoing invasion of Ukraine, levels of trust between the United States and Russia, as well as efforts to conduct dialogue on arms control or disarmament in any sphere, are at a low point (Arbatov Citation2022). Russia’s suspension of its participation in the New START treaty and both sides’ withholding of some of their nuclear data risk exacerbating the uncertainty (Holland and Mohammed Citation2023). In Russia’s view, its participation in arms control negotiations has been part of its special role as the “unique partner of the United States in managing strategic stability” (Suslov Citation2020, 124–26). Its war of aggression against Ukraine and violation of its previous international agreements have made it challenging to engage in any negotiations with Moscow. The potential for misperception of each other’s technological capabilities and intentions is higher than ever. As UN Secretary-General Antonio Guterres warned, humanity could be “just one misunderstanding, one miscalculation away from nuclear annihilation” (UN News Citation2022).

Acknowledgments

The authors are grateful to Ondřej Rosendorf and two anonymous reviewers for their feedback on previous drafts of this article.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

Anna Nadibaidze’s research was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 852123).

Notes

1 China is also a key developer of military AI technologies (Boulanin et al. Citation2020; Haner and Garcia Citation2019). However, it has a relatively smaller nuclear stockpile, estimated at 410 warheads (Kristensen, Korda, and Reynolds Citation2023).

2 These documents were collected from the official websites of the US public authorities such as the White House, the Department of Defense, and numerous national agencies publishing policies relating to AI.

3 These documents were collected from the official websites of the President of Russia, the Russian Government, the Ministry of Defense, as well as via a keyword search on the websites of Russian information agencies: RIA Novosti, TASS, Interfax. Documents in Russian have been translated by the authors. All mistakes in translation are our own.

4 Both systems reportedly integrate AI elements and are at an R&D stage (Boulanin et al. Citation2020, 50).

5 Similarly, a survey of Russian experts conducted at the end of 2021 shows that most experts consider space weapons as the “main factors” affecting strategic stability (Savelyev and Alexandria Citation2022).

6 In September 2022, Shoigu said that Russia is not only waging war against Ukraine, but also fighting with NATO and the “collective West” (TASS Citation2022a).

References