
We’ll never have a model of an AI major-general: Artificial Intelligence, command decisions, and kitsch visions of war

Cameron Hunter & Bleddyn E. Bowen
Pages 116-146 | Received 03 Nov 2022, Accepted 24 Jul 2023, Published online: 07 Aug 2023

ABSTRACT

Military AI optimists predict that future AI will assist with, or even make, command decisions. We instead argue that, at a fundamental level, these predictions are dangerously wrong. The nature of war demands decisions based on abductive logic, whilst machine learning (or ‘narrow AI’) relies on inductive logic. The two forms of logic are not interchangeable, and therefore AI’s limited utility in command – both tactical and strategic – is not something that can be solved by more data or more computing power. Many defence and government leaders are therefore proceeding with a false view of the nature of AI and of war itself.

Acknowledgements

The authors would like to thank the reviewers for their helpful comments, aiding us in clarifying our argument. Our thanks also to the group of scholars who kindly attended our paper workshop at the University of Leicester and provided suggestions at an early stage, and to Dr Clare Stevens for thoughtful feedback on a later draft. Finally, thanks are due to Dr Brian Weeden for introducing us to Dr Erik Larson’s book and thereby unwittingly providing the initial spark to write this paper.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 Lawrence Freedman, Command: The Politics of Military Operations from Korea to Ukraine (Allen Lane, 2022), 2.

2 Freedman, Command, 1-9.

3 Carl von Clausewitz, On War, O.J. Matthijs Jolles, trans. in: Caleb Carr, ed. The Book of War (New York: Modern Library, 2000), 328-354.

4 Clausewitz, On War (Jolles), 786-787.

5 Yee-Kuan Heng, ‘Reflexive Rationality and the Implications for Decision-Making’ in Heidi Kurkinen (ed.) Strategic Decision-Making in Crisis and War, (Helsinki, National Defence University 2010), 21-22.

6 Yee-Kuan Heng, ‘Reflexive Rationality’, 22.

7 Ibid, 22-23.

8 See section 1. See also Hoffman, ‘Will War’s Nature’, 22, 27-28; Goldfarb and Lindsay, ‘Prediction and Judgment’, 39.

9 Andreas Herberg-Rothe, ‘Clausewitz’s Concept of Strategy – Balancing Purpose, Aims and Means’, Journal of Strategic Studies 37/6-7 (2014), 904.

10 Raymond Aron, Clausewitz: Philosopher of War, Christine Booker and Norman Stone (trans.), (Englewood Cliffs: Prentice-Hall, 1985), 328.

11 Brett A. Friedman, On Operations: Operational Art and Military Disciplines (Annapolis, MD: Naval Institute Press, 2021).

12 Ibid, 5.

13 Maaike Verbruggen, ‘AI & Military Procurement: What Computers Still Can’t Do’, War on the Rocks (blog), 5 May 2020, https://warontherocks.com/2020/05/ai-military-procurement-what-computers-still-cant-do/. See also Sam Tangredi and George Galdorisi, ‘Introduction’, in Sam Tangredi and George Galdorisi (eds.) AI at War: How Big Data, Artificial Intelligence and Machine Learning are Challenging Naval Warfare (Annapolis, MD: Naval Institute Press, 2021), 3.

14 Elke Schwarz, ‘Autonomous Weapons Systems, Artificial Intelligence, and the Problem of Meaningful Human Control’, The Philosophical Journal of Conflict and Violence 5/1, 53-72; John Emery, ‘Algorithms, AI, and Ethics of War’, Peace Review 33/2 (2021), 205-212; Neil Renic, ‘A Gardener’s Vision: UAVs and the Dehumanisation of Violence’, Survival 60/6, 57-72; Heather Roff, ‘The Strategic Robot Problem: Lethal Autonomous Weapons in War’, Journal of Military Ethics 13/3 (2014), 211-227; Lucy Suchman, ‘Algorithmic warfare and the reinvention of accuracy’, Critical Studies on Security 8/2 (2020), 182.

15 Erik Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (London: Harvard UP 2021), 60-61.

16 DoD, ‘Summary of the 2018 Department of Defense Artificial Intelligence Strategy’, (Washington DC: GPO 2019), 11, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.

17 See for example: Kathleen McKendrick, ‘The Application of Artificial Intelligence in Operations Planning’, NATO STO (2017); Michael Rüegsegger et al., ‘Deep Self-optimizing Artificial Intelligence for Tactical Analysis, Training and Optimization’, NATO STO (2021); Alex Wilner, ‘Artificial Intelligence and Deterrence: Science, Theory and Practice’, NATO STO (2019), 14-11.

18 MoD, ‘Defence Artificial Intelligence Strategy’, (London: HMG 2022), 1. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1082416/Defence_Artificial_Intelligence_Strategy.pdf; see also Helen Warrell, ‘UK military planners deploy AI to gain edge over adversaries’, Financial Times, 12 March 2021. https://www.ft.com/content/94d59a36-099a-4add-80d3-475127b231c7.

19 Avi Goldfarb and Jon R. Lindsay, ‘Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War’, International Security 46/3 (2022), 9-10.

20 See Yuna Wong et al., ‘Deterrence in the Age of Thinking Machines’ (Santa Monica: RAND 2020) for a discussion of AI’s escalatory tendencies. See footnote 14 for literature providing ethical critiques.

21 MoD, ‘Defence AI Strategy’, 15.

22 Kenneth Payne, I, Warbot: The Dawn of Artificially Intelligent Conflict (London: Hurst 2021), 83.

23 Ibid, 186-188.

24 Ibid, 192.

25 For example, Payne argues some aspects of war are ‘less bounded’ but then later argues that war is unbounded; see Payne, I, Warbot, 2, 76, 174.

26 Goldfarb and Lindsay, ‘Prediction and Judgment’, 50.

27 Ibid, 48.

28 Ibid.

29 N. Katherine Hayles, Unthought: The Power of the Cognitive Nonconscious, (Chicago: Chicago UP 2017), 10-11, 24.

30 Eda Kavlakoglu, ‘AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?’, IBM Cloud Blog, 27 May 2020. https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks. ‘Narrow’ AI can be contrasted with ‘general’ or ‘strong’ AI. A ‘general’ AI only exists in science fiction, and there is no known program of research that could lead to it, according to a former insider. See Larson, Myth of AI.

31 Wong et al., ‘Deterrence in the Age of Thinking Machines’, 20.

32 Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton 2018), 163-164.

33 Julian S. Corbett, Some Principles of Maritime Strategy (Mineola: Dover 2004), 165-166.

34 See Matthew Price, Stephen Walker, and Will Wiley, ‘The Machine Beneath: Implications of Artificial Intelligence in Strategic Decision Making’, PRISM 7/4 (2018), 92-105; Payne, I, Warbot, 28.

35 DARPA, ‘Generating Actionable Understanding of Real-World Phenomena with AI’, 4 Jan. 2019. https://www.darpa.mil/news-events/2019-01-04.

36 Ibid.

37 James Johnson, ‘Delegating strategic decision-making to machines: Dr. Strangelove Redux?’, Journal of Strategic Studies 45/3 (2022), 9; Zhimin Zhang et al., ‘Artificial intelligence in cyber security: research advances, challenges, and opportunities’, Artificial Intelligence Review 55 (2022), 1045.

38 Department of Defence, ‘Summary of the Joint All-Domain Command & Control (JADC2) Strategy’, March 2022, 3. https://media.defense.gov/2022/Mar/17/2002958406/-1/-1/1/SUMMARY-OF-THE-JOINT-ALL-DOMAIN-COMMAND-AND-CONTROL-STRATEGY.PDF. See also Lucy Suchman, ‘Imaginaries of omniscience: Automating intelligence in the US Department of Defense’, Social Studies of Science (2022), 1-26.

39 Payne, I, Warbot, 2, 76.

40 Patrick Tucker, ‘An AI Just Beat a Human F-16 Pilot In a Dogfight – Again’, Defense One, 20 August 2020, https://www.defenseone.com/technology/2020/08/ai-just-beat-human-f-16-pilot-dogfight-again/167872/; James Johnson, ‘Automating the OODA loop in the age of intelligent machines: reaffirming the role of humans in command-and-control decision-making in the digital age’, Defence Studies (2022), 10; Amir Husain, ‘AI is Shaping the Future of War’, PRISM 9/3, 54; Payne, I, Warbot, 92; Jazmin Furtado and Chris Dylewski, ‘AlphaDogfight should scare the Air Force straight … into scaling AI efforts’, C4ISRNet, 21 January 2021, https://www.c4isrnet.com/thought-leadership/2021/01/21/alphadogfight-should-scare-the-air-force-straight-into-scaling-ai-efforts/.

41 Richard Spencer, ‘Killer drones used AI to hunt down enemy fighters in Libya’s civil war’, The Times, 3 June 2021, https://www.thetimes.co.uk/article/killer-drones-used-ai-to-hunt-down-enemy-fighters-in-libyas-civil-war-2whlckdbm.

42 Goldfarb and Lindsay usefully doubt the validity of AlphaDogfight as evidence because the winning AI, having become overly accustomed to complexity, was fooled by a simpler threat. Goldfarb and Lindsay, ‘Prediction and Judgment’, 35.

43 Palantir, ‘AIP for Defense’, Palantir.com, 2023, https://www.palantir.com/platforms/aip/.

44 Clausewitz, On War, 328-329.

45 Examples of direct favourable references include Payne, I, Warbot, 96-98; James Johnson, AI and the Future of Warfare, 30, 115; see Suchman, ‘Imaginaries of Omniscience’ for a comprehensive analysis.

46 Michael I. Handel, Masters of War: Classic Strategic Thought, 3rd ed. (London: Frank Cass 2001), 353-360.

47 Ibid, 355.

48 See Wong et al., ‘Deterrence’, 6, for their link between this ‘benefit’ and Boyd.

49 James Johnson, ‘Artificial intelligence & future warfare: implications for international security’, Defense & Security Analysis 35/2 (2019), 148.

50 Ibid, 150.

51 Kareem Ayoub and Kenneth Payne, ‘Strategy in the Age of Artificial Intelligence’, Journal of Strategic Studies 39/5-6 (2016), 799.

52 Goldfarb and Lindsay, ‘Prediction and Judgment’, 20, 35, 42.

53 Francis G. Hoffman, ‘Will War’s Nature Change in the Seventh Military Revolution?’, Parameters, 47/4 (2017), 22, 27.

54 Keith Dear, ‘AI and Decision-Making’, RUSI Journal 164/5-6 (2019), 25.

55 Ibid, 20; see also 22-23.

56 Johnson, ‘Delegating strategic decision-making to machines’, 8.

57 John Arquilla, Bitskrieg: The New Challenge of Cyberwarfare (Polity, 2021), 78.

58 McKendrick, ‘The Application of AI in Operations Planning’, 2.1-6.

59 See also Payne, I, Warbot, 4, 68, 170-171.

60 Dear, ‘AI and Decision-Making’, 23.

61 Ayoub and Payne, ‘Strategy in the Age of AI’, 794.

62 Hoffman, ‘Will War’s Nature’, 22.

63 Payne, I, Warbot, 68, 152; Arquilla, Bitskrieg, 83-84. Aycock and Glenney, conversely, correctly argue that ‘AlphaGo ain’t warfare, and it ain’t strategy’ but seem to accept some ‘tactical’ nous of AI. As our subsequent section shows, this argument is not pessimistic enough because it does not assess the logical basis of AI decision-making. See Adam Aycock and William Glenney, ‘Trying to Put Mahan in a Box’, in Sam Tangredi and George Galdorisi (eds.) AI at War: How Big Data, Artificial Intelligence and Machine Learning are Challenging Naval Warfare (Annapolis, MD: Naval Institute Press, 2021), 265-285.

64 Feng-hsiung Hsu, ‘IBM’s Deep Blue Chess Grand Master Chips’, IEEE Micro 19/2 (1999), 70-71.

65 Ibid, 71-72.

66 Ibid, 76.

67 Fei-Yue Wang et al., ‘Where does AlphaGo go: from Church-Turing thesis to AlphaGo thesis and beyond’, IEEE/CAA Journal of Automatica Sinica 3/2 (2016), 115.

68 Ibid, 116; Larson, Myth of AI, 125.

69 Larson, Myth of AI, 162; Diego Perez et al., ‘Multiobjective Monte Carlo Tree Search for Real-Time Games’, IEEE Transactions on Computational Intelligence and AI in Games 7/4 (2015), 348.

70 Richard Waters, ‘Man beats machine at Go in human victory over AI’, Ars Technica, 19 February 2023, https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/.

71 Ibid.

72 Dear, ‘AI and Decision-Making’, 23-34.

73 This is a product of AI optimists’ selectivity – many manual wargames avoid battle-centrism and take pains to simulate friction.

74 Clausewitz, On War (Jolles), 266.

75 Ibid, 277.

76 Aron, Clausewitz, 57-59.

77 Miyamoto Musashi, The Book of Five Rings, Thomas Cleary trans., ed. (London: Shambhala, 2003), 77.

78 Clausewitz, On War, 321.

79 See also Michael Howard, ‘The Use and Abuse of Military History’, Parameters, 11/1 (1981), 13.

80 Alexis Madrigal, ‘How Checkers Was Solved’, The Atlantic, 19 July 2017, https://www.theatlantic.com/technology/archive/2017/07/marion-tinsley-checkers/534111/.

81 Feng-hsiung Hsu, ‘Cracking Go’, IEEE Spectrum, October 2007, 51-55.

82 Payne (see I, Warbot, 40-42) mentions Gödel’s and Turing’s proofs of mathematical incompleteness, but does not explain how AI will overcome them, nor does he engage with the biggest hurdle we identify – undecidability.

83 Alex Churchill et al., ‘Magic: The Gathering is Turing Complete’, arXiv.org, https://arxiv.org/abs/1904.09828 (2019), 2.

84 Ibid.

85 Churchill quoted in Jennifer Ouellette, ‘It’s possible to build a Turing machine within Magic: The Gathering’, Ars Technica, 23 June 2019. https://arstechnica.com/science/2019/06/its-possible-to-build-a-turing-machine-within-magic-the-gathering/.

86 See Hannah Jane Parkinson, ‘Paul the octopus, Taiyo the otter and the World Cup’s other psychic animals’, The Guardian, 12 December 2022, https://www.theguardian.com/sport/2022/dec/12/paul-the-octopus-taiyo-the-otter-world-cup-psychic-animals.

87 Herberg-Rothe, ‘Clausewitz’s Concept of Strategy’, 905.

88 Clausewitz, On War, 270.

89 Ibid, 280.

90 Ibid, and see also 355-357.

91 Ibid, 289-290.

92 With the exception of Deep Blue.

93 Larson, Myth of AI, 1, 41.

94 Dear, ‘AI and Decision-Making’, 18.

95 Job de Grefte, ‘Epistemic benefits of the material theory of induction’, Studies in History and Philosophy of Science 84 (2020), 101.

96 Larson, Myth of AI, 115.

97 Leah Henderson, ‘The Problem of Induction’, The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), (2020) https://plato.stanford.edu/archives/spr2020/entries/induction-problem/.

98 As quoted in Henderson, ‘Problem of Induction’, n.p.

99 Larson, Myth of AI, 124; Jochen Runde, ‘Dissecting the Black Swan’, Critical Review 21/4 (2009), 491-505.

100 See for example Davor Lauc, ‘Machine Learning and the Philosophical Problems of Induction’ in Sandro Skansi (ed.), Guide to Deep Learning Basics (Springer 2020), 93–106.

101 See Mary Cummings, ‘Artificial Intelligence and the Future of Warfare’, Chatham House (2017), 7. https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf.

102 Andrew Ilachinski, ‘AI, Robots, and Swarms: Issues, Questions, and Recommended Studies’, CNA (2017), 65. https://www.cna.org/archive/CNA_Files/pdf/drm-2017-u-014796-final.pdf.

103 Watson was also programmed with human-defined rules to guide its inductive inference-making. See Larson, Myth of AI, 222-224.

104 Guglielmo Tamburrini, ‘Artificial Intelligence and Popper’s Solution to the Problem of Induction’, in Ian Jarvie et al. (eds.), Karl Popper: A Centenary Assessment Volume II: Metaphysics and Epistemology (Aldershot: Ashgate 2006), 265-284.

105 Tamburrini, ‘AI and Popper’, 267.

106 Aron, Clausewitz, 113.

107 David Chandler, The Campaigns of Napoleon (London: Weidenfeld and Nicolson, 1993), xl.

108 Henry Lloyd, The History of the Late War in Germany, Vol. I (1766) in: Patrick J. Speelman, ed. War, Society and Enlightenment: The Works of General Lloyd (Leiden: Brill, 2005) 114.

109 Wong et al., ‘Deterrence’, 19-20; Larson, Myth of AI, 54.

110 Musashi, Book of Five Rings, 14, 22.

111 Ibid, 24.

112 Goldfarb and Lindsay, ‘Prediction and Judgment’, 34, 44.

113 Hoffman, ‘Will War’s Nature’, 28.

114 Clausewitz, On War, 277-278.

115 See Jon T. Sumida, Inventing Grand Strategy and Teaching Command: The Classic Works of Alfred Thayer Mahan Reconsidered (Washington, D.C.: Woodrow Wilson Center Press, 1997), 106.

116 Clausewitz, On War, 278-279.

117 Igor Douven, ‘Abduction’, in Edward Zalta (ed.), The Stanford Encyclopedia of Philosophy, (Summer 2017). https://plato.stanford.edu/archives/sum2017/entries/abduction/.

118 As quoted in The Croker Papers, Louis Jennings (ed.), Vol. II (New York: Scribner 1884), 463.

119 Clausewitz, On War, 308.

120 See for example Payne, I, Warbot, 25-26, 54-55; Kenneth Payne, Strategy, Evolution and War (Washington DC: Georgetown UP 2018), passim.

121 DARPA, ‘73 EASTING: Lessons from Desert Storm via Advanced Distributed Simulation Technology’ (Alexandria: IDA 1992). https://apps.dtic.mil/sti/pdfs/ADA253991.pdf.

122 Bruce Sterling, ‘War is Virtual Hell’, Wired, 1 Jan. 1993. https://www.wired.com/1993/01/virthell/.

123 Sharon Weinberger, Imagineers of War (New York: Knopf 2017), 288-290.

124 DARPA, ‘73 EASTING’, I-9.

125 Payne, I, Warbot, 16.

126 Sina Alemohammad et al., ‘Self-Consuming Generative Models Go MAD’, arXiv.org (2023), https://arxiv.org/pdf/2307.01850.pdf.

127 Payne, I, Warbot, 1, 74.

128 Clausewitz, On War (Jolles), 338, 340.

Additional information

Funding

The work was supported by the European Research Council [866155].

Notes on contributors

Cameron Hunter

Cameron Hunter is a Research Associate in Nuclear Politics at the School of History, Politics, and International Relations, University of Leicester.

Bleddyn E. Bowen

Bleddyn E. Bowen FHEA is Associate Professor of International Relations at the School of History, Politics, and International Relations, University of Leicester, specialising in strategic theory, space warfare, and astropolitics. He has published widely on space policy, doctrine, and strategy, including two monographs: Original Sin: Power, Technology and War in Space (Hurst/Oxford University Press, 2022) and War in Space: Strategy, Spacepower, Geopolitics (Edinburgh University Press, 2020). He has advised on space policy and strategy to military and government institutions across the transatlantic community and beyond. He founded and co-convenes the British International Studies Association’s Astropolitics Working Group, and is an Associate Fellow of the Royal United Services Institute.