
Speculative machines and us: more-than-human intuition and the algorithmic condition


ABSTRACT

In the wake of Turing’s ‘universal machine’, this article foregrounds intuition as a generative concept and lens to unfold the affective genealogies of human-machine relations in post-war transatlantic cultures. As a mode of sensing, knowing, anticipating, and navigating the world that exceeds rational analysis, intuition is, I will argue, vital to attuning to our contemporary ‘algorithmic condition’, in which machine learning technologies are actively re-distributing cognition across humans and machines, transforming the nature of (in)human experience, and rearticulating questions of cultural value and desire. The article focuses on three key historical moments which enable us to retrospectively glimpse an emerging condensation of interest and urgency concerning our changing relationships with ‘new’ technologies in Britain and North America – 1) 1950s: The birth of AI and cybernetics; 2) 1980s: The rise of the personal computer and software cultures; and 3) 2010s: Inhabiting algorithmic life. In each period, particular aspects of intuition surface as significant in animating our affective and cultural entanglements with computational technologies. While intuition has gained affective traction at particular historical junctures as both what essentially defines ‘the human’ and what has become essentially inhuman, I argue that addressing the sensorial, socio-political, cultural, and ethical issues current machine learning architectures open up requires attuning to immanent human-algorithmic entanglements and the techno-social ecologies they inhabit and recursively reshape.

In the early summer of 1935, the 24-year-old British mathematician Alan Turing set off for a long run from his academic lodgings in Cambridge to the town of Grantchester. On his route along the river Cam, Turing would have negotiated hard-packed dirt, loose gravel, and long grass, with the sun on his back and a medley of insects, birds, and flowing water as his auditory companions. As his distance increased, Turing would likely have focused his attention inwards, measuring his steps by the pounding of his heart and the rhythmic inhale and exhale of his breath. And he might, then, have entered the quasi-meditative state that long-distance running can produce – a state in which a different kind of thinking and feeling becomes possible, problems get turned around in new ways, and insights seem to rise out of thin air. Indeed, as Turing later noted,[1] it was lying exhausted post-run in a field in Grantchester that he had the startling intuition that led to his conceptualization of the ‘universal machine’ (Turing 1936) – and the enormous implications it would have for his Enigma code-breaking during the Second World War, the invention of the first general-purpose digital computer in 1945, and the post-war consolidation of cybernetics, AI, and computer science.

What had come to Turing in the field that day was the route to answering Hilbert’s unsolved decision problem – the Entscheidungsproblem – concerning whether mathematics was ‘decidable’ which, in this context, referred to ‘the quality of being fixed in advance, in such a way that nothing new could arise’ (Hodges 1983, p. 123). In order to approach the problem of decidability, Turing abstracted the quality of being determined and applied it to the manipulation of symbols. Could we, he asked, imagine an automatic machine which would employ a mechanical process involving symbols to read a ‘mathematical assertion presented to it, and eventually writ[e] a verdict as to whether it was provable or not’? By devising a novel formulation of the old concept of ‘algorithm’, Turing’s computational thought experiment showed that ‘there was no “miraculous machine” that could solve all mathematical problems’, and therefore the answer to Hilbert’s question was ‘no’ (Hodges 1983, p. 124). Yet the dramatic potential remained, he illustrated, for a universal machine that, through its algorithmic programming, was capable of simulating the actions of any other machine.
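
To make the sense in which such a machine is ‘universal’ concrete, the following sketch (in Python, and entirely my own illustrative construction rather than Turing’s notation) implements one generic simulator that will run any machine whose behaviour is handed to it as a table of rules.

```python
# A toy, table-driven machine: one generic simulator that can run any
# 'program' supplied as a transition table. All names and the sample
# table below are invented for illustration; this is not Turing's 1936
# notation, only a loose gesture at the idea of universality.

def run(table, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a machine whose behaviour is given entirely by `table`.

    `table` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right); state 'halt' stops the machine.
    """
    cells = dict(enumerate(tape))  # sparse tape, extendable both ways
    head = 0
    for _ in range(max_steps):     # guard against non-halting tables
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Sample 'program': increment a binary number written on the tape.
increment = {
    ("start", "0"): ("0", +1, "start"),  # scan right past the digits
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),  # step back onto the last digit
    ("carry", "1"): ("0", -1, "carry"),  # 1 plus carry: write 0, carry on
    ("carry", "0"): ("1", -1, "done"),   # absorb the carry
    ("carry", "_"): ("1", +1, "halt"),   # overflow into a new digit
    ("done", "0"): ("0", -1, "done"),    # walk home, changing nothing
    ("done", "1"): ("1", -1, "done"),
    ("done", "_"): ("_", +1, "halt"),
}

print(run(increment, "1011"))  # prints '1100' (binary 11 + 1 = 12)
```

Feeding the same simulator a different table yields a different machine; nothing in the simulator itself changes, which loosely captures the universality at stake in Turing’s thought experiment.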

When Turing’s response to Hilbert’s problem was published in his 1936 paper ‘On Computable Numbers’, it was received by the pure mathematics world as the unexpected genius of an unsophisticated outsider. As his biographer Andrew Hodges notes, Turing had ‘attacked the problem in a peculiarly naïve way, undaunted by the immensity and complexity of mathematics’ (1983, p. 124). Yet while Turing’s expansive style of thinking might be deemed naïve or unsophisticated, it could also be described as intuitive and speculative – as manifesting ‘a rapid, fluid, involved’ mode of action-perception (Dreyfus and Dreyfus 1985, p. 28), bound not by the probable or the expected but oriented towards possibility and discovery. Today, of course, the logics of the Turing machine have been actualized in computing technology, the internet, and a wide range of algorithmic architectures which, for better or worse, constitute the emergent digital circuits of everyday mediated life. At the time, however, the universal machine was not a tangible electronic device, but rather a speculative machine; a mathematical description of a possible future automated technology.

In the wake of Turing’s mathematical vision, the central claim this article makes is that intuition is a generative concept and lens to unfold the affective genealogies of human-machine relations in post-war transatlantic cultures. The opening vignette illuminates key elements of intuition that interest me: its experience in human life as the product of unfolding mind-body-environment interactions, its entanglement of conscious and less-than-conscious modes of thought, and its affective attunement to potentiality and change in-the-making. Also at stake in Turing’s speculative thought experiment, however, are intuition’s origins and logics as an inhuman technology of anticipation, pre-emption, and prehension – and the significant ramifications of this for current computationally-oriented modes of existence. Whether in the form of personal recommenders like Amazon and Netflix which mobilize self-taught software to anticipate our preferences, needs, and desires; context-aware sensors embedded in ‘smart homes’ or wearable computational devices that attune to our unfolding movements, routines, and habits; or autonomous vehicles powered by AI computer vision that continuously scan their environment to predict potential changes, algorithmically-mediated forms of intuition now play a central role in daily life – and may, as such, be significantly transforming ‘the human’.

As a mode of sensing, knowing, anticipating, and navigating the world that exceeds rational analysis, intuition is, I will suggest, vital to attuning to our contemporary ‘algorithmic condition’ (Coleman et al. 2018) – in which machine learning technologies are actively re-distributing cognition across humans and machines and profoundly changing ‘what it means to perceive and mediate things in the world’ (Amoore 2020, p. 16; Mackenzie 2017). Intuition is particularly relevant to the pre-emptive dynamics of ‘algorithmic culture’ (Galloway 2006, Striphas 2015) through which the meaning of culture itself is being reinterpreted, as ‘questions of cultural authority’ are increasingly delegated to engineers, techniques, and algorithms in ways that translate ‘quality and hierarchy’ into datalogical ‘matters of fit’ (Hallinan and Striphas 2016, p. 122). With algorithmic architectures now acting intuitively to anticipate and shape desires, behaviour, assessments of value, and conditions of possibility across social, political, economic, and cultural domains in ways that far exceed human sensorial, cognitive, and perceptual capacities (Pedwell 2019, 2021b), renewed questions and anxieties emerge concerning human nature, agency, and sociality, as well as the ethics and politics of our relationships with speculative machines. Yet, as this article aims to illustrate, these questions and anxieties have a history stretching back to Turing’s universal machine – a history that attending to the logics of intuition can help us to sense, elicit, and dwell generatively within.

In a context in which existential anxieties about being replaced by the intelligent machines we create have long permeated our affective encounters with computational technologies (Finn 2015), how, I ask, can a focus on intuition foreground ‘an originary technicity’ which characterizes our ongoing relationship with the ‘liveness of the nonhuman’ (Clough 2018: xxxi)? Moreover, in dialogue with critical computational literatures that debate the contemporary utility of cybernetic visions and analogies in the midst of emergent machine learning architectures that operate otherwise to anthropocentric processes and experiences, how might a more-than-human understanding of intuition as distributed, collaborative, and recursive orient us more compellingly towards the deep and immanent forms of socio-technical and bio-machinic entanglement central to contemporary algorithmic life?

The article begins by surveying how intuition has been understood and mobilized across the overlapping realms of philosophy, psychology, affect theory, mathematics, computer science, and computational media. It then focuses on three key historical moments which, I suggest, enable us to retrospectively glimpse an emerging condensation of interest and urgency concerning our changing relationships with ‘new’ technologies in Britain and North America – 1) 1950s: The birth of AI and cybernetics; 2) 1980s: The rise of the personal computer and software cultures; and 3) 2010s: Inhabiting algorithmic life. I explore how, in each period, particular aspects of intuition surface as significant in animating our affective entanglements with computational technologies, and consider the particular versions of ‘the human’ they assume, contest, or (re)imagine. My objective is to unfold a partial and non-linear post-war genealogy of human-machine relations with critical implications for our present and future engagements with automated technologies and speculative machines. While intuition has gained affective traction at particular historical junctures as both what essentially defines ‘the human’ and what has become essentially inhuman, I argue that addressing the sensorial, socio-political, and ethical issues current machine learning architectures open up requires attuning to immanent human-algorithmic composites and the socio-technical ecologies they inhabit and recursively reshape.

Intuition, I seek to show, is vital to appreciating the nature and force of these algorithmic entanglements because of the ways in which it emerges from (and shapes emergence itself through) unfolding intertwinements of the human and the machinic, the cultural and the technological, the cognitive and the affective, and the conscious and the nonconscious. Approaching post-war computational ‘structures of feeling’ (Williams 1977) through the lens of intuition is generative, I suggest, precisely for how it troubles lingering investments in the bounded, autonomous, and intentional human as well as articulations of human-machine interaction that fail to address the fully imbricated and yet differentiated and asymmetrical qualities of such relations.

Intimating intuition

Colloquially, we might associate intuition with direct sensing or fast thinking that bypasses rational deliberation. We might also describe intuition as an affective premonition, an embodied hunch, or a gut feeling based on experience. Intuition, however, has a long, and somewhat convoluted, intellectual history. Within Western philosophical traditions, it dates back at least as far as Plato, who understood intuition as intellectual perception, distinct from sensory perception, which corresponds to ‘the eternal’. For Plato, and later philosophers such as Descartes, intuition is about unlocking pre-existing knowledge – whether this pertains to mathematics, morality, or metaphysics – which is eternally valid (Chudnoff 2013, p. 2). The Platonic tradition has been contrasted with a line of theorizing associated with Kant, who approached intuition as essentially sensory in nature and therefore ‘limited by our sensory capacities’ (2013, p. 11). Yet in figuring intuition as either primarily intellectual or primarily sensory, these opposing philosophies do little to conceptualize the immanent interplay of cognitive and affective processes which, I suggest, guides everyday modes of knowing, navigation, and speculation.

The publication of the French philosopher Henri Bergson’s An Introduction to Metaphysics in 1903, however, offers something quite different. For Bergson, intuition is a way of knowing that exceeds the intellect and which aims at ‘concrete knowledge or knowledge of the concrete’ (Lundy 2018, p. 24). Unlike analysis, which reduces objects to ‘elements already known’, intuition is, in Bergson’s view, a form of immersive engagement with the world which connects us with ‘what is unique’ and ‘consequently inexpressible’ in an object (1903, p. 7). It is experience prior to, or in excess of, its translation into the parsing categories of representational and analytical thought. While Bergson aligns intuition with the capacity for sensing, he also departs from Kant, who, he contends, pours ‘the whole of possible experience into pre-existing moulds’ (1903, p. 85). Given that both we and the objects we encounter are never static but rather always moving and becoming, intuition is, for Bergson, primarily about the experience of duration, process, and change. Moreover, in both Bergson’s writing and Gilles Deleuze’s later account of ‘Bergsonism’, intuition brings together ‘experience and experiment’ (Seigworth 2006, p. 118) to produce speculative knowledge about new and specific problems as they unfold in time.

Bergson’s interest in temporality and mobility, as well as the non-representational thrust of his approach, resonates with the contemporary ‘turn to affect’. As Gregory J. Seigworth (2006) notes, although the Welsh cultural theorist Raymond Williams did not draw on Bergson explicitly, his account of ‘structures of feeling’ aligns with Bergsonian intuition. Most significantly, Bergson and Williams are each interested in how we encounter ‘pre-emergent’ social and material forces and relations; in how we become affectively attuned to that which hovers ‘at the very edge of semantic availability’ (Williams 1977, p. 134). Both thinkers, then, explore how we might sense change as it is happening – an imperative brought to life by the more recent affect scholarship of Kathleen Stewart, Erin Manning, and Lauren Berlant, in their varying modes of intuitively inhabiting the unfolding sensations of everyday life. In Cruel Optimism, for instance, the late cultural theorist Berlant describes intuition as a ‘process of dynamic sensual data gathering’ through which ‘we make reliable sense of life’, especially when our everyday habits and modes of navigating the world become disrupted. Intuition, from this perspective, is constituted recursively through lived experience, and thus ‘visceral response is a trained thing’ (2011, p. 52).

In this particular way, Berlant’s vision intersects with cognitive psychologies and philosophies which understand intuition as a trained mode of action-perception. Think, for instance, of how, as the psychologist David G. Myers puts it, ‘thanks to a repository of experience, a tennis player automatically – and intelligently – knows just where to run to intercept the ball, with just the right racquet angle … a near-perfect intuitive physics’ (2002, p. 29). Or how, as a classic study by the computer science pioneers Herbert Simon and William Chase (1973) demonstrated, expert chess players can reproduce the chess board layout after a mere five-second glance. Yet, if mainstream cognitive psychologists, philosophers, and behavioural economists assume a bounded individual and pay scant attention to the politics of intuition, Berlant is more interested in collective practices of anticipation in which ‘affect meets history, in all its chaos, normative ideology, and embodied practices of discipline and invention’ (2011, p. 52). For Berlant, as for Bergson, intuition is aligned with shared capacities for inhabitation, speculation, and transformation; it is vital not only to the everyday (and exceptional) ways that we navigate the world but also, fundamentally, to how we might together change it.

Across these critical theories and philosophies, intuition involves the ongoing interplay of conscious and nonconscious modes of thought. Throughout his writing on intuition, habit, temporality, and memory, Bergson (1889, 1896, 1903) figures human activity as informed by fluctuating modes of (non)consciousness and (in)attention and suggests that it is, in part, the less-than-conscious aspects of behaviour that enable ingenuity, creativity, and discovery. Bergson’s framework overlaps, in this respect, with mathematical accounts of intuition. For Turing (1936, 1950), intuition is a mathematical faculty that ‘consists of making spontaneous judgements that are not the result of conscious trains of reasoning’; a process he links to the ‘ingenuity’ of building rules as arrangements of propositions (paraphrased in Amoore 2020, p. 57). Or, as the nineteenth-century French mathematician and philosopher of science Henri Poincaré puts it: ‘It is by logic that we prove. It is by intuition that we discover’ (cited in Myers 2002, p. 63). One key implication of these accounts is an imperative to relinquish our persistent attachment to human-centric notions of will, agency, and intentionality – to move away more decisively from ‘the notion that it is the human agent, the intentional, volitional subject who determines what comes to be’ (Manning 2016, p. 3).

In thinking intuition beyond ‘the human’, these engagements resonate with digital media scholarship which observes the range of emergent computational processes and systems that now entangle human and non-human modes of sensing, perception, and cognition. In Unthought: The Power of the Cognitive Nonconscious, N. Katherine Hayles, for instance, conceptualizes ‘a planetary cognitive ecology’ in which cognition is engaged in by ‘technical systems as well as biological life-forms’ and agency is thus more-than-human, distributed, and ‘punctuated’ (2017, p. 3).[2] Yet while Allen Newell and Herbert Simon announced in 1958 that ‘intuition, insight, and learning are no longer the exclusive possessions of human beings and any large high-speed computer can be programmed to exhibit them also’ (cited in Dreyfus and Dreyfus 1985, p. 3), various intuitive human gestures and capacities – whether folding laundry, recognizing hand-written characters, or distinguishing ‘enemy combatants’ from civilians (Suchman 2011, 2019) – have proven stubbornly difficult to replicate with machine intelligence, given the contextual embodied awareness they seem to require. This, however, is now arguably changing with techniques such as deep reinforcement learning, which, in enabling a kind of generalization that affords AI greater flexibility, promise to bring us tantalizingly closer to the ‘holy grail of … artificial general intelligence’ (Fazi 2020, p. 2).

In their recent survey of tech journalism, for example, Jacob Johanssen and Xin Wang note the rise of ‘artificial intuition’ as an industry buzzword referring to the ability of ‘AI systems to make intuitive choices and respond intuitively to problems’ through ‘subconscious pattern recognition’ (2021, p. 175-6). The term ‘intuition’ is linked primarily here to the capacity of advanced machine learning algorithms, such as neural networks, to learn, experiment, classify, and predict by continually ‘extracting features from [their] data environments’ (Amoore 2020, p. 65). Within computer science, nascent research on artificial intuition in decision-making defines it as an automatic process ‘which does not search for rational alternatives, jumping to useful responses in a short period of time’ (Johnny et al. 2020, p. 464). Drawing on network representations of past knowledge and experience, artificial intuition ‘combine[s] logic and randomness’ to assess problem contexts characterized by ‘partial information’ and select effective courses of action (2020, p. 466-7, 470). Tech industry visions of machine learning’s intuitive capacities are predictably celebratory: for Microsoft, cloud computing has ‘made us smarter’ and ‘more productive’, while artificial intelligence augments ‘humankind’s innate ingenuity’ (Smith and Shum 2018, p. 6, 35). Yet, as scholars of computational media suggest (Parisi 2013, 2019, Hansen 2015, Fazi 2020), current algorithmic modes of thought may offer less an augmentation of ‘the human’ than they do radically different modes of operation which are not subject to comparison to, or comprehension by, anthropocentric processes and capacities.
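
To give a rough, concrete sense of what such descriptions amount to, the toy sketch below combines pattern-matching against stored experience with a stochastic nudge to ‘jump’ to a response under partial information. It is emphatically not the model of Johnny et al. (2020); every episode, feature, and scoring rule in it is invented for illustration.

```python
import random

# A toy sketch of the 'logic and randomness' idea described above: match
# a partially observed situation against stored past experience, add a
# stochastic nudge, and jump straight to a response rather than searching
# exhaustively for a rational optimum. All data here is invented.

PAST_EPISODES = [
    # (remembered situation, action that worked at the time)
    ({"rain": 1, "rush_hour": 1, "accident": 0}, "take_train"),
    ({"rain": 0, "rush_hour": 1, "accident": 1}, "take_train"),
    ({"rain": 0, "rush_hour": 0, "accident": 0}, "drive"),
]

def intuit(partial_situation, temperature=0.3):
    """Pick an action from partial information without exhaustive search."""
    scores = {}
    for remembered, action in PAST_EPISODES:
        # count agreement only over the features actually observed
        overlap = sum(
            1 for k, v in partial_situation.items() if remembered.get(k) == v
        )
        noise = random.uniform(0, temperature)  # the 'randomness' component
        scores[action] = max(scores.get(action, 0.0), overlap + noise)
    return max(scores, key=scores.get)          # jump to a response

print(intuit({"rush_hour": 1}))  # 'take_train', despite two unknown features
```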

Critical computational literatures also examine the workings (and risks) of intuition within algorithmic governance, decision-making, and capitalization – in which machine learning increasingly acts ‘to control the flow of actions and future events’ (Bucher 2018, p. 28) and human experience is claimed ‘as free raw material for hidden commercial practices of extraction, prediction and sales’ (Zuboff 2019). Providing historical and socio-political context here, the political geographer Louise Amoore maps how, in the aftermath of September 11, 2001, emergent models for governing risk began to shift from a focus on statistical probability to algorithmic possibility. In this pre-emptive modality of risk, ‘data-led algorithms that model the movement of bodies or things across space coalesce with intuitive and speculative knowledges that imagine future scenarios’ (Amoore 2013, p. 10). Amoore’s analysis resonates, in this respect, with the philosopher Brian Massumi’s account of ‘ontopower’ as a newly consolidated mode of power led by pre-emption. As curated by both states and capital, ontopower entails what Deleuze termed ‘control’: a form of power characterized by environmental control that is more processually intense and far-reaching than sovereign power, disciplinary power, and biopower. Ontopower is, in this view, an intuitive power to incite and orient emergence that ‘insinuates itself into the pores of the world where life is just stirring, on the verge of being what it will become and yet barely there’ (Massumi 2015: xviii).

And this, perhaps, signals the crux of contemporary anxieties surrounding algorithmic life – the fear not only that our everyday habits, actions, and potentialities are being ‘pre-computed’ in the interests of powerful states and corporations, but also that the algorithmic recognition and reduction machine learning entails are foreclosing in advance the very possibilities of what the future could be. If intuition has been understood as a sensory-cognitive mode of attunement central to how we perceive, navigate, and transform our worlds, the growing ubiquity of artificial intuition raises concerns about the logics and reach of computational thinking and sensing, given how ‘algorithms now reconstruct and efface legal, ethical and perceived reality according to mathematical rules and implicit assumptions that are shielded from public view’ (Finn 2015, p. 20-21) – and, in turn, about what human ‘nature’, agency, affect, and culture now constitute when ‘humans are lodged within algorithms, and algorithms within humans’ (Amoore 2020, p. 58). As I discuss elsewhere (Pedwell 2021a), these everyday computational systems also work unevenly, (re)producing hierarchical modes of (non)humanity through their ontopolitical, biopolitical, and geopolitical logics[3] – compelling attention to the regulation, exclusion, and violence which algorithmically-mediated intuition may entail.

In what follows, I consider how, and with what potential effects, intuition emerges as a key transatlantic theme and organizing lens across my selected post-war moments of the 1950s, 1980s, and 2010s. I am also interested throughout in what it might mean to hone an intuitive method for approaching this partial and non-linear genealogy; in how, that is, we can retrospectively attune to emergent formations that are, in Williams’ words, ‘social and material but … in an embryonic phase before [they] can become fully articulated and defined’ (1977, p. 131). What makes intuitive sense to me, in this respect, is to begin in the middle where ‘force has not yet turned to form … where the event is still welling’ (Manning 2016, p. 15). As such, the next section enters the fray in the 1980s, with the emergence of a particular kind of speculative machine: the personal computer.

1980s: the rise of the personal computer and software cultures

With the transatlantic popularization of the personal computer, elements of Turing’s vision of the future of machine intelligence are brought to life in ways that generate a significant step-change in our affective and cultural relationships with computational technologies. If, in the 1940s and 1950s, digital computers were seen as ‘too fragile, valuable and complicated for nonspecialists’ (Rheingold 1985, p. 14), the 1980s mark the moment in Britain and North America when these new ‘thinking machines’ start to permeate public consciousness and transform everyday feelings, habits, and capacities as they enter homes, schools, and workplaces (Turkle 1984, 1995). Across the fledgling software cultures surrounding these developments, I want to suggest, this period is animated by an affective composite of excitement and anxiety – optimism about the role personal computers might play in extending human potential, connectivity, and political engagement, alongside fear concerning the prospect that they would rapidly usurp human qualities, labour, and expertise. Within these ambivalent atmospheres, debates concerning the logics of intuition play a significant role in mediating increasingly uncertain and changing human-machine relations.

For the visionaries of interactive computing, the personal computer had a speculative future as a digital medium that could radically extend human senses, capabilities, and possibilities. As the historian Howard Rheingold writes in Tools for Thought: The History and Future of Mind-Expanding Technology, these scientists and engineers imagined how, beyond its traditional role as a calculating machine, the digital computer might also enhance our capacity ‘to speculate, build and study models, choose between alternatives, and search for meaningful patterns in collections of information’ (1985, p. 15). What is particularly interesting, for our purposes, is how these post-war imaginings entangle thinking and sensing, cognition and affect, and calculation and intuition. In his 1963 paper ‘A Conceptual Framework for the Augmentation of Man’s Intellect by Machine’, for instance, the American computer pioneer Douglas Engelbart conjures a horizon of human–computer interaction in which our ‘native sensory, mental, and motor capabilities’ will be transformed within ‘an integrated domain where hunches, cut-and-try, intangibles and the human “feel for a situation” … coexist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids’ (1963, p. 1). Computing technologies open out here to an ‘integrated’ future in which intuitive forms of attuning, knowing, and navigating are generatively enmeshed with critical, analytical, and computational modes of thought.

These techno-social anticipations resonate with the STS scholar Sherry Turkle’s reflections on her MIT colleagues’ first encounters with computer simulation software in the 1980s. In MIT’s chemistry department, for example, the new PEAKFINDER programme could automatically analyze the molecular structure of a compound, saving students ‘painstaking hours at the spectrometer’. To some, the software was ‘liberating’ and ‘brought them closer to the chemistry by opening it up to visual intuition’ (1995, p. 64). One faculty member notes, for instance, how ‘a student can take thousands of curves and develop a feeling for the data. Before the computer nobody did that because it was too much work. Now, you can ask a question and say, “Let’s try it”’ (1995, p. 65). Yet what Turkle’s nuanced account also grapples with – in contrast to the more breathlessly affirmative imaginings of the computing pioneers – is the trepidation, ambivalence and, in some cases, outright contempt associated with these technologies. Some MIT physicists, for example, argued that the simulations ‘interfered with the most direct possible experience of [the physical] world’. In this view, relying on digital technologies ‘when you could directly measure the real world was close to blasphemy’ (1995, p. 65-66).

These affectively charged debates linked to the rise of personal computing elucidate the tensions inherent in contemporary critical deployments of Bergsonian intuition. From the perspective of Turkle’s 1980s computing advocates, ‘The machine doesn’t distance students from the real, it brings them closer to it’ (1995, p. 65). Yet for Bergson, writing amidst the popularization of new photographic technologies of the previous century, ‘true intuition’ was ‘an empiricism’ that implied the need for direct embodied experience rather than technologically-mediated perception (Coleman 2008, p. 112). This, I want to suggest, compels us to consider the implications of mobilizing Bergson’s account of intuition today, when figuring sensing, perception, or cognition as outside of, or free from, digital mediation feels increasingly tenuous if not impossible – a point I return to later on. The technological ambivalence Turkle highlights throughout both The Second Self (1984) and Life on the Screen (1995) also, however, draws out wider mixed feelings, anxieties, and fears concerning the increasing centrality of computers, software, and AI in everyday life which become palpable during this period.

In this context, intuition, I want to argue, emerges most prominently across Britain and North America as a lived, embodied, and sensory capacity which distinguishes humans from machines. This is evident, for example, in Turkle’s interviews with adults and children in the late 1970s and early 1980s, who similarly define what it means to be human on the basis of what computers can’t do, which centres largely on ‘intuition, embodiment, emotions’ and the possibility of ‘spontaneity’ (1995, p. 83; see also Turkle 1984). This framing of intuition as a defining quality of what it means to be human emerges more explicitly in the public and scholarly interventions of the American philosopher Hubert Dreyfus, who had argued since the 1960s that human intelligence was fundamentally different from computer intelligence and that, without embodied knowledge, computers were incapable of intellectual tasks that required intuition and experience. Dreyfus first articulated this position in his combative 1965 review of Newell and Simon’s AI research for the RAND Corporation (the national research thinktank offering analysis to the US military) – which he later expanded in his influential 1972 book What Computers Can’t Do: A Critique of Artificial Reason. By the early 1980s, Dreyfus was focused on the emergence of autonomous computer systems involved in expert governance, which were projected as having the capacity to replace aspects of human decision-making in political governance, international security, industry, science, medicine, and other professional realms (Rheingold 1985). These technological developments included the Star Wars Strategic Defense Initiative, a missile defense system proposed by President Ronald Reagan in 1983 to protect the US from strategic nuclear weapons, including submarine-launched ballistic missiles. Alongside the threat that AI was feared to pose to the labour market, then, was brimming anxiety concerning its use in the nuclear arms race on both sides of the Atlantic, which Dreyfus registered and reverberated.

Published in 1985, Dreyfus’s Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer is co-authored with his brother Stuart Dreyfus, who had been involved in programming the first-generation digital computers. For the Dreyfuses, computers, as used in AI, are analytic engines that specialize in rational computation – they can ‘apply rules and make logical inferences at great speed and with unerring accuracy’, but what they lack, which is fundamental to human conduct, is ‘a power of intuitive intelligence that enables us to understand, to speak, to cope skilfully with our everyday environment’ (1985: xx). Intuition here is associated with tacit skills, embodied awareness, sensory navigation, and ongoing attunement to change, in ways that link to Bergson’s and Berlant’s accounts – as well as the work of STS scholars such as Lucy Suchman (2007, 2019) (a former graduate student of Dreyfus), who has long identified situational awareness as exceedingly difficult to design into AI and robotics, and N. Katherine Hayles (1999), who, in her earlier work, critiqued the disembodied nature of first-order cybernetics and AI. What is ‘frightening’ about the Star Wars defense system, Dreyfus argues, is that it requires anticipation of all possible contingencies in order to code ‘rules for response’ and, as such, the litheness and immanent insight of expert human intuition is ‘forfeited’ (1985, p. 31).

If, in the 1940s and 1950s, enthusiasm for digital computing made the translation of ‘tacit’ human knowledge into ‘explicit’ machine knowledge seem straightforward – ‘no more than a technical problem on its way to being solved’ (Collins 2010, p. 7) – Dreyfus and other prominent critics in the 1970s and 1980s underscore the fundamental differences between computers and brains, and information processors and embodied beings. As the sociologist of science Harry Collins puts it, the promise of intelligent machines is often said to depend on ‘making the tacit explicit’; however, machines generally ‘cannot execute somatic-affordance tacit knowledge, because they are not made of the right kind of materials’ (2010, p. 11). Today, smart technologies embedded in everyday environmental networks are ‘increasingly able to sense, read, and learn from patterns of user behaviour’ in ways that have been called ‘intuitive’, yet, as the media scholar Tony Sampson urges, ‘we should not confuse deep learning with deep entanglement’ (2020, p. 19). Computers, Sampson claims, citing Collins, are fundamentally ‘mimicking machines’: they ‘can mimic complex chess or Go moves, but they are also prone to mimicking hostility, stupidity, and error at a subcritical level’ (19-20). Together, these perspectives not only foreground the distinctiveness of human intuition but also question the very possibility of ‘artificial intuition’ if intuition is, by definition, an embodied and situated capacity.

Yet, while contemporary scholars like Sampson are fundamentally interested in the immanent entanglement of humans and computers, for Dreyfus in the 1980s, what remains most important about human intuition is its difference from and superiority over ‘artificial’ expertise. The book’s ‘mind over machine’ motif is premised on the claim that computers are unable to deal with ‘uncertain data’ and have significant problems addressing change: calculating machines, the authors argue, lack the ability to form experience and to ‘apply what they know by recognizing the similarity of past experience with the present situation’ (1985, p. 89). To contemporary ears, however, this may sound like an apt description of what machine learning algorithms now specialize in – with their recursive ‘capacity to engage experimentally with the world, to dwell comfortably with contingent events and uncertainties, and yet always be able to propose, or output, an optimal action’ (Amoore 2020, p. 15). This points to a key weakness of the kind of analysis Dreyfus offers: it defines ‘human uniqueness in terms of machine performance, a definition that [always has to] remain one step ahead of what engineers could come up with next’ (Turkle 1995, p. 129-30). It is thus interesting, if unsurprising, that a re-issued version of Mind over Machine appears just three years later, in 1988, with a new preface indicating the authors’ newfound awareness of ‘neural nets’ (a concept dating back to the 1940s) – which reconfigures their analysis of ‘what happens in the brain of the expert’ (1988: xii) and how, in turn, this might be achieved computationally.

More problematic, in my view, however, is how Dreyfus’s implicit desire to re-establish human control and mastery over ‘thinking machines’ requires a rigid view of humans and machines as radically discrete: in which computers can only be imagined as limited calculating machines and ‘the human’ is, in turn, both curiously bounded and the locus of intuitive agency – despite how foregrounding intuition might work precisely to disrupt our lingering investment in human intentionality and to address the immanent complexity of human-technology entanglements. At the same time, in the long wake of ‘the Turing Test’ (1950), Dreyfus remains tied to a ‘simulative paradigm’ in which the ‘success’ of AI is assessed by measuring how well intelligent machines are able to mimic human cognitive, sensorial, and perceptual capacities – instead of recognizing how such technologies may exhibit agencies ‘dramatically alien to human thought’ (Fazi 2020, p. 2).

At this point, it is pertinent to acknowledge that, although Bergsonian intuition may seem to require embodied experience which foregrounds human sensory-cognitive capacities and eschews the distancing effect of technological mediation, the method of intuition focuses on the specificity of experience ‘for the explicit purpose of going beyond the “turn of experience” to explore that which conditions it’ (Lundy 2018, p. 40). As Deleuze powerfully highlighted, Bergson ‘aims to open us up to the inhuman and superhuman (durations which are inferior or superior to our own), and go beyond the human condition’ (cited in Lundy 2018, p. 41). As such, Bergson’s framework may be more amenable to addressing unfolding socio-technical collaborations, and the nature and possibilities of algorithmically-mediated intuition, than it first appears – in ways that far exceed Dreyfus’s explicitly human-centred account.

If we travel back in time to encounter earlier work in AI and cybernetics, we also arguably find more complicated, expansive, and affectively rich accounts of human-machine collaborations and their speculative futures – which the next section explores. Turing’s universal machine and the cybernetic thinking of the 1940s and 1950s have been widely associated with reductive computer-brain analogies that ‘erase the embodied nature of information through abstraction’ (Finn 2015, p. 648). My desire in what follows, however, is to ‘see more rather than less’ in these early computational sciences (Wilson 2010: x-xi) – including how their intuitive articulations of possibility, indeterminacy, and entanglement complicate and open up superficially analogical frameworks.

1950s: the birth of AI and cybernetics

The year 1950 marked the publication of Turing’s most famous paper, ‘Computing Machinery and Intelligence’, in the philosophy journal Mind. Opening with the provocative question ‘Can machines think?’, Turing situates the unsettled relations between mind and machine, physiology and technology, and human and non-human at the centre of his meditations on artificial intelligence. Quickly deeming this original question too imprecise to operationalize, he suggests an imaginative thought experiment instead: ‘the imitation game’. As has now been much rehearsed in computational literatures, this game would ‘be played by three people, a man (A), a woman (B), and an interrogator (C)’, who would ‘stay in a room apart from the other two’ and whose objective would be, through strategic questioning, to determine which ‘is the man and which is the woman’. Having outlined the ground rules, Turing then asks, ‘What will happen when a machine takes the part of A in this game?’ (1950, p. 433-4). When a digital computer can fool an average interrogator into believing it is a human subject at least three times out of ten after several minutes of questioning, Turing suggests that we will have to accept the existence of machine intelligence.

Turing’s longstanding interest in ‘thinking machines’ was stimulated by the emergence of the first general purpose digital computers at the end of the Second World War – including the Small-Scale Experimental Machine (SSEM) at the University of Manchester, to which Turing directly contributed (Hodges 1983). Although these early computers lacked the memory capacity, electronic speed, and programming sophistication to be serious contenders in the imitation game, it is important to underscore that Turing’s interest in 1950 – as in 1936 – is primarily speculative rather than empirical: ‘we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well’ (Turing 1950, p. 436). Indeed, Turing’s imperative here is not to operationalize the imitation game – later known as ‘the Turing test’ – to prove the existence of machine intelligence, or to provide criteria by which we might definitively either equate or distinguish ‘human’ from ‘machine’ (as Dreyfus and others go on to do). Rather, it is, as Elizabeth A. Wilson argues in Affect and Artificial Intelligence (2010), to voice a plea for imaginative expansiveness concerning the possible futures of machine intelligence – futures in which the boundaries between organic and inorganic, biology and technology, and human and machine are complex, emergent, and undetermined.

With respect to these questions of more-than-human boundaries, embodiments, and entanglements, the distinctly gendered element of the Turing Test (that the interrogator’s objective is to determine which of the other two participants ‘is the man and which is the woman’) is arguably not the ‘red herring’ some commentators have claimed.[4] Instead, we might consider it vital to the expansive questions about human-machine relations that consumed Turing. As Hayles argues, by foregrounding gender in the imitation game, Turing implies that re-thinking ‘the boundary between human and machine would involve more than transforming the question of “who can think” into “what can think”’ (1999: xiii). What the Turing test illuminates, from this perspective, is the wider contingency of ‘natural’ identity categories – whether concerning gender or other tenets of the liberal subject – while underscoring how computational technologies have become folded into embodied subjectivity to the extent that they cannot ‘be meaningfully separated from the human subject’ (xiii). While my intention here is not to posthumously cast Turing as a knowing purveyor of ‘gender trouble’ (Butler 1990), it does not seem irrelevant to note that two years after the publication of ‘Computing Machinery and Intelligence’ (1950), Turing was convicted of gross indecency for homosexuality and was prescribed twelve months of oestrogen injections (chemical castration) which caused him to begin growing breasts (Wilson 2010). In such conditions, questions of what constitutes ‘a man’ or ‘a woman’ may not have been far from Turing’s mind and may accordingly have informed his wider re-thinking of human-machine relations. We might say, then, that Turing’s intuitions about algorithmic life extended far beyond visions of ‘electronic brains’ to sense, if not fully articulate, wider emergent concerns about the normative dichotomies through which human nature is articulated.

While Turing’s intuitive and speculative approach to computational discovery reflected key aspects of his own subjectivity and life experiences, it also connected him to something bigger: the heady affective atmospheres of post-war technological innovation on both sides of the Atlantic. In this vein, we could, following Wilson (2010), situate Turing’s work within what the affect scholars Eve Sedgwick and Adam Frank famously called ‘the cybernetic fold’: a period ranging from the 1940s to the 1960s involving the interaction between ‘postmodern and modern ways of hypothesizing about the brain and mind’ (1995, p. 509). As a historical moment in which powerful new digital computers were on the horizon but ‘the actual computational muscle’ required to animate such technologies was not yet available (508), conditions were ripe for unbounded anticipations of their future possibilities. Alongside innovations in mathematical logic and computer engineering, this period witnessed the emergence of cybernetics as an ‘interdisciplinary science of communication, computation and automatic control’ (Conway and Siegelman 2005: xi), which would fundamentally refigure traditional mind/matter and animate/inanimate distinctions – while laying vital groundwork for the nascent fields of cognitive science and artificial intelligence.

In the US, the Macy conferences in cybernetics, inaugurated at the Beekman Hotel in New York City in 1942, brought together leading mathematicians including Norbert Wiener and John von Neumann with the pioneering neuroscientists Warren McCulloch and Walter Pitts, the information theorist Claude Shannon, and the anthropologists Margaret Mead and Gregory Bateson, among others. In Britain, the Ratio Club, which met in London between 1949 and 1958, enabled wide-ranging conversations among physiologists, mathematicians, and engineers, including Turing, as well as the neurophysiologist W. Grey Walter and the psychiatrist W. Ross Ashby. While Turing had envisioned his electric calculating machines as ‘giant brains’, the cyberneticists were the first to bring computational and neuro-physiological processes formally together – most influentially via McCulloch and Pitts’ early work on neural networks (1943), which not only introduced the computer as a generative (if partial and problematic) model for the brain but also ‘consolidated the notion that computers ought to be built as digital machines’ (Wilson 2010, p. 11, 13). Wiener, who met with Turing during a visit to England in 1947, also retained a keen interest in computer engineering, having anticipated in the 1920s that computers should be constructed with digital rather than analogue specifications (Conway and Siegelman 2005).

Two years prior to Turing’s landmark 1950 paper, Wiener published Cybernetics: Or Control and Communication in the Animal and the Machine (1948), the best-selling book which effectively inaugurated the field of cybernetics for the American and international publics. Synthesizing resources from mathematics, engineering, computer science, information theory, neuroscience, neurophysiology, psychology, and decision theory, Wiener pioneered a statistical probabilities-based approach to communications engineering and introduced the concept of feedback as what fundamentally links humans and certain kinds of machines. ‘Feedback’ is understood, most basically, here as a process in which the outcomes of past actions are taken as inputs for future action (Wiener 1948, 1950), and it is this recursive cycle that constitutes the ‘learning’ of contemporary machine learning algorithms. Alongside the contributions of the mathematicians Kurt Gödel, Alonzo Church and Turing (the Church-Turing thesis), Wiener’s cybernetic thinking was central to the twentieth century’s ‘transition from certainty to probability’ – as well as the emphasis on indeterminacy and the insight that ‘observation always affected the system being observed’ (Finn 2015, p. 27) which second-order cybernetics of the 1960s and 1970s powerfully highlighted.
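
Rendered computationally, this feedback cycle can be sketched as a minimal error-correction loop, as below, with all values invented for illustration: the outcome of each past action is compared with a goal, and the discrepancy is fed back to adjust the next action. Iterated over millions of parameters rather than a single heater setting, the same recursive pattern becomes the update loop of machine learning.

```python
# A minimal sketch of Wiener-style feedback, with all values invented:
# the outcome of a past action is compared with a goal, and the error is
# fed back as the input that adjusts the next action.

target = 20.0    # desired room temperature (the goal)
setting = 0.0    # heater power: the action the loop adjusts
gain = 0.1       # how strongly the error corrects the next action

def room_temperature(power):
    """A stand-in for the world's response to the action."""
    return 5.0 + 1.5 * power

for _ in range(50):
    observed = room_temperature(setting)  # outcome of the past action ...
    error = target - observed             # ... compared with the goal ...
    setting += gain * error               # ... fed back into future action

print(round(room_temperature(setting), 2))  # ~20.0: the loop has 'learned'
```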

For a jargon-heavy book filled with mathematical equations, Cybernetics was surprisingly popular, undergoing five print runs in its first six months and featuring in 1948 issues of Scientific American, Newsweek, and Time. Cybernetic ideas appealed to an American public who – at once optimistic about extending the technological innovations of WWII and yet sensitized to the horrors of the atomic bomb – had started making their own speculative links between humans and intelligent machines (Conway and Siegelman 2005). Moreover, on both sides of the Atlantic, cybernetics was more than a narrow computational engineering enterprise; it was, as the sociologist Andrew Pickering contends, a new way of thinking about the world premised on a ‘non-modern ontology’, in which reality is always ‘in the making’ and ‘people and things are not so different after all’ (2010, p. 18). The cybernetic concept of ‘black boxes’, which recognized that elements of recursive technologies could not necessarily be grasped by humans and could, as such, surprise us, also suggested a philosophy of ‘unknowability’ at odds with the modernist ethos of transparency and elucidation.

What is particularly significant here, for our purposes, are the resonances between cybernetic accounts of feedback and philosophies of intuition. For Wiener, neither human beings nor intelligent machines are isolated systems: both possess ‘sense organs’; that is, ‘a special apparatus for collecting information from the outer world’ (1950, p. 26) – information which is fed back into the system to inform future operations. We can consider Wiener’s description of the sensing capacities required for feedback in relation to Berlant’s (2011) account of intuition as involving ‘sensual data gathering’ through which we navigate the emergent present and feel-forward into the future. For Berlant, similar to Bergson and Williams, intuition involves bodies ‘continually busy judging their environments and responding to the atmospheres in which they find themselves’ (2011, p. 15). Early cyberneticists, in turn, built physical machines that could respond intuitively to their unfolding physical and algorithmic environments – whether the feedback mechanisms involved were ‘as simple as photoelectric cells which change electronically when a light falls on them’ or as complicated as those found within ‘high-speed electrical computing machines’ (Wiener 1950, p. 22, 24). Cybernetics, then, played a foundational role in actualizing the immanent, more-than-human, and technological aspects of intuition – albeit in ways that could lapse into too-easy human-machine equivalencies and gloss over vital embodied and situated particularities. As I will elaborate below, however, my interest here is not in reproducing reductive analogical models of ‘human’ and ‘artificial’ intuition, but rather in glimpsing more complex articulations of deep entanglement within these mid-century cybernetic accounts.

Turing and Wiener shared an intuitive and speculative mode of computational thinking; they also laid conceptual, logical, and technical seeds for contemporary algorithmic modes of cognition, anticipation, and pre-emption. Like Turing, Wiener saw that genuine scientific innovation depended not on strict disciplinarity or organic/inorganic boundaries, but rather on imaginative engagement with ‘the neglected no-man’s land between the various established disciplines’ which troubled normative human-machine relations (Wiener 1948, p. 2). This became urgently apparent to Wiener during WWII when he researched antiaircraft fire control for the US government amid Germany’s catastrophic aerial attack on Britain. Wiener’s objective here, as the historian Peter Galison notes, was to design an intuitive algorithmic calculating device which could model ‘an enemy pilot’s zigzagging flight, anticipate his future position, and launch an antiaircraft shell to down his plane’ (1994, p. 229). He saw that, when it came to a ‘pilot, flying amidst the explosion of flak, the turbulence of air, and the sweep of searchlights, trying to guide an airplane to a target’ (236), the statistical design of feedback mechanisms needed to address the unfolding interplay of machinic and human neurophysiological processes. While Wiener’s ‘antiaircraft predictor’ did not fully materialize during the war years, his collaborative work with the American computer engineer Julian Bigelow and the Mexican neurophysiologist Arturo Rosenblueth endowed him with an understanding of human-machine relations in which ‘soldier, calculator, and fire-power [operated as] a single integrated system’ (235). It was this model of human-machine assemblage, alongside the ideas of feedback systems and black boxes, that animated Wiener’s wider cybernetic vision.
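
A drastically simplified stand-in for such a device is sketched below: it fits a straight line by least squares to a few invented position readings and extrapolates where the target will be at a future moment. Wiener’s actual predictor involved far richer statistical filtering of noisy, zigzagging flight paths; the sketch conveys only the basic anticipatory logic of modelling past motion to pre-empt future position.

```python
# A drastically simplified stand-in for an antiaircraft predictor: fit a
# straight line by least squares to recent (invented) position readings
# and extrapolate to where the target will be at a future time. Wiener's
# actual work used far richer statistical filtering than this.

def predict(times, positions, t_future):
    """Extrapolate a least-squares line fit of position over time."""
    n = len(times)
    mean_t = sum(times) / n
    mean_p = sum(positions) / n
    slope = (
        sum((t - mean_t) * (p - mean_p) for t, p in zip(times, positions))
        / sum((t - mean_t) ** 2 for t in times)
    )
    intercept = mean_p - slope * mean_t
    return intercept + slope * t_future

observation_times = [0.0, 1.0, 2.0, 3.0]     # seconds
observed_positions = [0.0, 9.8, 20.3, 29.9]  # noisy, roughly 10 units/s

# aim where the plane is expected to be when the shell arrives at t = 5
print(predict(observation_times, observed_positions, 5.0))  # ~50
```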

In contrast to 1980s accounts of intuition as what fundamentally distinguishes human beings from intelligent machines, or contemporary figurations of cybernetics as having only reductive brain-computer equivalences to offer, the computational imagination of ‘the cybernetic fold’ prefigured a more-than-human intuitive capacity that ‘threatened the modern boundary between mind and matter’ (Pickering 2010, p. 18). Cybernetics thus brings into focus intuition’s operation, beyond the individual organism, as a shifting set of bio-technical relations that emerge from the ontological entanglement of human, machine, and environment. The intuitive, embodied thinking-in-motion conjured by Turing’s long distance running with which this article opened is, of course, vastly different from statistical feedback mechanisms designed to model ‘the tangle of physical and neurophysiological factors involved in antiaircraft fire control’ (Conway and Siegelman 2005, p. 122), or from how artificial intuition systems combine ‘logic and randomness’ to assess problem contexts characterized by ‘partial information’ (Johnny et al. 2020, p. 466-7). Yet, as I will argue in the final section of this article, all are part of a wider collective, distributed, and recursive intuition comprised of unfolding forms of sensing, knowing, anticipating, and navigating that imbricate human and non-human, and bio-physiological and technical-machinic entities, functions, and processes.

If, however, I sound overly nostalgic or naïve about the speculative promise of cybernetics and early AI, it bears emphasizing that the entanglement of these fields with military research and technologies is deeply problematic (Galison 1994, Pickering 2010). Indeed, the disquieting legacies of Wiener’s antiaircraft predictor are to be found in today’s cruise missiles and drone technologies. In turn, contemporary algorithmic governance, surveillance, and capitalization would not have been possible without Turing’s universal machine and the re-imagining of algorithmic logics and possibilities it entailed. As the media scholar Ed Finn argues, the Church-Turing thesis marks the inception point for a troubling ‘gravitational pull’ to organize the universe according to a computational logic which elides ‘crucial aspects of complex systems with abstracting gestures’ and reimagines value itself in computational terms (Finn 2015, p. 24, 51). And, with respect to algorithmic culture in particular, the more we delegate the work of adjudicating cultural value, significance, and desire to computational systems, the more we bolster ‘a court of algorithmic appeal’ in which Amazon, Google, Facebook, Twitter, and Netflix preside as the ‘new apostles of culture’ (Striphas 2015, p. 407), and objects, ideas, and practices are heard, cross-examined, and judged ‘independently, in part, of human beings’ (Hallinan and Striphas 2016, p. 129).

Nonetheless, while Wiener was prescient about the future qualities of computational technologies, he was also distinct among his colleagues in anticipating and seeking to pre-empt their dangers and, in that way, pioneered his own AI ethics and politics. After the US detonated the atomic bombs on Hiroshima and Nagasaki, Wiener refused to accept any further military-linked government research funding – a decision that severely impacted his own career trajectory (Conway and Siegelman Citation2005). He envisioned the coming links between algorithmic architectures and capitalist extraction and argued that AI should not be dictated by the interests of ‘the market’ but rather by the possibility of human flourishing; a perspective he outlined most influentially in The Human Use of Human Beings: Cybernetics and Society (1950). Moreover, like Turing, Wiener foresaw the non-anthropocentric futures of machine learning and how, once algorithms became capable of learning and transforming themselves ‘at the level of policy’ (Citation1960, p. 59), they would work in ways we would not necessarily be able to sense, perceive, or understand – let alone control. These mid-twentieth-century intuitions offer a useful bridge to our contemporary period, which, I suggest, is characterized by a growing awareness of the presence of algorithms in everyday life alongside an opaqueness (or black box) obscuring when, where, and how, exactly, processes of algorithmic recognition, surveillance, and modulation are operating.

2010s: inhabiting algorithmic life

Our current historical moment is one in which software, AI, and algorithms are increasingly shaping the conditions and possibilities of social existence – pushing forward Turing’s speculative account of a future in which ‘machines would exceed the rules-based decision procedures and extend to the affective pull of intuitions for data’ (Amoore Citation2020, p. 57). For the cultural studies scholar Ted Striphas, ‘algorithmic culture’ concerns ‘the enfolding of human thought, conduct, organization and expression into the logic of big data and large-scale computation’ (Citation2015, p. 396). The media theorist Taina Bucher, in turn, understands the ‘algorithmic imaginary’ to entail not only the ‘mental models’ that people develop in relation to algorithms but also ‘the productive affective power that these imaginings have’ (Citation2017, p. 41) – and, crucially, how, in ‘ranking, classifying, sorting, predicting, and processing data’ algorithms ‘make the world appear in certain ways rather than others’ (Citation2018, p. 3). With the post-millennial rise of Web 2.0 and a range of social media, entertainment, retail, governmental, and logistical platforms powered by machine learning, expressions such as ‘algorithmic life’, ‘algorithmic logic’, ‘algorithmic imagination’, and ‘algorithmic bias’ now punctuate public discourses, while critical commentators debate the nature and implications of ‘algorithmic thought’ in our unfolding ‘common space of decision, classification, prediction and anticipation’ (Mackenzie Citation2017, p. 10). Our computational age, then, is one ruled by the spectre of the algorithm as an underlying mechanism for interpreting the universe; the product of the pioneering desires of Turing, Wiener, and computing history more generally to ‘make the world effectively calculable’ (Finn Citation2017, p. 21, 26). If earlier historical periods foregrounded metaphors of brain as computer and computer as brain, today’s techno-social atmospheres open out to more disparate concerns about pervasive algorithmic architectures and ‘data fields [that] pass in and out of bodies’ (Clough et al. Citation2015, p. 103) – alongside a range of swirling sensations linked to the increasingly ‘pre-computed’ nature of everyday life.

We might say, then, that ‘the algorithmic’ animates an emergent structure of feeling for our times which bridges the Atlantic and extends far beyond.Footnote5 Like any structure of feeling, this one is brimming with intensity, and yet always unfolding and ‘not yet come, often not even coming’ (Williams Citation1977, p. 130). Resistant to comprehensive elucidation, it is more likely to be fleetingly glimpsed or intuitively sensed; a flickering awareness, that is, of one’s feelings, choices, and actions being tracked, anticipated, and shaped in ways that may be variously experienced as worrying, frustrating, pleasurable, humorous, numbing, or violent – or, perhaps, as an uncanny reminder of what Wendy Hui Kyong Chun (Citation2016) calls the inherent ‘creepiness’ of digital media. If ‘intuition’ describes how we might affectively register this computationally-mediated structure of feeling, it also signals something vital about its nature: a pulsating atmosphere conjured by algorithms attuned to our clicks, likes, and shares; social media platforms that automatically identify faces in photographs; and search engines that instantly interpret mistyped terms to return ‘the right’ results. Looking forward, IBM has a patent for a technology enabling search engines tailored to our ‘current emotional state’, as interpreted from facial recognition via webcams, heart rate scanning, and even measurement of brain waves (Fussell Citation2018) – a telling glimpse into the future of artificial intuition, and the intensified entanglements of human biological, physiological, affective, psychic, and behavioural data with computational and corporate infrastructures it portends.

Such machine learning innovations – including computer vision, natural language processing, increased contextual awareness, and a capacity to recognize the emotional tenor of human voices – are presented as allowing intelligent systems and devices to operate in a more fluid and intuitive way, ‘responding to us based not just on who we are, generally, but who we are in a given moment’ (Fussell Citation2018). Much of this speculative computational activity, however, is happening invisibly, or in the background, in ways we may only ephemerally discern; until, of course, something ‘goes wrong’ – whether this is as mundane as a humorously misaligned advertisement (Bucher Citation2018), or as consequential as being denied healthcare by an algorithmic system that interprets any application mistake as ‘failure to cooperate’ (Eubanks Citation2017). It is in these moments that incipient algorithmic atmospheres become terrifyingly palpable – and the socio-political values, habits, and hierarchies woven through them slide sharply into view.

What has been diagnosed as ‘the algorithmic condition’ (Coleman et al. Citation2018), however, is not only about how people think, talk, or become aware of algorithms in daily life, but also about how the growing ubiquity of machine learning technologies may be transforming the nature of thought itself – and related processes of intuition, anticipation, and speculation. In Thumbelina: The Culture and Technology of Millennials (Citation2015), for instance, the late French philosopher Michel Serres suggests that our increasing delegation of mental synthesizing and processing to smart devices has produced a generation of digital humans programmed in an ‘algorithmic mode of thought’, which is procedural, technical, calculative, and data-oriented. As other media scholars have argued, though, the rise of algorithmic thought involves not only ‘the insertion of procedurality and quantification into human experience’ (Bucher Citation2018, p. 31) but also the implications of interchanges among procedurality and computational error, undecidability, leaky ‘closed’ systems, and bad data entry and capture processes (Sampson Citation2020) – as well as the ways in which machine learning practices are ‘embedded with human prejudice and discrimination … at the levels of procedure, prediction and logic’ (Chun Citation2021, p. 16). Yet what is significant about Serres’ millennial prototype, ‘Thumbelina’, is how she combines algorithmic thinking with ‘an innovative and enduring intuition’ (Citation2015, p. 19). Precisely because she no longer has to dedicate so much mental energy and neural capacity to gathering, storing, and organizing information, this ‘new human’ may cultivate a more intuitive mode of engagement attuned to the visceral experience and flow of everyday life. The emergence of ‘an authentic cognitive subjectivity’ (19) which sutures human and machine modes of sensibility, perception, and thought raises important questions concerning what ‘human nature’ can now be said to entail (see Pedwell Citation2019).

If such interventions consider how computational technologies may be radically re-mediating ‘the human’, others argue that what is most significant about current algorithmic systems is their disregard for anthropocentric processes and experiences. Discussions of ‘algorithmic culture’ address, for instance, how the use of statistical approaches such as Singular Value Decomposition (a matrix factorization technique from linear algebra) has enabled entertainment platforms like Netflix to intuit subtle human behaviours and latent correlations which operate ‘beyond human perception, language, and sense-making’ in order to optimize their recommendation algorithms (Hallinan and Striphas Citation2016, p. 125). From this perspective, machine learning innovations which endow AI with greater intuitive flexibility and generalizability do not seek to simulate human sensory, cognitive, or perceptual functions; instead, they hone computational capacities that may be incommensurable with them and, as such, entail ‘inexperiencable experience’ (Chun Citation2016, p. 55; see also Hansen Citation2015). Artificial intuition accordingly elaborates ‘visual information that humans cannot even receive or perceive’ and constructs ‘representations that are more relevant than those that any human computer could have identified’ (Fazi Citation2020, p. 12) – while operating within durations (Bergson Citation1903) outside of human time, space, or sense perception.
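
The factorization at issue here can be glossed concretely. The following is a minimal, purely illustrative Python sketch – the toy ratings matrix, the variable names, and the choice of k are hypothetical, and production recommender systems treat missing ratings far more carefully than a raw SVD does:

    import numpy as np

    # Hypothetical viewer-by-title ratings matrix (toy values standing in
    # for the sparse behavioural data a platform would actually hold).
    ratings = np.array([
        [5.0, 4.0, 0.0, 1.0],
        [4.0, 5.0, 1.0, 0.0],
        [1.0, 0.0, 5.0, 4.0],
        [0.0, 1.0, 4.0, 5.0],
    ])

    # SVD factors the matrix into latent dimensions: ratings = U @ diag(s) @ Vt.
    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

    # Retain only the k strongest latent 'taste' dimensions and reconstruct;
    # scores in previously unrated cells then act as predicted affinities.
    k = 2
    predicted = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    print(np.round(predicted, 2))

The point of the sketch is that the retained ‘taste’ dimensions are not genres or moods that any analyst named in advance; they are latent axes the factorization itself produces – one register of the inhuman operations described above.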

This is why some computational media scholars insist that there is no necessary or direct relationship between algorithmic thought – or, in our case, artificial intuition – and human aptitudes or affects. As Luciana Parisi puts it, ‘soft(ware) thought’ is not what ‘affords the mind new capacities to order and calculate’ or what ‘gives the body new abilities to navigate space’; rather, it involves the automated prehension of infinite data that cannot be fully compressed, comprehended, or sensed by totalities such as ‘the mind’, ‘the machine’ or ‘the body’ (Citation2013, p. xviii). Thus, if Turing’s imitation game inaugurated a ‘simulative paradigm’ (Fazi Citation2020) for AI in which biological and mechanistic processes of cognition came to be figured comparatively or analogically, and Wiener’s cybernetics proposed a capacity for recursive feedback as what links humans and machines with ‘sense organs’, a new techno-social paradigm is consolidating, in which what constitutes thinking, sensing, or intuiting in machine learning is not expressible in human terms, and algorithmic systems have become too immense, complex, and unwieldy to control via feedback in the way first-order cybernetics imagined. What is vital to post-cybernetic logic is, as Patricia Clough articulates, not ‘the reliable relationship between input and output’ (Citation2018, p. 104), but rather the speculative capacity to generate value through leveraging computational uncertainty itself.

But what is the nature of this ‘inhuman’ intuition – and how does it inevitably draw on and become entangled with human life and culture? Within artificial intuition, rules of association are formed on the basis of past knowledge and experience via an algorithmic ‘mapping function associating elements from the knowledge set with the experience set’, which enables recognition of ‘subtle likenesses to cues found in past episodes despite many differences in the current situation’ (Johnny et al. Citation2020, p. 465, 467). To say that machine learning algorithms act intuitively when they engage speculatively with their data environments, then, is to describe probabilistic models making rapid decisions about classification amidst noise, clutter, and occlusions (Amoore Citation2020), which, in turn, provide feedback used by an algorithm to recursively adapt its parameters of recognition. To refer to machine learning as intuitive, however, is also to foreground the capacity of neural networks to form complex correlations across large sets of data in real-time, enabling anticipation of associated behaviours to be tracked, amplified, and/or optimized. Increasingly drawing on sensory, biological, physiological, and behavioural data to inform their speculative action, these computational architectures participate in contemporary forms of ‘ontopower’ (Massumi Citation2015) as they elicit and order emergence: the flow of experience, possibility, and becoming in the world.
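
This mapping logic can be sketched schematically. The Python fragment below is not the model of Johnny et al., only an illustrative gloss under invented assumptions (toy feature vectors, made-up episode names, and cosine similarity standing in for ‘subtle likeness’): a partially observed present is associated with the past episode it most resembles, whose action is then reused:

    import math

    # A toy 'knowledge set' of past episodes: a feature vector describing the
    # situation, paired with the action that resolved it. Names are invented.
    knowledge = {
        "slow_traffic": ([0.9, 0.1, 0.3], "reroute"),
        "sensor_noise": ([0.2, 0.8, 0.5], "resample"),
        "novel_object": ([0.4, 0.4, 0.9], "slow_down"),
    }

    def similarity(a, b):
        # Cosine similarity: high when a present cue 'rhymes' with a past one,
        # even if the two situations differ in many particulars.
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norms

    def intuit(current_cues):
        # The mapping function: associate the partially observed present with
        # the most resembling past episode and return its associated action.
        vector, action = max(knowledge.values(),
                             key=lambda episode: similarity(current_cues, episode[0]))
        return action

    print(intuit([0.8, 0.2, 0.4]))  # -> 'reroute'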

Of course, whether or not a person, thing, or event can be recognized within a given machine learning program at all ‘depends on what the algorithm has been exposed to in the world’ (Amoore Citation2020, p. 73). The regimes of recognition underlying artificial intuition are thus premised on the politics of exposure, amenability to categorization, and the temporal logic of precomputation. To precompute is, as Amoore argues, ‘to already be able to recognize the attributes of something in advance … to anticipate every encounter with a new subject or object, a new tumour or terrorist, by virtue of its proximity to or distance from a nearest neighbour’ (Citation2020, p. 79). If Bergsonian intuition connects experience with experiment to remain open to the immanent possibility of ‘the virtual’ and ‘the new’, important questions arise regarding artificial intuition’s capacity to extend affirmatively towards (or indeed to suppress or circumvent) multiplicity, mutation, and ‘the as-yet-unseen, the as-yet-unthought, the as-yet-unfelt’ (Manning Citation2016, p. 23). In the earlier example concerning Netflix’s use of algebraic factoring techniques, for instance, what is at stake is ‘how to moderate elements of the cultural field that may present themselves as typical or outstanding, so that they can be led to make sense relative to other, more even-keeled examples’ – an approach that may create ‘a closed commercial loop in which culture conforms to, more than it confronts, its users’ (Hallinan and Striphas Citation2016, p. 122). While the objective of data-driven media platforms is to produce novelty by processing information usually discarded as noise (Clough et al. Citation2015), this ‘novelty’ is, some argue, inevitably the product of things that have already been seen and experiences that have already happened.
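
The logic of precomputation can likewise be made concrete. In the following toy nearest-neighbour sketch (the points, labels, and deliberately threshold-free design are invented for illustration), every new input – however genuinely novel – is returned to the vocabulary of categories the system has already been exposed to:

    # Toy nearest-neighbour 'precomputation': every input is classified by its
    # proximity to what the system has already seen. Points and labels invented.
    seen = [
        ((1.0, 1.0), "benign"),
        ((1.2, 0.8), "benign"),
        ((5.0, 5.2), "suspicious"),
    ]

    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def recognize(x):
        # However genuinely new x may be, it is assigned the label of its
        # nearest stored neighbour; there is no category for 'as-yet-unseen'.
        nearest_point, label = min(seen, key=lambda pair: distance(x, pair[0]))
        return label

    print(recognize((9.0, 0.5)))  # still returns one of the old labels

There is, by construction, no output for emergent difference here: novelty registers only as proximity to a stored past.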

These concerns about how, and to what ends, machine learning intuition deals with emergent difference are, of course, compounded by the ubiquitous folding of algorithms into profit-driven platforms. Amidst the consolidation of what Shoshana Zuboff calls ‘surveillance capitalism’, a wide range of corporate and public actors now mobilize algorithmic architectures to claim ‘human experience as free raw material for translation into behavioural data’ (Citation2019, p. 8; see also Andrejevic Citation2013). Significantly, these computational practices are not merely anticipatory; they actively ‘nudge, coax, tune, and herd behaviour towards profitable outcomes’ (8). In 2016, for example, Facebook unveiled FBLearner Flow, an AI-enabled ‘prediction engine’ which offers advertisers the ability to target users on the basis of ‘how they will behave, what they will buy, and what they will think’, using data related to ‘location, device information, Wi-Fi network details, video usage, affinities, and details of friendships’ (Biddle Citation2018: online). As the technology journalist Sam Biddle notes, this kind of targeted advertising is not dissimilar to Cambridge Analytica’s infamous ‘psychographic’ profiling of voters to intervene in the 2016 American presidential election and the UK’s 2016 ‘Brexit’ referendum. Yet, if political consultancies are limited to ‘the data they can extract from Facebook’s public interfaces, Facebook is sitting on the motherload, with unfettered access to staggering databases of behaviour and preferences’ (Biddle Citation2018: online). Seeking to translate all human affect and action into potential data points for the generation of capital or political gain, these corporate algorithmic systems immanently shape individual and collective behaviour, while reifying reductive emotional typologies and narrow parameters of cultural value.

If intuition is, as Berlant suggested, ‘a trained thing’, what does it mean for the very ‘conditions of the intelligible and sensible’ (Bucher Citation2018, p. 3) to be increasingly constituted by software and algorithms thoroughly enmeshed with capital and other dominant political interests? Within affective atmospheres animated by the capitalist logics of precomputation, speculative hopes for radical socio-political and cultural transformation may increasingly give way to disaffection, resignation, or alienation. As Marshall McLuhan argued from the vantage point of his own historical moment, ‘Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit from taking a lease on our eyes and ears and nerves, we don’t really have any rights left’ (Citation1964, p. 15).

Conclusions: ‘an imaginative forward glance at history’

In his blueprint for the universal machine, Turing (Citation1936) demonstrated that ‘computable numbers’ indicated the existence of ‘the incomputable’. In doing so, he famously articulated ‘the halting problem’: no general algorithm can ever exist which determines in advance whether an arbitrary computer program will finish running or continue on forever. Some media scholars have linked the logical inconsistencies of the formal systems embraced by cybernetic frameworks to an assemblage of conditions enabling ‘the hostility, stupidity, and error-prone nature of uncritical mimicking technologies’ (Sampson Citation2020, p. 20; see also Chun Citation2021). Yet for other thinkers the possibility of different (and better) social futures within algorithmic life lies precisely in the speculative promise of this computational undecidability. Hayles argues, to this effect, that Turing’s vital legacy has been to illustrate the generative limits of pre-emptive control: ‘the more control is codified and extended through computational media, the more apparent it becomes that control can never be complete’ (Citation2017, p. 203). Consequently, precomputation is never all-encompassing; errors and contingencies can, at key moments, alter a recursive system so that it ‘begins to operate in new and unanticipated modes’ (197) – whether this concerns ‘affective capitalism’ or other ‘entrenched hierarchies of privilege and the institutionalized racisms associated with them’ (193). What the cybernetic concept of the black box signifies, in this view, is ‘the unknowable’ within computation itself; and it is this elemental indeterminacy that constitutes ‘the utopian potential of cognitive assemblages’ (188-89, 202) – a kind of return, perhaps, to Plato’s view of intuition as unlocking ‘the eternal’.
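
Turing’s argument survives translation into contemporary code. The sketch below is the standard textbook diagonalization rather than Turing’s own formalism: suppose a total function halts() existed that could always decide the question, and a short program turns that supposition against itself:

    # Standard diagonalization gloss of the halting problem. Suppose, for
    # contradiction, that a total decider halts(f, x) existed, returning True
    # whenever f(x) eventually halts and False whenever it runs forever.

    def halts(program, argument):
        raise NotImplementedError  # no correct general implementation can exist

    def paradox(program):
        # Do the opposite of whatever the decider predicts about running
        # 'program' on its own source.
        if halts(program, program):
            while True:   # loop forever if predicted to halt
                pass
        else:
            return        # halt if predicted to loop

    # paradox(paradox) would halt if and only if it does not halt, so no
    # such halts() can exist: 'the incomputable' is internal to computation.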

Recognizing the extent to which unknowability and contingency characterize contemporary algorithmic architectures is important for understanding how these systems could operate otherwise – for how recursivity could be opened ‘up to many different and thus transversal epistemologies’ (Parisi and Dixon-Román Citation2020: np). And yet, overinvesting in the potential of computational indeterminacy to deliver wider socio-political change risks reifying a different (and arguably less philosophically generative) kind of black box (Latour Citation2005, Bucher Citation2018) – one, that is, which obscures the ongoing more-than-human collaborations via which machine learning intuition both emerges and shapes emergence itself. As cybernetics, STS, actor-network theories, affect studies, and ecological media scholarship have long illustrated, beneath any technology presented as ‘artificial’, ‘automatic’, or ‘autonomous’ lie dense networks of sensory, material, cultural, and socio-technical relations.

From this perspective, confronting the sensorial, socio-political, and ethical issues algorithmic modes of thinking and sensing raise requires acknowledging that ‘all modes of autonomy are acquired affectively and relationally’ (Wilson Citation2010, p. 85) – and attending to the human-algorithmic entanglements central to machine learning intuition. We might consider, in this vein, Amoore’s discussion of surgical robotics in Cloud Ethics: Algorithms and the Attributes of Ourselves and Others (Citation2020). Contemporary surgical systems such as Intuitive Surgical’s da Vinci robot, Amoore suggests, illuminate not only the speculative links between intuition and mathematics that Turing articulated, but also the fundamentally relational and contingent nature of ‘autonomous’ AI. Distributed, collaborative, and always unfolding, the more-than-human intuition at play here senses its way towards choices and outcomes through overlaying and intermeshing human, superhuman, and inhuman durations. Working experimentally with mass quantities of data at an inhuman scale and speed, the algorithms extract the features of movement from surgical gestures to hone ‘the spatial trajectory of the act of suturing flesh’ (Citation2020, p. 59). In turn, the embodied sensing, navigation, decision-making, and action engaged in by the surgeons is actively shaped by ‘algorithmic judgements, assumptions, thresholds and probabilities’ (64) – though never in ways fully transparent to human understanding. We might say, then, that this algorithmically-mediated intuition extends early visions of human-machine co-constitution and philosophical unknowability offered by cybernetics and AI, while addressing reductive and disembodied computational models of mind associated with these paradigms. Without reifying human exceptionality, it foregrounds the necessary relations human systems have with ‘nonhuman knowledges, objects and routines’ and highlights the imbrication of ‘digital and analogue modalities of thought and feeling’ (Wilson Citation2010, p. 91) vital to contemporary algorithmic life.

Together, these elements underscore a key claim emerging from the partial post-war genealogy this paper has unfolded: in illuminating more-than-human processes of distributed cognition and relational affect, intuition is generative precisely for how it troubles lingering investments in the bounded, autonomous, and intentional human, as well as articulations of human-machine interaction that fail to address the fully imbricated yet asymmetrical qualities of such relations. Indeed, ‘relationality’ and ‘entanglement’ here do not signify comparability or commensurability – humans and algorithms engage in radically different operations across divergent temporalities and spatialities, which nonetheless interact to produce particular worldly possibilities and outcomes. Intuition thus lies at the heart of an ‘originary technicity’ in which ‘the human’s early relationship to the liveness of the nonhuman never really comes to an end’ (Clough Citation2018: xxxi) – and homology or analogy matter less than the ‘imbrication of relations’ (Sampson in Hayles and Sampson Citation2018, p. 77). A vital ongoing challenge, then, is how to approach complex political and ethical quandaries wherein the primary unit of investigation is neither ‘(ir)responsible human’ nor ‘errant machine’ but instead emergent ‘human-algorithm composite’ – within affective atmospheres in which intuition ‘never meaningfully belonged to a unified “I” who thinks’ (Amoore Citation2020, p. 67), and indeterminacy is not only computational but also affective, psychic, socio-political, cultural, economic, and ecological.

While first-order cybernetics may now appear outmoded or irrelevant given the complexities of current machine learning ecologies, Wiener’s vision of the intuitive and speculative forms of attunement required for an ethics of AI remains salient. Scientists, Wiener notes, ‘must work as part of a process whose timescale is so long that [we] can only contemplate a very limited sector of it’ (Citation1960, p. 88). Even when we believe ‘that science contributes to the human ends which [we have] at heart, [our] belief needs a continual scanning and re-evaluation which is only partly possible’ – a recursive process of feedback amid uncertainty that ‘requires an imaginative forward glance at history which is difficult, exacting, and only limitedly achievable’ (Citation1960, p. 88). Far from assuming that ‘the faster we rush ahead to employ the new powers for action which are opened up to us, the better it will be’, Wiener urges us to ‘exert the full strength of our imagination to examine where the full use of our new modalities may lead us’ (88).

Written in the aftermath of WWII and amid the escalation of the Cold War, Wiener’s phrase ‘an imaginative forward glance at history’ feels significant todayFootnote6: on one hand, it enlists our collective intuitive capacity to speculate on possible techno-social futures that may, over time, become history; on the other hand, it compels us to return repeatedly to ‘the past’ to sense how it remains live and unfinished within our unfolding computational present. Through the mediating lens of current algorithmic structures of feeling, Wiener’s vision also speaks, I want to suggest, to the recursive nature of all forms of intuition – and to the role such immanent intuitional multiplicity might play in enabling us to affectively inhabit the fundamental ambivalence of technological ‘progress’ – while appreciating how many elements of contemporary algorithmic cultures are not immediately amenable to human sensibility and that, as such, experience ‘is simply not what it used to be’ (Hansen Citation2015, p. 23).

If the ‘cybernetic fold’ of the 1950s offered a moment of expansive imagination concerning computational possibilities at the cusp of being actualized (Sedgwick and Frank Citation1995), and the 1980s was a time of ‘relative innocence’ in which anxieties about intelligent machines melded with ‘heady dreams of a global village’ (Turkle Citation2004, p. 298), what do today’s pre-computational affects signal, portend, or open up? One answer to this question may lie in Bergson’s call to go beyond sensorial experience itself to explore that which conditions it. From this perspective, what is at stake now is not only the role of artificial intuition in the processual logics of environmental ‘control’ as a technology of state and corporate power, but also the environmental resources current algorithmic architectures offer for the cultivation of future more-than-human intuitions and forms of political and ethical sensibility. What kind of collective, distributed, and recursive intuition are we training in ourselves and others – and to what speculative ends? In turn, what possibilities exist to re-align and re-animate shared practices of anticipation to enable more affirmative, inclusive, and socially just processes of socio-political and cultural collaboration and transformation?

Acknowledgements

This work was supported by the Leverhulme Trust under grant RF-2020-005\8: ‘Digital Media and the Human: The Social Life of Software, AI and Algorithms’. Excellent research assistance by Sophie Rowlands and Ames Clark was fundamental to this project. Many thanks to the Editors and the anonymous reviewers for their incisive feedback on the article, which improved this piece in significant ways. My gratitude also goes to Greg Seigworth, Beckie Coleman, and Dawn Lyon for their valuable insight and encouragement throughout this project.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by The Leverhulme Research Fellowship, RF-2020-005\8: ‘Digital Media and the Human: The Social Life of Software, AI and Algorithms'.

Notes on contributors

Carolyn Pedwell

Carolyn Pedwell is Professor of Media and Cultural Studies at the University of Kent. She is the author of three monographs, Revolutionary Routines: The Habits of Social Transformation (McGill-Queens UP, 2021), Affective Relations: The Transnational Politics of Empathy (Palgrave, 2014), and Feminism, Culture and Embodied Practice (Routledge, 2010). Carolyn is also co-editor (with Gregory J. Seigworth) of The Affect Theory Reader II: Worldings, Tensions, Futures (Duke UP, 2023).

Notes

1 See account in Hodges, Citation1983.

2 See Sampson (Citationforthcoming) for an insightful discussion of important differences within media studies, STS, new materialisms, and speculative philosophy with respect to how, exactly, a collective ‘cognitive nonconscious’ is conceptualized.

3 See Suchman, Citation2011, Citation2019; Clough, Citation2018; Parisi and Dixon-Román, Citation2020; Chun, Citation2021.

4 See, for example, Hodges, Citation1983.

5 On algorithmic (infra)structures of feeling see Coleman, Citation2017; Bucher, Citation2017, Citation2018.

6 We might, of course, note the partial resonance here with Walter Benjamin’s ‘angel of history’.

References

  • Amoore, L., 2013. The politics of possibility: risk and security beyond probability. London and Durham: Duke UP.
  • Amoore, L., 2020. Cloud ethics: algorithms and the attributes of ourselves and others. London and Durham: Duke UP.
  • Andrejevic, M., 2013. Infoglut: how too much information is changing the way we think and know. London: Routledge.
  • Bergson, H., [1889]2015. Time and free will: an essay on the immediate data of consciousness. Eastford, CT: Martino Fine Books.
  • Bergson, H., [1896]1991. Matter and memory. Cambridge, MA: MIT Press.
  • Bergson, H., [1903]1912. An introduction to metaphysics. Trans. T.E. Hulme. New York: The Knickerbocker Press.
  • Berlant, L., 2011. Cruel optimism. London and Durham: Duke University Press.
  • Biddle, S. 2018. Facebook uses artificial intelligence to predict your future actions for advertisers, says confidential document, The Intercept, 13 June.
  • Bucher, T., 2017. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, communication & society, 20 (1), 30–44.
  • Bucher, T., 2018. If … then: algorithmic power and politics. Oxford: Oxford UP.
  • Butler, J., [1990]1999. Gender trouble: feminism and the subversion of identity. London and New York: Routledge.
  • Collins, H., 2010. Tacit and explicit knowledge. Chicago: University of Chicago Press.
  • Chudnoff, E., 2013. Intuition. Oxford: Oxford University Press.
  • Chun, W.H., 2016. Updating to remain the same: habitual new media. Cambridge, MA: MIT Press.
  • Chun, W.H., 2021. Discriminating data: correlation, neighbourhoods, and the new politics of recognition. Cambridge, MA: MIT Press.
  • Clough, P.T., 2018. The user unconscious: on affect, media and measure. Minneapolis: University of Minnesota Press.
  • Clough, P.T., et al., 2015. The datalogical turn. In: P. Vannini, ed. Non-representational methodologies: re-envisioning research. London and New York: Routledge, 146–164.
  • Coleman, F., et al., 2018. The ethics of coding: a report on the algorithmic condition. Brussels: European Commission.
  • Coleman, R., 2008. A method of intuition: becoming, relationality, ethics. History of the human sciences, 21 (4), 104–123.
  • Coleman, R., 2017. Theorizing the present: digital media, pre-emergence and infra-structures of feeling. Cultural studies, 32 (4), 600–622.
  • Conway, F., and Siegelman, J., 2005. Dark hero of the information age: in search of Norbert Wiener, the father of cybernetics. New York: Basic Books.
  • Dreyfus, H., and Dreyfus, S., [1985]1988. Mind over machine: the power of human intuition and expertise in the era of the computer. New York: The Free Press.
  • Englebart, D., 1963. A conceptual framework for the augmentation of man’s intellect. In: P. Howerton, and D. Weeks, eds. Vistas in information handling volume 1: the augmentation of man’s intellect by machine. Washington, D.C.: Spartan Books, 1–29.
  • Eubanks, V., 2017. Automating inequality: how high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.
  • Fazi, B., 2020. Beyond human: deep learning, explainability and representation. Theory, culture & society, advance proof, 1–23.
  • Finn, E., 2017. What algorithms want: imagination in the age of computing. Cambridge, MA: MIT Press.
  • Fussell, S. 2018. Alexa wants to know how you’re feeling today. The Atlantic, 12 October. https://www.theatlantic.com/technology/archive/2018/10/alexa-emotion-detection-ai-surveillance/572884/.
  • Galison, P., 1994. The ontology of the enemy: Norbert Wiener and the cybernetic vision. Critical inquiry, 21 (1), 228–266.
  • Galloway, A., 2006. Gaming: essays on algorithmic culture. Minneapolis: University of Minnesota Press.
  • Hallinan, B., and Striphas, T., 2016. Recommended for you: the Netflix prize and the production of algorithmic culture. New media & society, 18 (1), 117–137.
  • Hansen, M., 2015. Feed-forward: on the future of twenty-first century media. Chicago: Chicago UP.
  • Hayles, N.K., 1999. How we became posthuman: virtual bodies in cybernetics, literature and informatics. Chicago: Chicago UP.
  • Hayles, N.K., 2017. Unthought: the power of the cognitive nonconscious. Chicago: Chicago UP.
  • Hayles, N.K., and Sampson, T.D., 2018. Unthought meets the assemblage brain: a dialogue between N. Katherine Hayles and Tony D. Sampson. Capacious, 1 (2), 60–84.
  • Hodges, A., [1983]2014. Alan Turing: the Enigma. London: Vintage.
  • Johanssen, J., and Wang, X., 2021. Artificial intuition in tech journalism on AI: imagining the human subject. Human-Machine communication, 2, 173–190.
  • Johnny, O., Trovati, M., and Ray, R., 2020. Towards a computational model of artificial intuition and decision making. In: L. Barolli, H. Nishino, and H. Miwa, eds. Advances in intelligent networking and collaborative systems. INCoS 2019. Advances in intelligent systems and computing, vol 1035. Cham: Springer, 463–472.
  • Latour, B., 2005. Reassembling the social: an introduction to actor-network-theory. Oxford: Oxford UP.
  • Lundy, C., 2018. Deleuze’s Bergsonism. Edinburgh: Edinburgh UP.
  • Manning, E., 2016. The minor gesture. Durham and London: Duke UP.
  • Massumi, B., 2015. Ontopower: war, powers and the state of perception. Durham and London: Duke UP.
  • McCulloch, W., and Pitts, W., 1943. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5, 115–133.
  • Mackenzie, A., 2017. Machine learners: archaeology of a data practice. Cambridge, MA: MIT Press.
  • McLuhan, M., [1964]1994. Understanding media: the extensions of Man. Cambridge, MA: MIT Press.
  • Meyers, D.G., 2002. Intuition: its powers and perils. New Haven and London: Yale UP.
  • Parisi, L., 2013. Contagious architecture: computation, aesthetics, and space. Cambridge: MIT Press.
  • Parisi, L., 2019. Critical computation: digital automata and general artificial thinking. Theory, culture & society, 36 (2), 89–121.
  • Parisi, L., and Dixon-Román, E., 2020. Recursive colonialism and cosmo-computation. Social Text, Periscope. https://socialtextjournal.org/periscope_article/recursive-colonialism-and-cosmo-computation/.
  • Pedwell, C., 2019. Digital tendencies: intuition, algorithmic thought and new social movements. Culture, theory and critique, 60 (2), 123–138.
  • Pedwell, C., 2021a. Revolutionary routines: the habits of social transformation. Montreal: McGill-Queen’s University Press.
  • Pedwell, C., 2021b. Re-mediating the human: habits in the age of computational media. In: T. Bennett, B. Dibley, G. Hawkins, and G. Noble, eds. Assembling and governing habits. London: Routledge, 62–78.
  • Pickering, A., 2010. The cybernetic brain: sketches of another future. Chicago: Chicago UP.
  • Rheingold, H., 1985. Tools for thought: the history and future of a mind-expanding technology. Cambridge, MA: MIT Press.
  • Sampson, T.D., 2020. A sleepwalker’s guide to social media. Cambridge: Polity Press.
  • Sampson, T.D., forthcoming. Nonconscious affect: cognitive, embodied or nonbifurcated experience. In: G. Seigworth, and C. Pedwell, eds. The affect theory reader II: worldings, tensions, futures. Durham and London: Duke University Press.
  • Sedgwick, E., and Frank, A., 1995. Shame in the cybernetic fold: Reading Silvan Tomkins. Critical inquiry, 21 (2), 496–522.
  • Seigworth, G.J., 2006. Cultural studies and Gilles Deleuze. In: G. Hall, and C. Birchall, eds. New cultural studies: adventures in theory. Edinburgh: Edinburgh University Press, 107–126.
  • Serres, M., 2015. Thumbelina: the culture and technology of millennials. Trans. D. W. Smith. London and New York: Rowman and Littlefield.
  • Simon, H., and Chase, W., 1973. Skill in chess: experiments with chess-playing tasks and computer simulation of skilled performance throw light on some human perceptual and memory processes. American scientist, 61 (4), 394–403.
  • Smith, B., and Shum, H., 2018. The future computed. New York: Microsoft.
  • Striphas, T., 2015. Algorithmic culture. European journal of cultural studies, 18 (4–5), 395–412.
  • Suchman, L., 2007. Human-machine reconfigurations: plans and situated actions. 2nd ed. Cambridge: Cambridge UP.
  • Suchman, L., 2011. Subject objects. Feminist theory, 12 (2), 119–145.
  • Suchman, L., 2019. Demystifying the intelligent machine. In: T. Heffernan, ed. Cyborg futures: cross-disciplinary perspectives on artificial intelligence and robotics. Houndsmill: Palgrave Macmillan, 35–62.
  • Turing, A., [1936]1937. On computable numbers, with an application to the entscheidungsproblem. Proceedings of the London mathematical society, 42, 230–265.
  • Turing, A., 1950. Computing machinery and intelligence. Mind, new series, 59 (236), 433–460.
  • Turkle, S., [1984]2004. The second self: computers and the human spirit. Twentieth Anniversary Edition. Cambridge, MA: MIT Press.
  • Turkle, S., 1995. Life on the screen: identity in the age of the internet. New York: Simon & Schuster.
  • Wiener, N., [1948]2013. Cybernetics or, control and communication in the animal and machine. 2nd ed. Mansfield Centre, CT: Martino Publishing.
  • Wiener, N., [1950]1954. The human use of human beings: cybernetics and society. Boston, MA: Da Capo Press.
  • Wiener, N., 1960. Some moral and technical consequences of automation. Science, 131, 1355–1358.
  • Williams, R., 1977. Marxism and literature. Oxford: Oxford UP.
  • Wilson, E.A., 2010. Affect & artificial intelligence. Seattle and London: University of Washington Press.
  • Zuboff, S., 2019. The age of surveillance capitalism: the fight for a human future at the new frontier of power. London: Profile Books.