ABSTRACT
This paper examines the aurality of voice-activated AIs (VA AIs) through uncanny encounters with the devices. Glitches such as Alexa spontaneously bursting into laughter, or accidental activations that put users’ privacy at risk, have incited suspicions among users as to the motives of tech companies and the technical capabilities of their devices. Building on previous contributions showing that the feminised voices of VA AIs are strategically designed to display reassuring attributes and obscure surveillance practices, I discuss the aural moments in which VA AIs fail to reassure, shifting from convenience to threat through the experience of the uncanny. I argue that anxieties tied to VA AIs are both produced and mediated by their aurality: both their voices and their listening practices. I theorise uncanny encounters with the voice and listening capacities of VA AIs — glitches, features that seek to imitate humans, disembodied voices and disembodied listening, and invasions of privacy — as enacting perversions of care and inducing fears of impersonation and intrusion. This paper contributes to the literature on the specificity of sound in conceptions of the uncanny valley, and also seeks to enrich conceptions of vocality and listening as vectors of anxiety within the neoliberal condition.
Acknowledgments
This work is greatly indebted to George Lewis’s teachings, to Aaron Fox for having read and edited early versions of this paper, and to my brilliant colleagues for sharing their thoughts: Julia Hamilton, Velia Ivanova, Sonja Wermajer, and Russell O’Rourke. Thank you to the University of Cambridge’s Department of Earth Sciences for providing a peaceful space where I wrote most of this paper while visiting my sister.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1. See also Baird et al. (2018); Jansen (2019); Karle et al. (2018); Louwerse et al. (2005); Tinwell and Grimshaw (2009).
2. See also Laing (1991) and his discussion of early categorisations of the phonograph as “diabolical” (4).
Additional information
Notes on contributors
Audrey Amsellem
Audrey Amsellem is an ethnomusicologist and Lecturer at Columbia University. Her research interests lie at the intersection of music, law, and science and technology studies. Dr. Amsellem’s dissertation, titled “Sound and Surveillance: The Making of the Neoliberal Ear,” investigates non-creative recording practices in the neoliberal age. She is the recipient of the National Science Foundation’s Doctoral Dissertation Research Improvement Grant in Science and Technology Studies and the 2021 SSN Early Career Researcher Award, and is a current member of the Open Voice Network at the Linux Foundation.