Research Article

Refusing participation: hesitations about designing responsible patient engagement with artificial intelligence in healthcare

Flora Lysen & Sally Wyatt
Article: 2300161 | Received 17 Aug 2023, Accepted 23 Dec 2023, Published online: 29 Jan 2024

ABSTRACT

The rapidly expanding field of artificial intelligence (AI) is often accompanied by calls for parallel research on its societal implications. For research about AI in healthcare, this translates to some form of patient engagement. In this article, we question whether patient engagement and participation really contribute to responsible AI. We first summarise existing critiques of patient participation. We review the critiques of the critiques, themselves motivated by the wish to contribute, and not to leave the field solely to computer- and data scientists. In the final section, we express our doubts about the possibilities for developing positive, generative interventions, and explore ‘refusal’ and ‘hesitation’ as forms of critique and engagement. The conclusion presents a checklist for refusing patient participation, an addition to the growing repertoire of tools for patient participation and responsible innovation. The article draws on and contributes to the STS tradition of creative and speculative writing.

This article is part of the following collections:
Critique in, for, with, and of Responsible Innovation

Introduction

28 August 2022, Dear Flora,

How are we going to handle our own promises to the Dutch Research Council? Our project partners have already finalised their part of the research, and interviewed pathologists and radiologists about what they think about AI. They even managed to do that during the height of the pandemic (Drogt et al. Citation2022). One of our tasks is to find out what patients think about the use of AI in their care. We’ve certainly thought a lot about this, worked with students on various methodological experiments, and reviewed the ever-growing literature on patient attitudes to health-related AI. Alongside the investment in AI, there are also many calls to find out what people think about the use of AI not only in healthcare but in all sorts of other domains. Sometimes this is motivated by a deep-seated commitment to participation as a form of emancipation, but sometimes it is clearly at best tokenistic and at worst a form of legitimation or participation washing. We know enough STS to understand how these promissory discourses work, not only those of emerging technologies themselves but also those of critical AI researchers. I worry that this particular promise might come back to bite us. We can provide a critique, and we can point out that the number of really functioning AIs in our fields of interest is practically nil. How can we fulfil our obligations to our funders and our project partners, and at the same time do justice to our own ethical and political commitments?

This article is our response to those questions and concerns. Our experimental contributionFootnote1 explores the possibility of an alternative type of critique of notions and practices of ‘participation,’ taking recent calls for patient engagement with AI (artificial intelligence) in healthcare as our main example. Our article is fuelled by a question central to the responsible innovation (RI) literature: how to act responsibly in light of long-standing critiques of the very action you are about to undertake? In our case, how to conduct responsible research on patient participation with AI, taking into account both established and more recent critiques of patient participation, as well as critiques of the reported shortcomings and dangers of AI-supported technologies in healthcare? How should academic researchers respond responsibly to calls by policy makers and funders to initiate new forms of patient participation with AI developments?

There is an urgent need to examine the position of calls for patient engagement as part of a societal discussion on the emergence of AI. Promissory statements about an impending AI revolution are everywhere (Katzenbach and Bareis Citation2022), and the healthcare domain is currently AI’s biggest investment space (AI Index Steering Committee et al. Citation2023). But there is increasing evidence of harmful consequences of AI-supported tools in healthcare (Eubanks Citation2018; Obermeyer et al. Citation2019; Seyyed-Kalantari et al. Citation2021; “AI, Algorithmic and Automation Incidents and Controversies Repository” Citation2022), such as potentially faulty diagnoses, incorrect treatments and unjust disparities in care outcomes. As part of mitigating these risks, numerous recent academic articles, policy documents and white papers on big data, machine learning and AI mention the importance of ‘patient engagement’, ‘patient participation’ or ‘patient representation’ in realising responsible governance and effective guidance for the implementation of AI in healthcare (Academy of Medical Sciences Citation2019, 2; High-Level Expert Group on AI Citation2019, 23; World Health Organisation Citation2021, 29). However, it is unclear how this enthusiasm about patient engagement practices relates to abundant sceptical and negative evaluations of participation practices as well as recent critical views on (medical) AI. A number of scholars have shifted focus to what have been dubbed ‘left critiques’ of AI (Aradau and Bunz Citation2022) that centre harmful relations of labour and power in its production (cf. Kind Citation2020), including a turn towards ‘health data justice’ (Shaw and Sekalala Citation2023). It remains unclear to what extent such problematisations have informed public policy and the agenda of participation, where discussions on technical solutions to AI harms seem to dominate public media (Katzenbach et al. Citation2023). Moreover, though the number of academic studies on ‘patient attitudes’ towards AI by means of surveys and interviews is gradually rising (Young et al. Citation2021), investigations of responsible AI may benefit from alternative perspectives on patient engagement developed in the fields of RI and STS (science and technology studies).

To address the question of how to be responsive to longstanding critiques of participation in the context of new AI developments in healthcare, we combine a conventional academic text that synthesises literature in the field of participation and patient perspectives on AI with an email dialogue between the two of us, as scholars involved in an emerging research field. This dialogue exhibits our unfolding concerns about the normative aspects of conducting a participation project, drawing on our own experiences of a ‘participatory imperative’ as part of a four-year, funded project called RAIDIO (Responsible Artificial Intelligence in Clinical Decision Making) which focuses particularly on new AI-developments in the domain of medical imaging in radiology and pathology.Footnote2 Sally Wyatt is one of the project’s principal investigators and was involved in preparing the proposal in 2019. Flora Lysen joined as a postdoctoral researcher in August 2020.

Each of the following sections includes part of an email exchange between the two of us. The first section begins with a snippet of the original grant proposal for this project in which we promised patient engagement and provided a brief research outline. The article proceeds in three steps. First, we typify established critiques of patient participation as well as AI developments, which set the stage for a hesitation to corroborate existing repertoires and infrastructures. Second, we reflect on the critique of this critique, and on proposals to move beyond an impasse. Third, we express doubts about how to create a generative intervention, exploring notions of refusals – ‘saying no’ or ‘saying not like this’ – as an alternative form of critique. Instead of a conclusion, we present a checklist of ‘refusal as action and method’ as an alternative to the growing repertoire of tools and methodologies of engagement.

This somewhat experimental format is meant to examine and express ‘hesitation’ (forms of doubt that cause delay, stalling and friction) as a possible form of critique in the charged context of (engagement with) AI developments. We have taken inspiration from a long tradition of creative writing and speculation in STS (Haraway Citation2016; Latour Citation1996; Maguire, Watts, and Winthereik Citation2021; Woolgar et al. Citation2021). Our voices as doubting researchers are presented in order to make transparent some of the tacit epistemic structures (such as hierarchies between disciplines, conventions in doing research, subtleties of interpersonal relations) that allow for some critical positions and voices, but not others. Our contribution is meant as a provocative response to current calls for reflexive collaborative experimentation (including ‘critical participation’ experiments) as foregrounded by scholarly work in RI and related fields. The central conundrum of ‘critique’, in these discussions, pivots around the difficult balancing act of being implicated as researchers in potentially harmful societal structures and the shaping of particular problem-spaces, while at the same time being committed to change (Balmer et al. Citation2016; Martin Citation2022). A dominant approach in the fields of RI and STS has been to emphasise that scholars ‘can contribute to making things, to changing the world. In doing so, they inevitably will dirty their hands, for there is no free ride here’ (Bijker Citation2003, cited in Marris and Calvert Citation2020, 40). To negotiate critical insights while getting their hands dirty, a number of scholars have pointed to reflexivity on forms of complicity as a new possibility or positionality for critique (cf. Hollin and Williams Citation2022). Our contribution defiantly explores a different reaction, tentatively taking a route from ‘optimism to pessimism’ (Coad et al. Citation2021) when it comes to possibilities for invited critical patient participation in the field of AI today.

Longstanding and novel critiques: participation in the development of AI in healthcare

September 2019, RAIDIO funding proposal

Research objective 5 – to explore how patients, technology producers, and medical professionals perceive the role of AI in clinical decision making in the field of radiology and pathology.

Methodology: We will conduct in-depth, semi-structured interviews with medical practitioners, patients and producers of these tools [different AI applications] to map their needs, opinions and preferences … and to determine what they consider responsible development and use of AI in medical decision making … (interview patients and caregivers from the Cancer Patient Council at the *hospital, (n = 10), to be contacted via patient council) [italics added by authors]

5 October 2022, Dear Sally,

Apologies for my late response. You are right, we should not stall the patient participation deliverable any further. I had another look at the proposal, written back in 2019. We tentatively suggested interviewing ten patients about their perceptions and attitudes towards AI-applications in (image-based) medicine. In the text, we mention the importance of the patient perspective several times: patients are ‘stakeholders’ in their interaction with AI, who should be involved because, ‘(b)y including all relevant stakeholders the applications of AI in medicine can be steered into desirable directions by aligning ethical guidelines with the views and needs of those who will directly be affected by it’. Looking at our proposal today, I have become rather hesitant about our suggested approach. This hesitation is fuelled by reading literature in the field of patient participation and recent commentaries on ethics and AI research. Two critical remarks stood out to me. First, the observation by Mary Madden and Ewan Speed that mainstream patient and public involvement (PPI) efforts often constitute a form of ‘busywork’, i.e. ‘a time-consuming technocratic distraction’ that replaces a politics of social movements by discourses of managerialism (Citation2017, 2). Second, a recent book called ‘Resisting AI’ warns against ‘watered-down forms of engagement’ with AI, such as citizen juries, which superficially look like democratic deliberation but may actually obscure important decisions about AI that are outside the purview of the engagement situation (McQuillan Citation2022, 128). These statements alert us to two converging strands of critique – critique of patient participation and critique of engagement with AI. Taken together, they should make us extremely cautious when we conduct research that involves ‘asking patients about AI’. (How) should we move forward with our promised plan of patient participation?

Researching patient participation with ‘emerging’ technologies such as AI-supported medical applications is not straightforward, since many developments are presently in an (early) development phase, moving from data and computer science environments to preliminary explorations in clinical environments. Even though citizens may experience little ‘direct’ interfacing with AI-supported health tools, AI is gradually affecting patients’ healthcare as part of complex, distributed sociotechnical healthcare systems. Within these systems, AI takes on many different forms, including AI-assisted calculation of patients’ risk profiles (prognosis) and eligibility for operations (triage) as part of clinical decision-support systems; calibrating medical imaging machines and diagnostic assessment of patients’ digital scans and tissue slides (diagnosis); and generating medical reports with automated speech recognition, as well as software for appointment scheduling (management). A recent European Commission White Paper has deemed medical AI a ‘high-risk’ application, with potential threats ranging from inaccurate diagnosis and incorrect treatments (which may be unequally distributed over different groups of patients) to undesired identification of individual patients, delays in treatment and diminished choice of care (European Patients Forum Citation2020). To address these risks, calls for including patients in AI research and development are gaining momentum.

Within the expanding discourse on responsible (medical) AI, what is meant by fostering inclusion, representativeness, involvement or empowerment of laypeople or patients is often not sharply defined, and remains unreflectively normative (cf. Castro et al. Citation2016; cf. Felt and Fochler Citation2008). In several AI policy documents, for example, a ‘rhetoric of engagement’ is invoked within the context of ‘building trust’ or ‘acceptability’ for AI systems (Wilson Citation2022). Over the past decades, different patient participation formats, such as patient consultation and invited participation events, patient representation in decision-making bodies, research participation, and patients’ co-design of medical procedures and systems, have become entwined with concepts and ideals of participation.Footnote3 Participation has been proposed as a means to enhance the efficacy, sustainability and quality of medical innovations, to increase possibilities for individual patients to choose, to democratise decision making in healthcare, as well as to emancipate patients and to foster equitable and just medicine (Del Savio, Buyx, and Prainsack Citation2016; Prainsack Citation2017). However, not all of these participation aims receive equal weight. Over the past twenty years, (policy) discourses on public participation have shifted from an emphasis on democratisation by means of involvement in decision making towards participation in innovation making (Macq, Tancoigne, and Strasser Citation2020). Overall, while participation means many different things in different contexts, it is often presented as a ‘moral obligation’ (Baines et al. Citation2022, 2).

As a result of this general participatory imperative, researchers in medical AI have also started to use participation formats to create and examine patient engagement with AI. A small number of studies investigate, for example, forms of co-design by citizens or patients in current trajectories of developing AI in medicine, for instance in developing machine learning models to stratify rheumatology treatments or a decision-support system for diabetes patients (Ayobi et al. Citation2023; Shoop-Worrall et al. Citation2021; Stawarz et al. Citation2023). Predominantly, participation literature takes the form of research on people’s ‘attitudes’ (perspectives, views, opinions, perceptions, etc.) towards emerging or envisioned forms of AI in medicine. Our scoping search generated over 60 articles published between 2018 and mid-2023. The ‘general public’, ‘citizens’, ‘healthcare consumers’ and ‘users’ are amongst the groups asked about their attitudes. Less frequently, specific groups of care seekers who may experience aspects of AI-supported care are included, such as women in the process of breast cancer screening, people with an implanted defibrillator, or respondents to a suicide risk questionnaire. To date, methodologies such as focus groups and semi-structured interviews have rarely been deployed. There is little use of more elaborate scenarios or vignettes to describe AI applications to participants (exceptions include McCradden, Sarker, and Paprica Citation2020; Winter and Carusi Citation2022). Instead, there are many surveys with Likert-scale response possibilities to questions such as ‘Do you think the benefits of using machine learning to analyse medical records to help diagnose patients outweighs the risks?’ (example from Aggarwal et al. Citation2021). The outcomes of these studies cannot be straightforwardly summarised, given their diverse methods and timeframes, sometimes focusing on direct experience and sometimes on imagined futures.

Overall, systematic reviews of the AI-attitudes literature report a conditionally positive attitude by patients towards AI-applications (Wu et al. Citation2023; Young et al. Citation2021). Another consistent finding is that participants prefer to keep some form of human involvement in the diagnostic and care process (Young et al. Citation2021). Taken together, systematic reviews reveal ambivalence. Patients envision both opportunities and risks posed by AI-supported processes, such as a potential increase but also decrease of the accuracy of diagnostic processes, prospects of better but also of worse privacy protection, and the possibility that AI could improve as well as diminish the quality of patient-doctor communication (Wu et al. Citation2023; Young et al. Citation2021). Researchers have also reported a high prevalence of ‘neutral’ or agnostic responses by patients, i.e. answering ‘I don’t know’ to questions (Fritsch et al. Citation2022). Arguably related to this ambiguity, many AI-attitudes articles remain rather equivocal about how the findings of patient attitudes research should inform further research and development. This lack of clarity about the ultimate intention or rationale for participation research (cf. Delgado, Lein Kjølberg, and Wickson Citation2011) is but one example of the criticisms that STS and RI researchers have previously levelled at participatory research with other emerging technologies, such as genetically modified organisms or nanotechnologies. It is beyond the scope of this article to review the vast body of critique on participatory methods. Key points of critique include power imbalances in participation projects, which may perpetuate the exclusion and marginalisation of certain groups because of unequal access to forms of participation; the disparity between participatory ideals and actual results; and the depoliticisation of patient representation (see Chilvers and Kearnes Citation2016). In what follows, we highlight two critical observations central to the current participatory AI debate, namely the impact of issue framing effects and the ‘invited’ aspects of participation.

Recent research on AI attitudes should be scrutinised with regard to the well-known and poignant critique of the potential ‘issue framing effects’ of participatory exercises: experts’ presentations of concerns and the design of procedures by participation professionals have a potentially obfuscating and limiting impact on who can participate and what issues and concerns can be raised (Welsh and Wynne Citation2013). Voß and Amelung (Citation2016) trace the development of citizen panels and juries since the 1970s, and highlight the central irony of how anti-technocratic efforts to establish democratic control have become incorporated into forms of technoscientific governance.Footnote4 Previous research into citizen engagement with nanotechnologies has demonstrated how emerging technologies are often presented as imminent, i.e. inevitable developments for which society must prepare. Moreover, researchers often frame public issues narrowly, ‘around questions of risk and regulation’ (Delgado, Lein Kjølberg, and Wickson Citation2011). Such risk-oriented framing also results in emphasising the need to educate participating citizens about the technicalities and specificities of these risks as a prerequisite for participating in societal debates. In current AI-attitudes research, this characteristic line of reasoning is visible in recurring remarks about participants’ ignorance or misunderstanding of machine-learning and AI-related applications in healthcare and the need to strengthen patients’ literacy in digital developments (Wu et al. Citation2023; Young et al. Citation2021). Missing from this frame are, for example, discussions with patients about alternative developments and investments in healthcare, about visions of good care as part of a good life, or about the way AI relates to broader societal problems (Delgado, Lein Kjølberg, and Wickson Citation2011).

Closely linked to a critique of the framing effects of participation research are qualms regarding the ‘invited’ aspect of many involvement exercises. Inviting lay participation in scientific and technological matters is a central premise of participation research: experiential knowledge by non-professionals may help to articulate ‘alternative rationalities’ (including patients’ needs and priorities) that might otherwise lie beyond the scope of expert reasoning, and which are envisioned to contribute to improving implementation processes and governance (Bogner Citation2012, 512). However, this ‘invited’ aspect of participation formats also reinforces the already existing power differential between involved publics and healthcare professionals. Consequently, participation exercises risk becoming ‘lab experiments’ or ‘political rituals’ instigated by researchers and policy makers rather than by concerned citizens (Bogner Citation2012; Komporozos-Athanasiou et al. Citation2016). This may result in decontextualised brainstorming exercises that are isolated from patients’ priorities, as well as from political controversies involved in actual AI healthcare developments. For example, in research about ‘attitudes towards AI in health’, patients are rarely prompted to reflect on private corporate investments in AI, the potential downstream consequences of health data platforms, the environmental costs of AI infrastructures, mechanisms of representation and accountability, or inequalities and injustices exacerbated by AI-supported technologies (for exceptions, see European Patients Forum Citation2022; McCradden et al. Citation2020; McCradden, Sarker, and Paprica Citation2020).

Reflecting on these lacunae in addressing the societal harms connected to AI in the context of co-design for AI in health, Joseph Donia and James Shaw note that this format is prone to focus on the ‘empowered’ patient who is enabled to co-design a medical device or artefact. This threatens to downplay any reflection on ‘institutional arrangements, technical artefacts, infrastructures, norms and social goals’ that make up the broader political and sociotechnical system of AI development (Citation2021, 7). Because important issues are left unaddressed, invited participation runs the risk of turning into a governance tool to monitor citizens’ opinions about technological changes, with the goals of avoiding conflicts further down the road of development and optimising efficient roll-out of technologies. In this more instrumental vein, patient concerns need to be assessed and addressed, as one research team put it, to prevent a ‘“third AI Winter” in which fears of patient harm lead to a widespread rejection of healthcare AI by patients and their providers’ (Richardson et al. Citation2021, 4). Effectively, participation activities may thus depoliticise patient representation and turn resources and attention away from other important ways of mitigating and governing the risks of health AI, such as patient activism (Beresford Citation2019). Current participation practices with AI in healthcare have predominantly taken the route of ‘upstream’ engagement, i.e. ‘early’ involvement of patients and laypeople at an uncertain phase of technological development. This upstream approach inevitably favours invited participation over engagement with self-organising social actors, i.e. ‘uninvited’ citizens, simply because a technology may not yet have become an issue for a specific group of actors (Delgado, Lein Kjølberg, and Wickson Citation2011). Hence, for researchers who are critical of the current technocratic ‘machinery of participation’, including the problematic consequences of issue-framing effects and invited participation, it is difficult to stay committed to the emancipatory premises of participation (Cowan, Kühlbrandt, and Riazuddin Citation2022), though, as we discuss in the next section, alternative routes may be possible.

Critique of the critique of participation

10 November 2022, Dear Flora,

I understand your hesitation about carrying out our original patient participation plan. Indeed, we should be wary of legitimising a hyped technology and of leaving social and political issues central to AI unaddressed. As you point out, STS researchers have long been sceptical when it comes to imperatives for inclusion and participation. This situation has perhaps resulted in somewhat of a deadlock for participation initiatives, especially when some formats seem to have become an instrument for ‘participation washing,’ i.e. tokenistic box-ticking exercises to licence innovation, without any meaningful engagement with patients. Over the past years, I’ve seen a number of scholars attempt to improve the situation by drafting critical recommendations for participation practices, including bullet point lists on how (not) to do engagement activities. Such prescriptive guidance can inform future research, but I still think there is a lack of empirical research in and on actual engagement practices with AI in medicine that take these recommendations into account. How can we know what patients or citizens know about AI, never mind how they live and experience it, in their healthcare or any other aspect of their lives? I’m reminded of an important question posed by Helen Kennedy (Citation2018, 27): ‘How is datafication lived, felt and experienced by non-expert citizens before they start to develop the conditions or consider the possibility of activism in relation to data?’ Her question makes me wonder: what might we be risking, at this stage, when we don’t ask patients about AI in their care?

A decades-long build-up of critiques of participation has fuelled scepticism whenever new calls for patient engagement emerge. While professional literature by participation actors is characterised by a wealth of ‘how to’ guides, participation formats and typologies (Beresford Citation2019), critical literature in the field of participation responds to these proposals by offering cautious and prescriptive guidelines to design more critically reflexive and justice-oriented forms of involvement (e.g. Beier, Schweda, and Schicktanz Citation2019; Fiske, Prainsack, and Buyx Citation2019). Such list-based proposals for future, improved engagement efforts aim to go beyond a negative or dismissive critique of engagement and offer outlines for constructive improvements, such as considering the barriers and burdens of participation for marginalised groups as well as maintaining transparency regarding the purpose of participation in order to prevent participants from developing inaccurate expectations. Some authors take these proposals one step further by calling for new critical evaluations that both deconstruct ‘the making of participation and participatory realities’ and reconstruct (i.e. foster and create) forms of participation that are ‘more deliberately reflexive, ecological and responsible in disposition and intent’ (Chilvers and Kearnes Citation2020, 363). Researchers should not stop at critiquing participation but should generate new forms and relations: ‘critical RI should strive for a responsible engagement with participation itself’ (Nielsen and Boenink Citation2020, 2).

However, moving beyond dismissive critique of participation to a more constitutive and performative evaluation and practice is not straightforward. Irwin, Jensen, and Jones (Citation2013) observe how critical research on public engagement is pervaded with a ‘sense of impasse’: STS scholarship reinforces a familiar pattern of critical assessment of the (il)legitimacy of participation, and this critique discourages researchers from even attempting to engage in participation practices. Instead, Irwin et al. suggest that an ‘STS of critique’ should empirically study dynamics and varieties of critical encounters between sociologists and professional engagement actors to evaluate forms of involvement in the making that are not perfect but nevertheless ‘good for thinking about’ (Irwin, Jensen, and Jones Citation2013).

In the past decade, RI research, as well as related literature in STS and SHI (sociology of health and illness), has examined the possibility of a generative critique of participation work through creating while simultaneously studying the nuances and complexities of engagement in practice (Abma Citation2019; Nielsen and Boenink Citation2020; Nielsen and Langstrup Citation2018). Scholars should not stop at the ‘unmasking’ of the participatory turn as ‘inauthentic’, but should attend to multiple and sometimes contradictory effects of participation and care in action (Siffels, Sharon, and Hoffman Citation2021), including for example, the situated ‘participatory tactics’ patients develop when they are invited to participate (Nielsen and Langstrup Citation2018). RI researchers have called for a particular ‘recursive reflexivity’ to carry out ‘critical participation’ research. In other words, RI researchers should address how they

themselves are constituted in the practices of engaging RI; the ways in which they are not only teaching but also learning from the scientists, policymakers, publics, and other stakeholders engaged in a particular setting; and finally, what power, resource or other asymmetries may characterise the contexts within which they seek to intervene. (Conley and York Citation2020)

Taking this more reflexive approach to participation also means researchers need to rethink established evaluation frameworks and vocabularies of ‘impact’ and ‘effectiveness’ of participation and to address under-researched aspects. These include the possible negative consequences of involvement research as well as the specific power relations and frictions conjured by participation formats and the RI researchers who execute them (Macq, Parotte, and Delvenne Citation2021; Russell, Fudge, and Greenhalgh Citation2020).

Taken together, the above approaches suggest it is possible to move beyond the ‘impasse’ created by important critiques of participation practices by means of reflexive approaches in responsible innovation research in which critique and creation of participation are intertwined. Similar to STS approaches to ‘situated intervention’ and ‘making and doing’ (Downey and Zuiderent-Jerak Citation2016; Zuiderent-Jerak Citation2015), in RI research about patient participation, researchers should implicate themselves in order to generate critical evaluations that help to understand and improve participatory practices. Importantly, this approach carves out a new space for critique. A critical position is no longer relegated to the ‘distanced’ scholar (understood as more objective and rigorous), nor to the activist-oriented ‘engaged’ researcher (understood as more morally sensitive and contributing to positive societal change). Rather, by intervening into ‘particular, concrete circumstances’ researchers can generate new forms of knowledge that do not aim to realise a predefined critical agenda without reflecting on their own normativity (Zuiderent-Jerak Citation2015, 23). This is to avoid a ‘critique from nowhere’ (Conley and York Citation2020, S9) and to eschew the tendency of what Zuiderent-Jerak calls ‘critical STS’ to ‘first re-instantiate and then critique the usual suspects, rather than empirically unpacking, complexifying and re-situating normativities’ (Zuiderent-Jerak Citation2015, 26–27). In the next section, we propose another option: to take hesitation further, and to take seriously the calls for ‘critical refusal’.

Refusal as alternative to critique

17 March 2023, Dear Sally,

I hear you asking: why don’t we just get started? Why don’t we attempt, as we pledged in our grant proposal, to interview ten patients regarding their concerns about AI in healthcare? But even if we manage to find a suitable case study of an AI application in medical imaging that is beyond the R&D-phase (so as to investigate ‘particular, concrete circumstances’), I have some fundamental concerns. My greatest fear is that the participatory responsible engagement project we are developing might serve as a diversion or distraction from more crucial issues. Even if we demonstrate reflexivity about our positionality in the process of intervening, it doesn’t erase the fact that our project compels us to explore participation in medical AI with a set of problematic pre-established problem definitions. I’m concerned, for instance, that our findings will contribute to a narrow focus on the individual patient who needs to become more informed about digital health developments and who needs to be more ‘AI literate’. I’m also worried that asking patients about AI in healthcare might shift the focus away from inquiring with patients about other aspects of their care and how it is organised. We might end up reinforcing a form of ‘AI-exceptionalism’ – the idea that AI conjures entirely new possibilities and problems – rather than viewing it as part of a much older debate on inequalities in healthcare, and the role of technologies in reproducing these. Funding structures and priorities inevitably steer us towards prioritising questions that are useful to policy makers and innovation managers. This leaves me feeling uncomfortably complicit in the harmful hype surrounding AI, even though I recognise that a new influx of funding for AI-related issues is what has allowed me to work as a salaried academic researcher in the first place. I wonder if this feeling of unease could lead to a different type of action or scholarly stance, one that isn’t entirely paralysing, but instead generates an alternative and critical approach to engaging with patients and AI while simultaneously refusing to conform to the implicit and explicit expectations of our funders and the innovation partners involved in our project. What would happen if we just said ‘no’?

Reflexive analysing-while-doing participation, or critical participation, as proposed in the previous section, is not the only possible response to the ubiquitous critique of participation. A different vocabulary, strategy and position vis-à-vis engagement is suggested by recent sociological and anthropological scholarship in the field of data-intensive (health) research and AI, which proposes ‘refusal’ or ‘critical refusal’ as an important mode and practice of critical scholarship (Barabas Citation2022; Benjamin Citation2016; Cifor et al. Citation2019; D’Ignazio and Klein Citation2020; Garcia et al. Citation2022; Hoffman Citation2021). Refusal does not denote a singular strategy of ‘saying no’ but rather circumscribes a number of related critical approaches that turn refusal into a ‘central tenet’ or ‘analytical tool’ (Garcia et al. Citation2022). Refusal can range from ‘mundane ways of saying no’ to ‘extended practices of litigation and mobilizations’ to ban a technology, as Claudia Aradau and Tobias Blanke point out in their investigations of different scenes of people ‘reversing, rebuffing, refuting, and rejecting’ the use of facial recognition technologies (Aradau and Blanke Citation2022).

Attention to refusal originates from research into ‘ethnographic refusal’ by scholars in political anthropology and indigenous studies (McGranahan Citation2016; Simpson Citation2007; Tuck and Yang Citation2014). Studying moments of refusal by participants and informants may urge researchers to avoid solidified and simplified ideas about the sovereignty and positionality of subjects in the investigation (Simpson Citation2007, 74). Moreover, moments of refusal also push researchers, many of whom identify as feminists, to reconsider their own positionalities, as well as possibilities of – and responsibilities in – refusing. As such, the ‘tenet’ of refusal not only directs scholarly attention to scenes of informants’ dismissal, but can also be exercised as a deliberate act by researchers. Ruha Benjamin (Citation2016) gives the example of her own ‘epistemological refusal’ to represent her informants as ‘problem people’ who are ‘distrustful’ towards authorities and researchers and her decision to shift the focus of her research instead to initiatives and institutions perceived as trustworthy. Taking this route of epistemic refusal in AI-related research with and about patient informants means that researchers should refrain, for example, from reifying simplified representations of ‘AI-anxious’ patients who fear a terminator-like creature will take over healthcare (see Garvey Citation2018 on this deceptive ‘terminator syndrome’). Additionally, in some situations, researchers can and must exercise a form of what Benjamin calls ‘second hand refusal’ against harmful practices in place of marginalised informants who cannot refuse. However, as Benjamin points out, this possibility is conditional and relates to privilege: ‘refusing the terms set by those who exercise authority in a given context is only the first (and at times privileged) gesture in a longer chain of agency that not everyone can access’ (Benjamin Citation2016, 5). Anna Lauren Hoffman (Citation2021, 3) reminds us that ‘refusals should open up new paths, not retread those paths that ultimately risk exposing those we seek to study.’ This is a timely warning of the dangers of including vulnerable others in data ethics projects.

Within the domain of critical data and AI scholarship, the tenet of refusal has enabled researchers to focus on moments when actors reject, withdraw from, or fail to consent to current systems of dataveillance and harmful data practices in ways that not only reveal asymmetrical power relations but also generate new identities and affiliations. Seen in this way, refusals, as anthropologist Carole McGranahan (Citation2016) emphasises, can be generative, social and affiliative: by refusing, a reconfiguration of existing relations or even a new community of ‘refusers’ can emerge. In some examples of refusal-oriented projects, researchers have taken an active and collaborative role in ‘saying no’ to instances of being subjected to data-centred regimes, working together with citizens and activists.Footnote5 In critical data studies projects, ‘refusal’ may function as a generative starting point, not the endpoint, of a process of building something new (a group, a vision, an infrastructure).Footnote6

Our own hesitation in conducting participation research, as initially sketched in the grant proposal, is certainly nowhere near doing the actual hard work of building refusal. We have challenged, on paper and in dialogue, some ways of going along with a participatory imperative which, as we have pointed out, is particularly important in current times of increased awareness about potential harmful consequences of AI. We have not (yetFootnote7) studied patients refusing forms of medical AI in ‘particular, concrete circumstances,’ nor have we formed alliances with patients to engage in the possibility of refusing or resisting. One might say that mobilising refusal in concrete situations is antithetical to the ‘upstream’, i.e. early, anticipatory and future-oriented, dimension of engagement practices with AI applications that do not (yet) exist. In most cases, AI-supported applications in prognosis, triage, diagnosis and health management have yet to become a ‘matter of concern’ for uninvited groups of stakeholders. However, we should be wary of this typification of ‘upstream’ and ‘future-oriented’ in the case of AI in health care. By singling out envisioned AI-applications as the main object of engagement practices, we run the risk of amplifying ‘AI exceptionalism’, and of severing present issues from earlier and connected histories of ‘refusal’ by patients with big-data related developments in healthcare.

Following a tenet of refusal in engagement with AI in healthcare should encourage researchers to connect present-day reactions to AI with previous actors and actions in challenging computational, technological, and data-driven developments in medicine. ‘Data activism’ is relatively rare in the context of healthcare (Hoeyer and Langstrup Citation2021); nonetheless, there is a long history of patient-specific involvement in medicine as it connects to issues of (big) data-related governance, transparency, privacy and justice. Examples include engagement with issues of data sovereignty, data protection and ‘opting-out’ of medical data- and sample-sharing, including electronic health records (Benjamin Citation2016; Hoeyer and Langstrup Citation2021; Panofsky Citation2011; Vezyridis and Timmons Citation2019); (rare) disease advocacy; standards, statistical analysis and algorithms, such as computations for donor allocation (Braun et al. Citation2021; Epstein Citation2007; Robinson Citation2022; Wehling Citation2011); and data infrastructures and risk assessment calculations (Klawiter Citation2008; Lehtiniemi and Ruckenstein Citation2019). These existing strands of refusal are integrally connected to new developments in AI and can help scholars to realise that examining ‘patient attitudes’ towards AI should not be regarded as a flashy new research subject but as an issue that has longer roots in participatory structures and patient activism.

Following ‘refusing’ actors helps to guide researchers’ focus (including our own) to patient communities that have long been affected by data-driven surveillance tools and are particularly vulnerable to new enthusiasm about AI-powered technologies (cf. Birhane Citation2021). For example, in 2023, at the same time that we were hesitating about interviewing patients, a number of Dutch civil society organisations and patient organisations in mental healthcare sued the Dutch Healthcare Authority (Nederlandse Zorgautoriteit) for unlawful data gathering and sharing, which policy makers argue is necessary to develop algorithm-steered patient profiling according to care demands in order to improve allocation of healthcare funding (“Vertrouwen in de GGZ” Citation2023). Shifting scholarly attention from ‘patient attitudes research’ to such cases of refusing to go along with a big health data imperative could be an important move for RI researchers who want to take seriously critiques of both (patient) participation and AI, and find ways of moving forward.

Refusal may not always be an option, and it may sometimes have negative consequences for both researchers and, in our case, patients. These range from damaged relationships with research collaborators and funders to, more importantly, the indirect silencing of under-represented voices. Especially for RI researchers and social scientists and humanities scholars more broadly, our role is often to find ways to give voice to those who are rarely invited to the table when decisions are being made about the funding and use of technoscience in healthcare, or elsewhere. Patient engagement and participation can be productive, and we highlighted some possible directions in section 3. But, as the previous paragraph suggests, we also need to consider how refusal and hesitation may redirect attention to other stakeholders and issues. The next and final section suggests how this could be further developed.

Concluding with a list

30 July 2023, Dear Flora,

So far we’ve been circling around the issue of how to carry out our initial promise to ask patients about their perspectives on AI in the field of medical imaging. I think we’ve managed to demonstrate that we are aware of the debates and critiques of the ‘participation imperative’, and of the critiques of the critiques. We understand the possibility of being reflexive when trying to deal with the voices of the less powerful in understanding the implications of emerging technologies, AI in our case. Our hesitation, as we have positioned it, can be viewed as a form of refusal (as researchers) to carry out conventional attitudinal research, whether with surveys, interviews or focus groups. But we do want to use ‘refusal’ and ‘hesitations’ as starting points to open up alternative routes for engaging with patients and patient groups.

What next, and how do we conclude? Even though we’ve seen (and critiqued) the endless checklists for ensuring ‘ethical AI’ and all its variants, what if we end with something similar? We could take inspiration from Jennifer Pierre and her colleagues who underline the importance of ‘“organizing ourselves first” before we think about intervening in community-based participatory design and as we engage in these processes’ (Pierre et al. Citation2021, 9). Let’s see if we can end by proposing a tentative list for turning a ‘tenet of refusal’ into action. We really don’t want to simply ‘refuse’, because I think both of us remain committed to finding ways of engaging with those affected. We need to come up with methods, not least for engaging with those groups not usually invited to participation events, but who are nonetheless very well informed and may have different visions of how AI could and should be used in healthcare.

Refusal as action and method

  1. Refuse to participate in AI hype and AI exceptionalism. In other words, do not present AI problems as new problems. Acknowledge already existing forms of patient involvement with automated systems and computation in medicine, and actively shift the research agenda towards them.

  2. Refuse to prioritise forms of ‘invited participation’ with AI. Prioritise instead engagement with stakeholders embedded in enduring controversies and issues.

  3. Refuse to centre AI in patient engagement practices. Shift attention to patients’ engagement with care and healthcare practices, including issues of emancipation and representation regarding new computational infrastructures and practices.

  4. Refuse to begin any form of patient participation research without first reflecting on the research rationale. Ask to what ends forms of (invited) participation are examined and/or created.

  5. Refuse to neglect potentially harmful consequences of doing participation research. Consider, for example, what it means to inquire about patients’ opinions on the reliability of certain risk calculation procedures while those patients are still in treatment and potentially vulnerable.

  6. Refuse to work on engagement activities that fail to include societal, economic and environmental issues linked to AI when inquiring about new socio-technological developments with participants. Invest in innovative approaches to present more complex scenarios and descriptions of the implications of AI-supported tools.

  7. Refuse to use clichéd visual and textual depictions of AI that do not reflect the conditions of its production and its harms to the people and planet that produce it (Dihal and Duarte Citation2023).

  8. Refuse to produce and reify simplistic representations of patients as being ‘anxious’, ‘ignorant’ or ‘in awe’ of AI. Dig deeper in order to grapple with patients’ reactions to AI when responses are ambiguous or equivocal (e.g. ‘I don’t know’).

  9. Refuse to regard ‘saying no’ to particular strands of participation research as unhelpful naysaying. Recognise that refusal and hesitation can also be generative forms of critique, method and action.

Acknowledgements

We are grateful to Mareike Smolka, Tess Doezema and Lucien van Schomberg, the guest editors for this special issue, for their patience in waiting for our first draft, and for their helpful feedback. We would also like to thank the anonymous reviewers and Karin Jongsma for their constructive suggestions. Our project partners, Annelien Bredenoord, Jojanneke Drogt, Karin Jongsma, Megan Milota and Shoko Vos, have provided valuable support and advice throughout the project (funded by the Dutch Research Council, grant no. 406.DI.19.089). All mistakes, errors and clumsy formulations remain our responsibility.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Dutch Research Council [Nederlandse Organisatie voor Wetenschappelijk Onderzoek] under grant number 406.DI.19.089.

Notes

1 As we explain at the end of the Introduction, we introduce each section with an email between the two authors. One reviewer, though appreciating the form, found it hard to believe that our email exchanges are so elaborate. We have edited our emails to improve readability (and included full references), but our emails are indeed quite elaborate, partly because we started this project during the COVID-19 pandemic when email was a major form of communication. We also struggled to find a form for this article, and exchanging elaborate emails was a way for us to attempt to find a solution.

2 The RAIDIO project is funded by Dutch Research Council, 2020–2024, grant 406.DI.19.089. See project website: https://raidioproject.nl.

3 In a recent systematic review on patient and public involvement in digital health, Baines et al. (Citation2022) found 133 terms to describe forms of involvement, with ‘user-centred design’, ‘participatory design’ and ‘codesign’ most commonly used. Miller et al. (Citation2018) also note shifting and context-dependent connotations of terms such as ‘involvement’ and ‘engagement’.

4 Judy Wajcman (Citation2004, 142) also pointed to the limits of involving citizens in allegedly more democratic forms of governance, from the perspective of technofeminism. She pointed to the lack of symmetry: citizens are expected to understand the science, whereas scientists are absolved from making explicit their values, norms and practices. She also pointed to the limited range of options presented to citizens, severely constraining their choices.

5 A well-known example is the US-based project Our Data Bodies (2016-present), in which researchers and activists created data literacy tools together with residents of Charlotte, Detroit and Los Angeles, among other cities, to challenge extractive data practices in neighbourhoods and foster data justice (Saba et al. Citation2017; “Our Data Bodies” Citation2018).

6 According to Maya Ganesh and Emanuel Moss, in contrast to refusal, the notion of ‘resistance’ describes a political stance that can perhaps too easily be appropriated by practitioners within Big Tech circles into a technical or procedural fix, i.e. a ‘resistance-architected-into-design’ (Citation2022, 95). ‘Refusal’, in their view, goes beyond proposing a fix to the system and presents a clear ‘no’ to the possibility of repairing current infrastructures of datafication. However, discussions in critical data studies show how resistance and refusal are conceptually linked.

7 In the future, we will consider working together with Dutch civil society and patient organisations currently engaged in debate with the Dutch government about the extent and legality of data gathering and profiling.

References

  • Abma, Tineke A. 2019. “Dialogue and Deliberation: New Approaches to Including Patients in Setting Health and Healthcare Research Agendas.” Action Research 17 (4): 429–450. https://doi.org/10.1177/1476750318757850.
  • Academy of Medical Sciences. 2019. Artificial Intelligence and Health. Summary Report of a Roundtable Held on 16 January 2019. London: Academy of Medical Sciences. https://acmedsci.ac.uk/file-download/77652269.
  • Aggarwal, Ravi, Soma Farag, Guy Martin, Hutan Ashrafian, and Ara Darzi. 2021. “Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey.” Journal of Medical Internet Research 23 (8): e26162. https://doi.org/10.2196/26162.
  • “AI, Algorithmic and Automation Incidents and Controversies Repository.” 2022. AI, Algorithmic and Automation Incidents and Controversies. https://www.aiaaic.org/.
  • AI Index Steering Committee, Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, et al. 2023. The AI Index 2023. Stanford: Institute for Human-Centered AI, Stanford University. https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf.
  • Aradau, Claudia, and Tobias Blanke. 2022. Algorithmic Reason: The New Government of Self and Other. New York: Oxford University Press.
  • Aradau, Claudia, and Mercedes Bunz. 2022. “Dismantling the Apparatus of Domination?: Left Critiques of AI.” Radical Philosophy 212:10–18.
  • Ayobi, Amid, Jacob Hughes, Christopher J Duckworth, Jakub J Dylag, Sam James, Paul Marshall, Matthew Guy, et al. 2023. “Computational Notebooks as Co-design Tools: Engaging Young Adults Living with Diabetes, Family Carers, and Clinicians with Machine Learning Models.” In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ‘23, 1–20. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3544548.3581424.
  • Baines, Rebecca, Hannah Bradwell, Katie Edwards, Sebastian Stevens, Samantha Prime, John Tredinnick-Rowe, Miles Sibley, and Arunangsu Chatterjee. 2022. “Meaningful Patient and Public Involvement in Digital Health Innovation, Implementation and Evaluation: A Systematic Review.” Health Expectations 25:1–14. https://doi.org/10.1111/hex.13506.
  • Balmer, Andrew S., Jane Calvert, Claire Marris, Susan Molyneux-Hodgson, Emma Frow, Matthew Kearnes, Kate Bulpin, Pablo Schyfter, Adrian Mackenzie, and Paul Martin. 2016. “Five Rules of Thumb for Post-ELSI Interdisciplinary Collaborations.” Journal of Responsible Innovation 3 (1): 73–80. https://doi.org/10.1080/23299460.2016.1177867.
  • Barabas, Chelsea. 2022. Refusal in Data Ethics: Re-imagining the Code Beneath the Code of Computation in the Carceral State. SSRN Scholarly Paper. Rochester, NY. https://doi.org/10.2139/ssrn.4094977.
  • Beier, Katharina, Mark Schweda, and Silke Schicktanz. 2019. “Taking Patient Involvement Seriously: A Critical Ethical Analysis of Participatory Approaches in Data-Intensive Medical Research.” BMC Medical Informatics and Decision Making 19 (1): 90. https://doi.org/10.1186/s12911-019-0799-7.
  • Benjamin, Ruha. 2016. “Informed Refusal: Toward a Justice-Based Bioethics.” Science, Technology, & Human Values 41 (6): 967–990. https://doi.org/10.1177/0162243916656059.
  • Beresford, Peter. 2019. “Public Participation in Health and Social Care: Exploring the Co-Production of Knowledge.” Frontiers in Sociology 3. https://doi.org/10.3389/fsoc.2018.00041.
  • Bijker, Wiebe. 2003. “The Need for Public Intellectuals: A Space for STS.” Science, Technology, & Human Values 28 (4): 443–450.
  • Birhane, Abeba. 2021. “Algorithmic Injustice: A Relational Ethics Approach.” Patterns 2 (2): 100205. https://doi.org/10.1016/j.patter.2021.100205.
  • Bogner, Alexander. 2012. “The Paradox of Participation Experiments.” Science, Technology, & Human Values 37 (5): 506–527. https://doi.org/10.1177/0162243911430398.
  • Braun, Lundy, Anna Wentz, Reuben Baker, Ellen Richardson, and Jennifer Tsai. 2021. “Racialized Algorithms for Kidney Function: Erasing Social Experience.” Social Science & Medicine 268 (January): 113548. https://doi.org/10.1016/j.socscimed.2020.113548.
  • Castro, Eva Marie, Tine Van Regenmortel, Kris Vanhaecht, Walter Sermeus, and Ann Van Hecke. 2016. “Patient Empowerment, Patient Participation and Patient-centeredness in Hospital Care: A Concept Analysis Based on a Literature Review.” Patient Education and Counseling 99 (12): 1923–1939. https://doi.org/10.1016/j.pec.2016.07.026.
  • Chilvers, Jason, and Matthew Kearnes. 2016. Remaking Participation: Science, Environment and Emergent Publics. London & New York: Routledge.
  • Chilvers, Jason, and Matthew Kearnes. 2020. “Remaking Participation in Science and Democracy.” Science, Technology, & Human Values 45 (3): 347–380. https://doi.org/10.1177/0162243919850885.
  • Cifor, M., P. Garcia, T. L. Cowan, J. Rault, T. Sutherland, A. Chan, J. Rode, A. L. Hoffmann, N. Salehi, and Lisa Nakamura. 2019. “Feminist Data Manifest-No.” https://www.manifestno.com/.
  • Coad, Alex, Paul Nightingale, Jack Stilgoe, and Antonio Vezzani. 2021. “Editorial: The Dark Side of Innovation.” Industry and Innovation 28 (1): 102–112. https://doi.org/10.1080/13662716.2020.1818555.
  • Conley, Shannon N., and Emily York. 2020. “Public Engagement in Contested Political Contexts: Reflections on the Role of Recursive Reflexivity in Responsible Innovation.” Journal of Responsible Innovation 7 (sup1): 1–12. https://doi.org/10.1080/23299460.2020.1848335.
  • Cowan, Hannah, Charlotte Kühlbrandt, and Hana Riazuddin. 2022. “Reordering the Machinery of Participation with Young People.” Sociology of Health & Illness 44 (S1): 90–105. https://doi.org/10.1111/1467-9566.13426.
  • Del Savio, Lorenzo, Alena Buyx, and Barbara Prainsack. 2016. Opening the Black Box of Participation in Medicine and Healthcare. Vienna: Institute of Technology Assessment.
  • Delgado, Ana, Kamilla Lein Kjølberg, and Fern Wickson. 2011. “Public Engagement Coming of Age: From Theory to Practice in STS Encounters with Nanotechnology.” Public Understanding of Science 20 (6): 826–845. https://doi.org/10.1177/0963662510363054.
  • D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Cambridge: The MIT Press.
  • Dihal, Kanta, and Tania Duarte. 2023. Better Images of AI: A Guide for Users and Creators. Cambridge and London: The Leverhulme Centre for the Future of Intelligence and We and AI.
  • Donia, Joseph, and James A. Shaw. 2021. “Co-design and Ethical Artificial Intelligence for Health: An Agenda for Critical Research and Practice.” Big Data & Society 8 (2): 20539517211065248. https://doi.org/10.1177/20539517211065248.
  • Downey, Gary Lee, and Teun Zuiderent-Jerak. 2016. “Making and Doing: Engagement and Reflexive Learning in STS.” In The Handbook of Science and Technology Studies, edited by Ulrike Felt, Rayvon Fouché, Clark A. Miller, and Laurel Smith-Doerr, 223–251. Cambridge, MA: The MIT Press.
  • Drogt, Jojanneke, Megan Milota, Shoko Vos, Annelien Bredenoord, and Karin Jongsma. 2022. “Integrating Artificial Intelligence in Pathology: A Qualitative Interview Study of Users’ Experiences and Expectations.” Modern Pathology 35 (11): 1540–1550. https://doi.org/10.1038/s41379-022-01123-6.
  • Epstein, Steven. 2007. “Patient Groups and Health Movements.” In The Handbook of Science and Technology Studies, Third Edition, edited by Edward J. Hackett, Olga Amsterdamska, Michael Lynch, and Judy Wajcman, 499–539. Cambridge, MA: The MIT Press.
  • Eubanks, Virginia. 2018. Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
  • European Patients Forum. 2020. “European Patients Forum’s Response & Accompanying Statement. Public Consultation on the White Paper on Artificial Intelligence.” https://www.eu-patient.eu/globalassets/documents/1.-ai-white-paper_consultation-response_epf_statement-final.pdf.
  • European Patients Forum. 2022. “Artificial Intelligence in Healthcare from a Patient’s Perspective.” https://www.eu-patient.eu/globalassets/documents/1.-ai-white-paper_consultation-response_epf_statement-final.pdf.
  • Felt, Ulrike, and Maximilian Fochler. 2008. “The Bottom-up Meanings of the Concept of Public Participation in Science and Technology.” Science and Public Policy 35 (7): 489–499. https://doi.org/10.3152/030234208X329086.
  • Fiske, Amelia, Barbara Prainsack, and Alena Buyx. 2019. “Meeting the Needs of Underserved Populations: Setting the Agenda for More Inclusive Citizen Science of Medicine.” Journal of Medical Ethics 45 (9): 617–622. https://doi.org/10.1136/medethics-2018-105253.
  • Fritsch, Sebastian J., Andrea Blankenheim, Alina Wahl, Petra Hetfeld, Oliver Maassen, Saskia Deffge, Julian Kunze, et al. 2022. “Attitudes and Perception of Artificial Intelligence in Healthcare: A Cross-sectional Survey among Patients.” Digital Health 8 (August): 20552076221116772. https://doi.org/10.1177/20552076221116772.
  • Ganesh, Maya Indira, and Emanuel Moss. 2022. “Resistance and Refusal to Algorithmic Harms: Varieties of ‘Knowledge Projects’.” Media International Australia 183 (1): 90–106. https://doi.org/10.1177/1329878X221076288.
  • Garcia, Patricia, Tonia Sutherland, Niloufar Salehi, Marika Cifor, and Anubha Singh. 2022. “No! Re-imagining Data Practices Through the Lens of Critical Refusal.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW2): 315:1–315:20. https://doi.org/10.1145/3557997.
  • Garvey, Colin. 2018. “Testing the ‘Terminator Syndrome’: Sentiment of AI News Coverage and Perceptions of AI Risk.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3310907.
  • Haraway, Donna. 2016. Staying with the Trouble: Making Kin in the Chthulucene. Durham, NC: Duke University Press.
  • High-Level Expert Group on AI. 2019. Ethics Guidelines for Trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
  • Hoeyer, Klaus, and Henriette Langstrup. 2021. “Datafying the Patient Voice: The Making of Pervasive Infrastructures as Processes of Promise, Ruination, and Repair.” In Healthcare Activism: Markets, Morals, and the Collective Good, edited by Susi Geiger. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780198865223.003.0005.
  • Hoffman, Anna Lauren. 2021. “Even When You Are a Solution You Are a Problem: An Uncomfortable Reflection on Feminist Data Ethics.” Global Perspectives 2 (1). https://doi.org/10.1525/gp.2021.21335.
  • Hollin, Gregory, and Ros Williams. 2022. “Complicity: Methodologies of Power, Politics and the Ethics of Knowledge Production.” Sociology of Health & Illness 44 (S1): 1–21. https://doi.org/10.1111/1467-9566.13575.
  • Irwin, Alan, Torben Elgaard Jensen, and Kevin E. Jones. 2013. “The Good, the Bad and the Perfect: Criticizing Engagement Practice.” Social Studies of Science 43 (1): 118–135. https://doi.org/10.1177/0306312712462461.
  • Katzenbach, Christian, Donato Ricci, Noortje Marres, Michael Castelle, Jonathan Roberge, and Fenwick McKelvey. 2023. Shifting AI Controversies. Statement and Call for Contributions to the Final Conference of the International Project “Shaping AI”. https://shapingai.org.
  • Katzenbach, Christian, and Jascha Bareis. 2022. “Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics.” Science, Technology, and Human Values 47 (5): 855–881. https://doi.org/10.1177/01622439211030007.
  • Kennedy, H. 2018. “Living with Data: Aligning Data Studies and Data Activism Through a Focus on Everyday Experiences of Datafication.” Krisis: Journal for Contemporary Philosophy 2018 (1): 18–30.
  • Kind, Carly. 2020. “The Term ‘Ethical AI’ is Finally Starting to Mean Something.” VentureBeat (blog), August 23, 2020. https://venturebeat.com/2020/08/23/the-term-ethical-ai-is-finally-starting-to-mean-something/.
  • Klawiter, Maren. 2008. The Biopolitics of Breast Cancer: Changing Cultures of Disease and Activism. Minneapolis: University of Minnesota Press.
  • Komporozos-Athanasiou, Aris, Nina Fudge, Mary Adams, and Christopher McKevitt. 2016. “Citizen Participation as Political Ritual: Towards a Sociological Theorizing of ‘Health Citizenship.’” Sociology, August. https://doi.org/10.1177/0038038516664683.
  • Latour, Bruno. 1996. Aramis, or the Love of Technology. Cambridge, MA: Harvard University Press.
  • Lehtiniemi, Tuukka, and Minna Ruckenstein. 2019. “The Social Imaginaries of Data Activism.” Big Data & Society 6 (1): 2053951718821146. https://doi.org/10.1177/2053951718821146.
  • Macq, Hadrien, Céline Parotte, and Pierre Delvenne. 2021. “Exploring Frictions of Participatory Innovation between Sites and Scales.” Science as Culture 30 (2): 161–171. https://doi.org/10.1080/09505431.2021.1910230.
  • Macq, Hadrien, Élise Tancoigne, and Bruno J. Strasser. 2020. “From Deliberation to Production: Public Participation in Science and Technology Policies of the European Commission (1998–2019).” Minerva 58 (4): 489–512. https://doi.org/10.1007/s11024-020-09405-6.
  • Madden, Mary, and Ewen Speed. 2017. “Beware Zombies and Unicorns: Toward Critical Patient and Public Involvement in Health Research in a Neoliberal Context.” Frontiers in Sociology 2. https://doi.org/10.3389/fsoc.2017.00007.
  • Maguire, James, Laura Watts, and Britt Ross Winthereik. 2021. Energy Worlds. Manchester: Mattering Press.
  • Marris, Claire, and Jane Calvert. 2020. “Science and Technology Studies in Policy: The UK Synthetic Biology Roadmap.” Science, Technology, & Human Values 45 (1): 34–61. https://doi.org/10.1177/0162243919828107.
  • Martin, Paul A. 2022. “The Challenge of Institutionalised Complicity: Researching the Pharmaceutical Industry in the Era of Impact and Engagement.” Sociology of Health & Illness 44 (S1): 158–178. https://doi.org/10.1111/1467-9566.13536.
  • McCradden, Melissa D., Ami Baba, Ashirbani Saha, Sidra Ahmad, Kanwar Boparai, Pantea Fadaiefard, and Michael D. Cusimano. 2020. “Ethical Concerns around Use of Artificial Intelligence in Health Care Research from the Perspective of Patients with Meningioma, Caregivers and Health Care Providers: A Qualitative Study.” Canadian Medical Association Open Access Journal 8 (1): E90–E95. https://doi.org/10.9778/cmajo.20190151.
  • McCradden, Melissa D., Tasmie Sarker, and P. Alison Paprica. 2020. “Conditionally Positive: A Qualitative Study of Public Perceptions about Using Health Data for Artificial Intelligence Research.” medRxiv (May): 2020.04.25.20079814. https://doi.org/10.1101/2020.04.25.20079814.
  • McGranahan, Carole. 2016. “Theorizing Refusal: An Introduction.” Cultural Anthropology 31 (3): 319–325. https://doi.org/10.14506/ca31.3.01.
  • McQuillan, Dan. 2022. Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. Bristol: Bristol University Press.
  • Miller, Fiona Alice, Sarah J. Patton, Mark Dobrow, and Whitney Berta. 2018. “Public Involvement in Health Research Systems: A Governance Framework.” Health Research Policy and Systems 16 (1): 79. https://doi.org/10.1186/s12961-018-0352-7.
  • Nielsen, Karen Dam, and Marianne Boenink. 2020. “Subtle Voices, Distant Futures: A Critical Look at Conditions for Patient Involvement in Alzheimer’s Biomarker Research and beyond.” Journal of Responsible Innovation 7 (2): 170–192. https://doi.org/10.1080/23299460.2019.1676687.
  • Nielsen, Karen Dam, and Henriette Langstrup. 2018. “Tactics of Material Participation: How Patients Shape their Engagement through e-Health.” Social Studies of Science 48 (2): 259–282. https://doi.org/10.1177/0306312718769156.
  • Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366 (6464): 447–453. https://doi.org/10.1126/science.aax2342.
  • “Our Data Bodies.” 2018. Our Data Bodies. https://www.odbproject.org.
  • Panofsky, Aaron. 2011. “Generating Sociability to Drive Science: Patient Advocacy Organizations and Genetics Research.” Social Studies of Science 41 (1): 31–57. https://doi.org/10.1177/0306312710385852.
  • Pierre, Jennifer, Roderic Crooks, Morgan Currie, Britt Paris, and Irene Pasquetto. 2021. “Getting Ourselves Together: Data-centered Participatory Design Research & Epistemic Burden.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, 1–11. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3411764.3445103.
  • Prainsack, Barbara. 2017. Personalized Medicine: Empowered Patients in the 21st Century? 1st ed. New York: NYU Press.
  • Richardson, Jordan P., Cambray Smith, Susan Curtis, Sara Watson, Xuan Zhu, Barbara Barry, and Richard R. Sharp. 2021. “Patient Apprehensions about the Use of Artificial Intelligence in Healthcare.” Npj Digital Medicine 4 (1): 1–6. https://doi.org/10.1038/s41746-020-00373-5.
  • Robinson, David G. 2022. Voices in the Code: A Story about People, Their Values, and the Algorithm They Made. New York: Russell Sage Foundation.
  • Russell, Jill, Nina Fudge, and Trish Greenhalgh. 2020. “The Impact of Public Involvement in Health Research: What are We Measuring? Why are We Measuring it? Should We Stop Measuring it?” Research Involvement and Engagement 6 (1): 63. https://doi.org/10.1186/s40900-020-00239-w.
  • Saba, Mariella, Tamika Lewis, Tawana Petty, Seeta Peña Gangadharan, and Virginia Eubanks. 2017. “From Paranoia to Power. Our Data Bodies Project 2016 Report.” Our Data Bodies. https://www.odbproject.org/wp-content/uploads/2016/12/ODB-Community-Report-7-24.pdf.
  • Seyyed-Kalantari, Laleh, Haoran Zhang, Matthew B. A. McDermott, Irene Y. Chen, and Marzyeh Ghassemi. 2021. “Underdiagnosis Bias of Artificial Intelligence Algorithms Applied to Chest Radiographs in Under-served Patient Populations.” Nature Medicine 27 (12): 2176–2182. https://doi.org/10.1038/s41591-021-01595-0.
  • Shaw, James, and Sharifah Sekalala. 2023. “Health Data Justice: Building New Norms for Health Data Governance.” Npj Digital Medicine 6 (1): 1–4. https://doi.org/10.1038/s41746-022-00734-2.
  • Shoop-Worrall, Stephanie J. W., Katherine Cresswell, Imogen Bolger, Beth Dillon, Kimme L. Hyrich, and Nophar Geifman. 2021. “Nothing about Us without Us: Involving Patient Collaborators for Machine Learning Applications in Rheumatology.” Annals of the Rheumatic Diseases 80 (12): 1505–1510. https://doi.org/10.1136/annrheumdis-2021-220454.
  • Siffels, Lotje E., Tamar Sharon, and Andrew S. Hoffman. 2021. “The Participatory Turn in Health and Medicine: The Rise of the Civic and the Need to ‘Give Back’ in Data-intensive Medical Research.” Humanities and Social Sciences Communications 8 (1): 1–10. https://doi.org/10.1057/s41599-020-00684-8.
  • Simpson, Audra. 2007. “On Ethnographic Refusal: Indigeneity, ‘Voice’ and Colonial Citizenship.” Junctures: The Journal for Thematic Dialogue 9. https://junctures.org/junctures/index.php/junctures/article/view/66.
  • Stawarz, Katarzyna, Dmitri Katz, Amid Ayobi, Paul Marshall, Taku Yamagata, Raul Santos-Rodriguez, Peter Flach, and Aisling Ann O’Kane. 2023. “Co-designing Opportunities for Human-centred Machine Learning in Supporting Type 1 Diabetes Decision-making.” International Journal of Human-Computer Studies 173 (May): 103003. https://doi.org/10.1016/j.ijhcs.2023.103003.
  • Tuck, Eve, and K. Wayne Yang. 2014. “Unbecoming Claims: Pedagogies of Refusal in Qualitative Research.” Qualitative Inquiry 20 (6): 811–818. https://doi.org/10.1177/1077800414530265.
  • “Vertrouwen in de GGZ.” 2023. https://vertrouwenindeggz.nl/over-vertrouwen-in-de-ggz.
  • Vezyridis, Paraskevas, and Stephen Timmons. 2019. “Resisting Big Data Exploitations in Public Healthcare: Free Riding or Distributive Justice?” Sociology of Health & Illness 41 (8): 1585–1599. https://doi.org/10.1111/1467-9566.12969.
  • Voß, Jan-Peter, and Nina Amelung. 2016. “Innovating Public Participation Methods. Technoscientization and Reflexive Engagement.” Social Studies of Science 46 (5): 749–772. https://doi.org/10.1177/0306312716641350.
  • Wajcman, Judy. 2004. TechnoFeminism. Cambridge, UK: Polity.
  • Wehling, Peter. 2011. “The ‘Technoscientization’ of Medicine and its Limits: Technoscientific Identities, Biosocialities, and Rare Disease Patient Organizations.” Poiesis & Praxis 8 (2–3): 67–82. https://doi.org/10.1007/s10202-011-0100-3.
  • Welsh, Ian, and Brian Wynne. 2013. “Science, Scientism and Imaginaries of Publics in the UK: Passive Objects, Incipient Threats.” Science as Culture 22 (4): 540–566. https://doi.org/10.1080/14636778.2013.764072.
  • Wilson, Christopher. 2022. “Public Engagement and AI: A Values Analysis of National Strategies.” Government Information Quarterly 39 (1): 101652. https://doi.org/10.1016/j.giq.2021.101652.
  • Winter, Peter, and Annamaria Carusi. 2022. “Professional Expectations and Patient Expectations Concerning the Development of Artificial Intelligence (AI) for the Early Diagnosis of Pulmonary Hypertension (PH).” Journal of Responsible Technology 12 (December): 100052. https://doi.org/10.1016/j.jrt.2022.100052.
  • Woolgar, Steve, Else Vogel, David Moats, and Claes-Fredrik Helgesson, eds. 2021. The Imposter as Social Theory: Thinking with Gatecrashers, Cheats and Charlatans. Bristol: Bristol University Press.
  • World Health Organisation. 2021. Ethics and Governance of Artificial Intelligence for Health. WHO Guidance. Geneva: World Health Organisation. https://www.who.int/publications-detail-redirect/9789240029200.
  • Wu, Chenxi, Huiqiong Xu, Dingxi Bai, Xinyu Chen, Jing Gao, and Xiaolian Jiang. 2023. “Public Perceptions on the Application of Artificial Intelligence in Healthcare: A Qualitative Meta-synthesis.” BMJ Open 13 (1): e066322. https://doi.org/10.1136/bmjopen-2022-066322.
  • Young, Albert T., Dominic Amara, Abhishek Bhattacharya, and Maria L. Wei. 2021. “Patient and General Public Attitudes towards Clinical Artificial Intelligence: A Mixed Methods Systematic Review.” The Lancet Digital Health 3 (9): e599–e611. https://doi.org/10.1016/S2589-7500(21)00132-1.
  • Zuiderent-Jerak, Teun. 2015. Situated Intervention: Sociological Experiments in Health Care. Cambridge, MA: The MIT Press.