Research Article

‘We have opened a can of worms’: using collaborative ethnography to advance responsible artificial intelligence innovation

Article: 2331655 | Received 15 Feb 2023, Accepted 13 Mar 2024, Published online: 23 Apr 2024

ABSTRACT

With the recent rapid developments in artificial intelligence (AI), social scientists and computational scientists have approached overlapping questions about ethics, responsibility, and fairness. Joined-up efforts between these disciplines have nonetheless been scarce due to, among other factors, unfavourable institutional arrangements, unclear publication avenues, and sometimes incompatible normative, epistemological and methodological commitments. In this paper, we offer collaborative ethnography as one concrete methodology to address some of these challenges. We report on an interdisciplinary collaboration between science and technology studies scholars and data scientists developing an AI system to detect online misinformation. The study combined description, interpretation, and (self-)critique throughout the design and development of the AI system. We draw three methodological lessons to move from critique to action for interdisciplinary teams pursuing responsible AI innovation: (1) Collective self-critique as a tool to resist techno-centrism and relativism, (2) Moving from strategic vagueness to co-production, and (3) Using co-authorship as a method.

Introduction

One of the enduring cornerstones of the programmes of responsible innovation (RI) and responsible research and innovation (RRI) has been a focus on promoting inclusive knowledge ‘co-production’ through inter/multidisciplinary collaboration,Footnote1 between the natural sciences and engineering on the one hand and the social sciences and humanities (SSH) on the other (Felt et al. Citation2016; Owen and Pansera Citation2019; Pansera et al. Citation2020). To this end, there have been diverse attempts at integrating social and ethical considerations into processes of research and innovation, sometimes encouraged by research funders and more often spearheaded by social scientists (Owen et al. Citation2021). Of note are collaborative approaches such as constructive and real-time technology assessment (Guston and Sarewitz Citation2002; Rip, Misa, and Schot Citation1995; Schot and Rip Citation1997), socio-technical integration (Fisher et al. Citation2015), midstream modulation of technology (Fisher, Mahajan, and Mitcham Citation2006), collaborative experimentation (Balmer et al. Citation2016; Delgado and Åm Citation2018), situated interventions (Katell et al. Citation2020) and ‘Future Labs’ as spaces for anticipatory and reflexive pedagogy (Conley, Tabas, and York Citation2022).

Our study was conducted within a project funded by the UK Engineering and Physical Sciences Research Council (EPSRC) in a research-intensive university in the UK. RI is now a requirement in many EPSRC grant applications and is often established as a crosscutting area of research within its research centres and centres for PhD training (EPSRC Citation2018; Owen Citation2014; Owen et al. Citation2021; Stilgoe, Owen, and Macnaghten Citation2013). As a result of commitments of this sort, social scientists are increasingly invited into spaces led by researchers in engineering and computer science, e.g. as so-called ‘champions of RI’, to conduct training and pedagogy in RI, or to deliver RI-related research within work packages in science-led research projects. These arrangements are often new to many researchers (of all disciplines), and funders like EPSRC take a deliberately flexible and non-prescriptive approach when it comes to how to incorporate RI into everyday academic practices.

While calls and pledges for ‘genuine’ and ‘engaged’ collaborations between different academic communities abound (Lyall et al. Citation2013), a persistent open question is how these aspirations can be realised in practice (Bromham, Dinnage, and Hua Citation2016; Lindvig and Hillersdal Citation2019; Pansera et al. Citation2020; Schuijff and Dijkstra Citation2020). Interdisciplinary ventures can be fraught with challenges; they often demand collaborators get out of their comfort zones and spend time understanding and negotiating diverse and unfamiliar jargon, vocabularies, concepts, methods, styles, and procedures (Felt et al. Citation2016). Collaborators are faced both with the day-to-day frictions between unfamiliar academic traditions and practical difficulties when deciding who does what and how to disseminate research outcomes (e.g. where to publish). All too often, interdisciplinary projects are organised in a siloed and transient fashion, with discipline-specific work packages and deliverables that present limited opportunities for lasting joint collaborations and contributions (Felt et al. Citation2016). Social scientists are also aware of the potential for ontological capture, instrumentalisation, and being cast in various – sometimes gendered – roles (e.g. the ‘trophy wife’) (Balmer et al. Citation2015). Whether and how different disciplinary orientations, ontologies and epistemologies seeking to coalesce can effectively do so and pursue more socially beneficial outcomes remains an empirical question worth pursuing.

We address this question while focusing on contemporary debates surrounding the social and ethical implications of artificial intelligence (AI) and the urgent need to advance responsible innovation in the field. In recent years there has been growing concern amongst different academic communities over troublesome issues of discrimination, bias, oppression, exploitation and other harms associated with the growing prevalence of algorithmic systems and data-driven innovation in society. Harmful and unjust outcomes resulting from the use of algorithmic systems have been evidenced in myriad locations such as welfare distribution, insurance valuation, recruitment, migration and mobility, policing and crime recidivism, and the invisible and often precarious labour that sits behind the technology, e.g. for labelling and cleaning datasets (Birhane Citation2021; Brayne Citation2017; Eubanks Citation2017; Noble Citation2018; Rosenblat and Stark Citation2016; Williams, Miceli, and Gebru Citation2022).

Critical scholars in the social sciences and humanities have approached ethical questions about digital technology from various fronts including, among others, moral philosophy, surveillance studies, science and technology studies and critical data/algorithm studies (see e.g. Domínguez Hernández and Galanos Citation2022; Manokha Citation2018; Miceli and Posada Citation2022; Stark, Greene, and Hoffmann Citation2021; Zuboff Citation2019). This growing body of scholarship has contributed significantly to advancing ethical and moral debate in a fast-paced and largely opaque arena of data-intensive innovation.

At the same time, different strands of computer and data sciences have engaged in a vigorous programme of research and scholarship related to issues of fairness, accountability and transparency in algorithmic systems. Much of this work has focused on mitigating the failures and unwanted outcomes of algorithmic systems, usually taking a pragmatic, solution-oriented approach where the desired properties are engineered or designed into a system (e.g. privacy enhancing technologies) or where failures are flagged to practitioners and decision makers (Selbst et al. Citation2019). Examples of the latter include approaches to debiasing ML models and operationalising ethics and fairness in the design of systems through mathematical formalisms or risk scoresFootnote2 (Binns et al. Citation2017; Feldman et al. Citation2015; Geisslinger et al. Citation2021; Kleinberg et al. Citation2018; Ryan et al. Citation2021).

Driven by ethical concerns shared between the social sciences and humanities and the data and computational sciences, there have been increasing interlinkages and attempts to collaborate in recent years. Reflective of this is the emphasis on socio-technical framings and interdisciplinarity in computer-science-led conferences such as Fairness, Accountability and Transparency (FAccT) and Artificial Intelligence, Ethics, and Society (AIES).Footnote3 While this is a positive sign, some scholars insist that there remains a wide ‘divide’ or ‘gulf’ separating the computational sciences and critical work in the social sciences, notably science and technology studies (STS), critical data/algorithm studies and surveillance studies, among others (Moats and Seaver Citation2019; Sloane and Moss Citation2019).

Very often researchers venturing into these sorts of interdisciplinary collaborations are faced with tensions and the need to negotiate different, and sometimes conflicting, ontological, epistemological, methodological and institutional norms and commitments (Domínguez Hernández and Galanos Citation2022; Owen et al. Citation2021). Not only that, the way in which critique is approached and mobilised within such interdisciplinary spaces is contested and can have significant political, interpersonal and professional implications with which collaborators need to grapple (Griffin, Hamberg, and Lundgren Citation2013; Wiklund Citation2013). While critique and critical reflection have been implied by frameworks for responsible innovation and AI ethics, the practical challenges and consequences of doing critical work in this arena have been under-researched.

We wish to contribute to the special issue’s theme of ‘Critique in, of, and for Responsible Innovation’, by drawing lessons from a project-centred and experimental responsible AI collaboration at the nexus between critical social sciences and computational sciences. To do so, we propose, and reflect on, collaborative ethnography as a methodology to advance productive modes of critique in such interdisciplinary spaces. We begin by discussing some of the tensions arising from interventionist and critical endeavours within STS and allied fields and how these sensibilities can be mobilised to engender responsible innovation in the field of AI. Next, we describe the setting up of the RI collaboration and the institutional conditions within which it took place. We then discuss the rationale behind the choice of collaborative ethnography as a methodology and how this was configured and deployed within the AI project. We end by drawing lessons for interdisciplinary teams on how collaborative ethnography can offer a generative and productive space for critique, self-critique/reflexivity and action.

The destabilising role of critiquing and intervening in innovation

Critical inquiry on technology and innovation – and particularly the field of STS – has had a long-standing concern with productively influencing the sites it enters, be it by mobilising insights about technoscience and its social entanglements to new audiences, advising science policy, or reconfiguring (intendedly or not) the objects and practices being studied. Over the years, STS and adjacent social science scholars have been invited to collaborate more closely in the making of science and technology, often as members of interdisciplinary projects, research consortia, responsible innovation programmes, and socio-technical interventions, with varying degrees of success (Balmer et al. Citation2015; Fisher et al. Citation2015; Jensen Citation2012; Owen et al. Citation2021). It is not rare today to find concepts and methods from STS being productively deployed in allied academic and practice-oriented communities across STEM, design and human–computer interaction (Costanza-Chock Citation2020; Irani et al. Citation2010; Lyle Citation2021; Ratto Citation2011; York Citation2018).

In the last decade, there have been increasing efforts within STS to branch out from pure theorisation, thick descriptions and arm’s-length critique into more action-oriented and politically engaged academic work which seeks to be mutually enriching both to STS researchers and their collaborators. A number of performative, artful and participatory endeavours have emerged under the rubrics of ‘STS in action’, ‘interventionist STS’, ‘STS making and doing’ and other forms of action-oriented research (Downey and Zuiderent-Jerak Citation2016; Jensen Citation2012; Lippert and Mewes Citation2021; Reinsborough Citation2020). Such an overt interventionist impetus is often characterised by methodological inventiveness and a desire to have a more influential role in the fields of study through using or performing STS (Lippert Citation2020).

Experiments and experimental practices have long been a paradigmatic site for studying scientific knowledge production and its controversies, and it is perhaps stemming from this familiarity that experiments have become powerful new ways for researchers to mobilise and use this more action-oriented STS (Law Citation2016; Lezaun, Marres, and Tironi Citation2017). Experimentation with different ways of doing and communicating STS research beyond its own traditional academic audiences has been a recurring approach in this regard (Downey and Zuiderent-Jerak Citation2021; Evans, Leese, and Rychnovská Citation2021). We can see this specifically in the context of data science, where social scientists are invited to be part of interdisciplinary teams and integrated within, rather than alongside, STEM projects. Such embedded working arrangements offer opportunities for more co-productive and deliberative collaboration between data scientists and critical scholars (see e.g. Neff et al. Citation2017).

In his essay ‘In, with and of STS’, Ingmar Lippert (Citation2020) proposes three different modes of using STS: ‘to contribute in STS, to contribute with STS to other fields, or to conduct a study of STS itself’ (4). As we will show, the collaborative encounter we discuss here is more closely attuned to the second use in Lippert’s typology. We sought to test how critical, ethnographic and action-oriented sensibilities can combine to inform practices in data science, with the explicit aim of engendering reflexivity and responsible innovation. While this exercise was not envisioned as a conventional case study aimed at extending existing STS scholarship, it offered an opportunity to flag possible misconceptions and pitfalls of contributing with STS in practice.

Critiquing and intervening in innovation within interdisciplinary projects are not without their challenges and drawbacks. At their core is a tension between destabilisation on the one hand and, on the other, reaching consensus on a constructive way forward in a mutually respectful manner. STEM projects can have clearly defined goals, envisaged outcomes and impacts which the introduction of critique can destabilise. This can create situations of ambivalence, antagonism, disagreement and friction, which could arise from competing values, ontologies, concerns, logics, norms, professional goals, motivations or from conflict between vested interests and socio-ethical considerations. Scholars have recurrently highlighted the ambivalent and sometimes risky nature of taking the role of critics in the context of such interdisciplinary collaborations.

Balancing and reconciling critique with the desire for a consensus-driven, constructive way forward is equally challenging. The positive connotation of this sort of integrative work is sometimes made explicit through stated or intended project or programme outcomes (i.e. ‘ethical AI’, ‘responsible AI’, ‘data for good’) which set the tone from the outset for constructive collaboration (Floridi et al. Citation2018). In our collaboration, as we will go on to describe, this was visible through a stated and visible programmatic commitment to ‘responsible, ethical and inclusive innovation’.Footnote4 Yet while consensus might be desirable, it does not always guarantee morally good outcomes and could in fact conceal the power asymmetries between collaborators (Crooks and Currie Citation2021), a concealment that critique aims to make visible. And indeed, consensus is ‘not the inevitable goal of RI’ (Owen and Pansera Citation2019, 34).

The balancing of critique and ‘being constructive’ can have significant political, (inter)personal and professional implications, with a risk that collaborating researchers find themselves in precarious, alienating, co-opted and ambiguous situations. Social scientists in particular have been self-conscious about either being overly complacent and insufficiently critical, or, at the other extreme, enacting an unproductive ‘ethics of suspicion’ (Balmer et al. Citation2015) whereby social scientists situate themselves on the moral high ground, strategically detached from the practices in question. Research on the effects, affects and politics of collaboration has shed some light on researchers’ experiences of insecurity, anxiety, shame and ambivalence connected with interdisciplinary work (Jönsson and Rådström Citation2013; Wiklund Citation2013). In attending to these issues, agonistic approaches and the confronting of risks and discomforts of collaboration have been proposed as possible avenues for dialogue and generative critique which do not necessarily rely on consensus (Crooks and Currie Citation2021; Hillersdal et al. Citation2020; Moats Citation2021; Smolka, Fisher, and Hausstein Citation2021).

The contextual specifics of any collaboration are also an important consideration. Our study is set in the context of a data science project aimed at developing AI/ML solutions for misinformation detection and management. This is a technical response to the current ‘post-truth’ environment, which creates a particularly interesting dimension to critique. While the destabilising nature of critique has been an abiding issue in STS (particularly in debates around scientific knowledge production) it has witnessed a renewal in the era of post-truth. In this era, as Michael Lynch (Citation2020, 50) puts it, ‘opposition [to science] is often expressed through the rhetoric of science, voiced by credentialed experts who present counter-narratives and “alternative facts”’. In his well-known essay ‘Why has critique run out of steam?’, Bruno Latour (Citation2004) laments that the recent relativism sparked in public discourse can, at least in part, show similarities with the social construction of facts project advanced by STS. He writes: ‘While we spent years trying to detect the real prejudices hidden behind the appearance of objective statements, do we now have to reveal the real objective and incontrovertible facts hidden behind the illusion of prejudices?’ (Citation2004, 227). The social constructivist argument, Latour asserted, has been misused and has of late been wielded by conspiracy theorists, anti-vaxxers and climate change deniers. Latour suggests critique should shift its analytical lens from matters of fact to matters of concern; a turn to a new realism that aims to redress the potential unintended misconception of social constructivist critiques.Footnote5

This discussion is not only prescient of ongoing public discourse and science legitimacy crises but directly speaks to the particular focus of our RI collaboration, which aimed to use AI/ML to distinguish facts from misinformation. In our collaboration with data scientists, we sought to contend with and overcome this ambivalence. We explored how the critique of the algorithmic construction of facts could be mobilised productively to help explore and address an urgent and significant matter of concern – in this case misinformation – while avoiding unproductive relativism and falling into the trap of endless debate.

Interrogating the algorithmic construction of facts through collaborative ethnography

In this section, we draw on our experience of initiating an RI-inflected study that integrates social studies of science and qualitative methods within a data science project developing a multimodal,Footnote6 ML-enabled tool to assist humans in the moderation of misinformation (for example so-called ‘fake news’) on social media. We discuss the project findings in full elsewhere (Domínguez Hernández et al. Citation2023); here we focus on methodological observations. The study took place under the auspices of a National Research Centre with an interdisciplinary research agenda broadly defined around ‘how to keep people safe online while allowing them to fully participate in digital technologies’.Footnote7

The authors led a cross-cutting ‘responsible and ethical innovation pillar’ for the Centre. The idea for the collaboration grew out of a series of meetings with principal investigators of several funded projects within the Centre which were identified as having an innovation or technology design and development component. After several exploratory discussions, we, along with our two collaborators from the data science project (hereafter the research team), identified a shared matter of concern: how to conceptualise misinformation and minimise its harm. This shared interest presented us with an opportunity to explore the potential to work with STS to attempt to co-develop responsible practices in ML and address the social issue.

We incorporated two linked STS sensibilities into the data science project: firstly, a social constructivist understanding of knowledge production and ‘truth’ and, secondly, a situated qualitative approach to opening black-boxed technical artefacts and practices. The study was designed as a ‘collaborative ethnography’, whereby interpretation, description, critique and self-critique were deployed collectively to evaluate the development of data-driven ML models. This approach to collaboration is not new. It has been used in interdisciplinary projects as a tool to support collaborators to mobilise joint agendas. Bieler et al. (Citation2021, 81) propose that collaborative ethnography ‘contributes to the assemblage of reflexivity as a practice that is distributed across a set of places, people, and encounters’.

In our study, we apply collaborative ethnography in an experimental fashion, with the explicit aim of defining RI in situ as a form of praxis and phronesis (Owen and Pansera Citation2019) rather than importing RI as a pre-given normative framework. The UK research funding council that funded the Centre advocates an approach to responsible innovation based on the so-called ‘AREA’Footnote8 framework, but emphasises that this should remain flexible and non-prescriptive and that ‘different approaches might be required for different research areas’. We experimented with collaborative ethnography as an approach to RI in a setting that presented a site to test how STS methods and concepts can be brought into conversation with those of data science, with the aim of problematising the algorithmic construction of facts and concurrently working out responsible practices in ML and AI.

Our study was necessarily conducted as an open-ended intervention. It was initiated towards the beginning of the technical project – i.e. after this had been funded – and ran in tandem with it, but independently of the deadlines and deliverables of the technical project itself. RI was not built into the technical project by design, but the Centre as a whole had a stated and visible commitment to it. Rather, and supported by this broader mandate, we invited the data scientists to enter into a collaboration to expand the initial scope of their project by asking what the potential adverse outcomes of automated misinformation detection are (or could be), and how we could explore and address these together.

We incorporated regular touch points for discussion and reflection (for more details, see Domínguez Hernández et al. Citation2023). Regular meetings were held over a period of approximately 8 months (the timeframe for developing the ML tool), during which the team discussed the algorithmic construction of misinformation and the ethical, political and epistemic issues arising during the development of ML models and when they are deployed by social media platforms. We asked our data science collaborators to explain their work to us and to be actively involved in collective mind mapping (an example of which is given in Figure 1) and writing. This constituted a purposeful strategy that we employed which acknowledges the significance of inscription within academic knowledge production and throughout the ethnographic endeavour (Bieler et al. Citation2021; Lassiter Citation2005; Latour and Woolgar Citation1986).

Figure 1. An early mind map of algorithmic contingencies.

A key initial objective for us as social scientists was to develop a rich and shared understanding of the technical process of constructing the machine learning model, in order to collectively interrogate the ontological and epistemological assumptions associated with this. We began by querying how the data scientists defined what counts as misinformation and what counts as truthful information for them. These definitions emerged as important framing constructs for the building of the ‘misinformation classifier model’ and the curation of its referential training and testing dataset – a process we described, following Jaton (Citation2021), as ‘ground truthing’. In sum, this allowed us to interpret the data scientists’ strategy as being the application of machine learning techniques to automate and assist the practice of fact-checking.Footnote9

This set the context for subsequent discussions in which we collectively problematised the decision to take at face value the verdicts (i.e. the facticity) of claims circulating on social media made by fact-checking organisations. These verdicts can have normative implications, i.e. they can inform decisions about tagging, downranking or censoring claims identified by them as being misinformation. Drawing insights from feminist epistemology and social studies of science we underlined that facts and knowledge claims are context-dependent, partial, contingent on power relations, and socially constructed and therefore that claims to neutrality and objectivity in knowledge production and facticity need to be taken cautiously (Haraway Citation2013; Harding Citation1995; Latour and Woolgar Citation1986). Similarly, in more recent debates on the use of big data in scientific research, critical studies have cautioned against approaches that view data as a reflection of ‘things out there’, suggesting instead that data are always contextual and relational (boyd and Crawford Citation2012; Leonelli Citation2015).

The data scientists admitted that one of the problems they had with the ground truth dataset which they used was that fact-checkers do not always agree on their assessment of the same claims and that this introduces uncertainties in terms of labelling and classifying inputs within the ML model. We asked the data scientists to spell out how these ambiguities translated into the design of the classification algorithm and how they impacted the model’s metrics of performance. They enumerated the hypotheses, assumptions and previous studies that they used in order to determine the presence of misinformation, and which help to classify pieces of content as such using ML. Some of these assumptions included: ‘misinformation spreads faster on social media than factual claims’; ‘users who discuss misinformation on social media are different from those discussing factual information judging by their followers’; ‘the images used when discussing misinformation are different to those used when discussing factual information’.
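
To make the notion of ‘ground truthing’ concrete, the following is a minimal, purely illustrative Python sketch of our own (not the project’s code, which is available on GitHub; Nielsen and McConville Citation2022) of how fact-checker verdicts might be collapsed into binary labels for a training dataset, and how disagreement between fact-checkers surfaces as label uncertainty. The verdict vocabulary, claims and helper names are hypothetical.

    from collections import Counter

    # Hypothetical mapping from fact-checker verdicts to binary labels
    # (1 = misinformation, 0 = factual); real verdict scales are far more nuanced.
    VERDICT_TO_LABEL = {"false": 1, "misleading": 1, "true": 0, "mostly true": 0}

    def ground_truth_label(verdicts):
        """Collapse one or more fact-checker verdicts on a claim into a single
        label by majority vote, returning the level of agreement alongside it."""
        labels = [VERDICT_TO_LABEL[v] for v in verdicts if v in VERDICT_TO_LABEL]
        if not labels:
            return None, 0.0                    # claim cannot be labelled
        majority, count = Counter(labels).most_common(1)[0]
        return majority, count / len(labels)    # 1.0 = unanimous, 0.5 = split

    # Invented claims, each reviewed by several (hypothetical) fact-checkers.
    claims = {
        "claim_A": ["false", "false", "misleading"],
        "claim_B": ["true", "misleading"],      # fact-checkers disagree
    }
    for claim, verdicts in claims.items():
        label, agreement = ground_truth_label(verdicts)
        print(claim, "label:", label, "agreement:", round(agreement, 2))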

Our collaborators explained to us the importance of metrics of performance – accuracy in particular – and how these were employed to help evaluate incremental progress within the ML subfield of misinformation detection. These metrics serve as benchmarks against which new models are assessed and are usually reported in academic publications independently of models being put into production. The data scientists remarked that there is currently no systematic or standardised approach to reporting progress in the misinformation detection subfield, and that different techniques, genres and topical domains of misinformation are commonly evaluated against the same benchmark. For example, the performance of a model designed to detect medical misinformation could be compared against that of a model detecting fake news during elections. They used the analogy of a ‘wild west’ to illustrate the lack of consensus around evaluation benchmarks in the misinformation research community.
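
As a hedged illustration of the internal benchmarking described above (not the evaluation code actually used in the project), the snippet below computes accuracy and F1 for a hypothetical classifier against a held-out test set using scikit-learn; such headline figures position a model relative to prior work in the subfield but say little about how it would behave once deployed by a platform.

    from sklearn.metrics import accuracy_score, f1_score

    # Invented held-out test labels (1 = misinformation) and model predictions.
    y_true = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]

    # Headline benchmark figures of the kind reported in technical papers.
    print("accuracy:", accuracy_score(y_true, y_pred))  # 0.8
    print("F1:", f1_score(y_true, y_pred))              # 0.75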

We interpreted such use of performance metrics as a form of internal validation within the subfield rather than as expressions of algorithms’ actual utility in the world. At the same time, we argued that these metrics can be influential and performative. This is the case when they are invoked by actors such as social media platforms to vouch for the validity of an algorithm and normatively inform the need for human intervention and what decisions can be triggered by that algorithm.

Stemming from these deliberations, the research team co-produced a combination of interpretative and descriptive texts, mind maps and schematics of the steps involved in the construction of an ML classification model. We found this process of visualisation and inscription to be particularly helpful in promoting both (technical) understanding and reflection (Figure 2). As our collective discussions continued, the team were able to build on this to surface a series of what we termed ‘algorithmic contingencies’ and cautions. These include, for example, that false positives could potentially lead to unfair censorship of content or have the backfiring effect of reinforcing entrenched beliefs; and in turn, that false negatives could allow misinformation to spread further, which is aggravated when there is an overreliance on algorithmic moderation (see Domínguez Hernández et al. Citation2023). Algorithmic contingencies served as signposts to moments in the development of the model where different choices, assumptions or random events could lead to different model outcomes, including negative ones.

Figure 2. Schematic of the process of an ML misinformation detection model. Reproduced from (Domínguez Hernández et al. Citation2023).
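
One way to make these contingencies tangible is a back-of-the-envelope calculation of our own (the figures below are invented for illustration, not drawn from the project): even a classifier with apparently strong detection rates produces large absolute numbers of false positives and false negatives once applied at platform scale and realistic base rates, which is where the concerns about unfair censorship and continued spread become pressing.

    # Invented, illustrative figures: 10 million posts moderated per day,
    # of which 2% actually contain misinformation (the base rate).
    posts_per_day = 10_000_000
    base_rate = 0.02
    recall = 0.90               # assumed share of misinformation correctly flagged
    false_positive_rate = 0.05  # assumed share of factual posts wrongly flagged

    misinfo_posts = posts_per_day * base_rate
    factual_posts = posts_per_day - misinfo_posts

    false_negatives = misinfo_posts * (1 - recall)          # misinformation left to spread
    false_positives = factual_posts * false_positive_rate   # factual posts wrongly flagged

    print(f"missed misinformation per day: {false_negatives:,.0f}")          # 20,000
    print(f"factual posts wrongly flagged per day: {false_positives:,.0f}")  # 490,000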

Mobilising the second-order reflexivity dimension of RI (Owen, Bessant, and Heintz Citation2013), we also situated content moderation algorithms in the contemporary political economy of data. We discussed the potential conflicts of social media companies between the need to address misinformation (and other problematic content online) and profiting from its wide circulation and consumption. We argued that the role of misinformation detection algorithms cannot be considered in isolation from the targeted advertising business model of social media companies (Zuboff Citation2019) given that these algorithmic tools play a key role in fulfilling the moral and legal responsibilities of such companies, and their incentives to minimise human involvement in content moderation (CDEI Citation2021).

The overarching critical assessment that resulted from the collaborative ethnographic process was that ML misinformation classifiers and the functional claims made about them have significant ethical, political and epistemic implications which are not commonly discussed in technical papers. The social impact of these ML models is contingent on a series of underlying (epistemic) assumptions, definitions, technical constraints, performance evaluations, and data curation choices made by developers, and, indirectly, also on the vested interests of implicated information gatekeepers. These in turn allowed us to develop a set of collective recommendations aimed at responsible innovation within the subfield, e.g. relating to transparency and reporting. This was enacted in the project by making the data collection system available on GitHub (Nielsen and McConville Citation2022) ‘so others can see and execute the exact code used to build the dataset, and thus all decisions that were made’ (Domínguez Hernández et al. Citation2023, 18).

Dealing with the AI can of worms: moving from critique to action

Opening the ‘black box’ of technical systems has been an emblematic metaphor for ethnographic explorations of laboratories and highly specialised technical practices. However, considerably less attention has been given to addressing/resolving the dilemmas or new problematics revealed by such acts of opening, interpreting, and translating, and relatedly, to exploring whether and how critical inquiry could at the same time be generative. In the following, we discuss how the research team collectively confronted the critiques raised in the first stages of the intervention that are discussed above. We then offer three methodological and practical lessons for interdisciplinary teams to move from critique to action while collectively pursuing responsible innovation. These can be viewed as non-prescriptive strategies to navigate moments of friction, conflict, and some of the dilemmas arising from doing critical work.

Collective self-critique as a tool to resist techno-centrism and relativism

Although the data scientists recognised the ethical and social issues surfaced by the study, they deemed them to be outside the remit of the data-science project as initially formulated. Because that project was technical, solution-oriented and bound to a relatively short timeframe, our inquiry necessarily prompted an extension of scope and concern. This is not to say that our interlocutors evaded the limitations of their methods. In fact, the opposite was the case: they were both predisposed and open to reflecting on these. However, they were working within a rationale that the merit of ML detection models could be evaluated in terms of accuracy and performance and that better (i.e. more balanced) and larger ground truth data could help to offset the tool’s limitations in the future. By contrast, as social scientists, we argued that these issues could not be addressed merely through larger datasets and that the functional claims about the model’s practical utility should be nuanced and transparent in terms of their assumptions, limitations, and risks. We were able to collectively explore how to reconcile these epistemological differences, in part due to the mandate (and resources) provided for a programme of responsible innovation within the Centre, but largely because of the openness and willingness of the team to collaborate: this collaboration was viewed by all as a clear and beneficial opportunity.

In response to the critiques laid out earlier, one of our collaborators remarked: ‘We have opened a can of worms, what do we do about this?’. This statement offers a perhaps more apt metaphor than the classic ‘opening the black box’ and is an earnest reminder of the need for making visible and interrogating not only the practices of data scientists, but those of STS inquiry itself and its aim to ‘contribute with STS to other fields’. As Lynch (Citation2000, 26–27) cautions, the practice of reflexivity risks being used as ‘a source of superior insight’ held by social scientists in relation to their ‘unreflective’ counterparts. Compared to the conventional setting of the ethnographer and informant, collaborative ethnography, we suggest, can pave a path for all participants to be reflective of their own epistemological stances, prejudices and potential biases in ways that are both critical and productive, rather than antagonising. As such, the exercise can be set up towards working out co-responsibilities relating to whether and how to act upon any critical findings.

For example, as part of our ongoing discussions, we asked within the research team: what are the implications of flagging the shortcomings of classification algorithms? To what extent is opening the ‘can of worms’ of ML and calling out its limitations a generative (enough) process? How do we avoid critique becoming either unproductive relativism or instrumentalised as mere ethics washing associated with techno-centric solutions? Beyond listing the limitations of algorithms, can critical researchers take some degree of co-responsibility for the solutions being proposed to tackle the social problem? Can we, collectively, contribute pragmatically to reducing the harm of misinformation as an overarching ‘matter of concern’?

In this regard, the tensions raised during the collaboration were far from being fully resolved, and we argue did not need to be. However, in collectively working through these questions and negotiating the next steps, we were able to look for mutually enriching modes of integrating the pragmatic objectivism of data science and the reflexive approach of STS. While we acknowledged the potential benefits of the technology to tackle a social issue – harmful misinformation – we also pressed that to minimise potential negative effects of content classification algorithms, developers need to be reflexive and critical about their choices of training data, the assumptions behind these choices and the potential failures of their creations.

We agreed on a tentative way forward by suggesting practical actions aimed at expanding the existing scope of algorithm evaluation and working towards meaningful transparency about the limitations of the ML tool. For instance, we concurred that the publication of ML classification models should be accompanied by transparency reports containing not only the rationale behind data curation and problem definition, but also statements about potential limitations stemming from researchers’ values, epistemic assumptions, political stances and institutional commitments. We proposed that an interdisciplinary research team would be better equipped to decide what specifically to include in such transparency reports.
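
As an indicative and deliberately schematic example of what such a transparency report might record, the structure below lists fields an interdisciplinary team could agree to publish alongside a classifier; the field names are our own suggestions rather than an established standard or the template adopted in the project.

    # Schematic transparency report to accompany a published ML classifier.
    # Field names are illustrative suggestions, not an established standard.
    transparency_report = {
        "problem_definition": "Binary classification of social media claims as misinformation or not",
        "ground_truth_sources": ["Aggregated fact-checker verdicts, with disagreements noted"],
        "data_curation_choices": ["Inclusion/exclusion criteria", "Label collapsing rules", "Sampling period"],
        "epistemic_assumptions": ["Fact-checker verdicts treated as provisional, contestable labels"],
        "known_limitations": ["Domain shift across misinformation topics", "Risk of suppressing legitimate speech"],
        "intended_uses": ["Decision support for human moderators"],
        "out_of_scope_uses": ["Fully automated content removal"],
        "positionality_and_interests": ["Researchers' values, institutional commitments and funding"],
    }

    for field, entries in transparency_report.items():
        print(f"{field}: {entries}")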

Ultimately, allocating co-responsibilities is crucial, and it does not mean collaborating researchers must address all the social and ethical issues being surfaced. Instead, engaging in collective acts of reflexivity offers a concrete way to situate the technical solution in its wider context, in this case in relation to the political economy of data, fact-checking, social media platforms and surveillance capitalism.

From strategic vagueness to co-production

At the start of the study, all researchers expressed a lack of familiarity with the others’ discipline-specific methods, language and practices, and it was unclear what the collaboration would entail. In our initial discussions, we focused on addressing questions and common matters of concern for both fields and experimenting with how to better address these, instead of settling on specific methods and goals. Despite such a lack of preliminary definitions, the data scientists welcomed our invitation and agreed to commit time to the study.

Questions about who would lead this effort and how it would be disseminated were raised early. We agreed that the social scientists, who proposed the collaboration and identified the gaps in the data science project in the first place, would be better placed to lead the collaboration. This agreement meant that the social scientists were responsible for the research design, organising data collection and analysis, facilitating discussions, and steering the overall direction of the study. In turn, the data scientists took the role of reflexive ML experts, consulting on technical methods, practices, and the state of the art in their field, but also providing feedback to help enrich the ethnographic exploration and taking an active part in scoping and formulating recommendations through joint deliberation.

This open-ended, experimental arrangement offered a flexible approach to collaboration, but it also posed challenges in terms of deadlines and impact. In our role as instigators of the study, we sought not only to highlight what was problematic about automation but to explore realistic opportunities for intervention and publication of results, and to contribute meaningfully to addressing the matter of concern we had collectively identified. As such, we often probed our collaborators for tangible ways in which the study could have an impact, be that in the form of reformulated practices and assumptions, or an explicit recognition of AI’s limitations and potential harms in technical papers.

Our collaboration allowed for the elucidation of key contingencies and recommendations for addressing these. While some of the key contingencies identified through the inquiry were well recognised beforehand and addressed by the data scientists, it was not clear how and when these concerns would be integrated into academic articles and datasets. Indeed, the initial outputs of the technical project were published following its original expectations and timeframe and as such did not include an explicit reference to the RI intervention or insights from it, which were published later. Mismatched temporalities between the time-constrained technical project and the more open-ended RI intervention were an issue in this regard and remain, we suggest, an organisational issue for future interdisciplinary ventures of this kind. Given the technical project’s own deadlines and pre-defined publication goals, some of the findings and recommendations from the study, which were produced at later stages, could only be acknowledged retrospectively.

Ethnography is indeed known to be a slow and evolving methodology, reliant on co-location and continued commitment to the field, with some of the most emblematic studies spanning several years and even decades (Beaulieu Citation2010; Palmer, Pocock, and Burton Citation2018). Our approach to collaborative ethnography was helpful in reducing the time needed for data collection and analysis, and it also provided for all members of the research team a generative space for reflection, dialogue and mutual learning, albeit transient (Felt et al. Citation2016). Yet as with any exploratory and time-constrained project, the outputs of intervention were necessarily limited to partial and tentative recommendations. We laid out an inventory of remaining research gaps for the field of content moderation as well as pointers for further research and debate on how to deal with misinformation and harmful content.

Translation and inscription: using co-authoring as method

From the outset, co-authoring an academic publication served as a good motivation to initiate and maintain the collaboration, despite the uncertainty with regard to specific outcomes (cf. Bieler et al. Citation2021). In practice, the process involved iterative cycles of visualisation, reflection and co-writing, where interpretative texts produced by the social scientists were routinely checked, refined and expanded by the data scientists. Most of the learning from the ethnographic exploration of the practices within the data science project was documented in provisional interpretative fieldnotes, diagrams and analyses which were iteratively exchanged with the data scientists for their input.

In the process of building a fair representation of the ML algorithm and a diagnosis of ethical issues, data scientists reviewed the proposed argument, made corrections where needed, and incorporated detailed and technical explanations of the process. This was an important phase of building technical understanding and clarification for the social scientists. The resulting text was a mix of discipline-specific styles and vocabularies; interpretative as well as descriptive, technical text.

Interim drafts of the co-authored text described the technical justification for the soundness of an AI/ML technique, such as accuracy and improved performance over previous comparable models in the data science literature, or bigger and more diverse datasets compared to existing ones. In further editing cycles, we attuned the text to a more (auto-)ethnographic style to combine the contributions of social scientists and data scientists in a single coherent piece. This entailed a process of finding common descriptors, concepts and terminologies – translating and inscribing technical processes and objects aimed at a broader, less specialised and interdisciplinary audience.

Inevitably, this type of interdisciplinary co-authorship raises ethical and political considerations. As the lead authors who were simultaneously writing and suggesting ways of framing the argument with the conceptual tools of critical social science, we were conscious of our own positionality. In encouraging collaborators to engage with and stand by the critique, there is a risk of imposing a narrative, speaking on their behalf, and even co-opting or alienating collaborators in the process. To address this issue, we allowed sufficient time and opportunities for all collaborators to table disagreements, concerns or potential misrepresentations in working versions of the co-authored text. This process was particularly crucial when delineating a plan of action and a joint agenda of research or surfacing wider socio-political issues.

Lastly, we were faced with the question of finding an appropriate venue to publish our collaborative work. The challenges of peer-reviewing and publishing interdisciplinary research are well-known and documented in the literature, and are far from settled at the social-computational sciences nexus (McLeish and Strang Citation2016). Dedicated venues for interdisciplinary work are limited, and while venues like FAccT and AIES have shown openness to social science methods, concerns remain over the deficit in peer reviewers’ experience with methods and perspectives which are outside the dominant STEM fields (Laufer et al. Citation2022). Publishing is also crucially shaped by the pressures of funding bodies, academic impact and career development plans. For early career researchers in particular, decisions as to the structure of papers, shared authorship, where and when to publish, and how to communicate research to intended audiences can have significant career and interpersonal implications which necessarily inflect the writing process and outcomes (Kaltenbrunner et al. Citation2022; Nästesjö Citation2021). In our case, given that our research design used social scientific theory and qualitative methods, we favoured interdisciplinary venues which lean towards the social sciences but target a wide multidisciplinary readership. The co-authored paper was ultimately presented at an interdisciplinary conference (FAccT) and published in a social science journal – the JRI (Domínguez Hernández et al. Citation2023).

Conclusion

Critical scholars have examined the social, political and epistemic entanglements and implications of AI, big data and algorithmic systems emphasising the need for more scrutiny, regulation and transparency (Gillespie Citation2014; Gorwa, Binns, and Katzenbach Citation2020; Kitchin Citation2014; Martin Citation2015). While this scholarship has contributed significantly to the debate on the need for algorithmic governance, questions around the utility of such critiques for practitioners concerned with responsible innovation remain largely unaddressed. The calls for operationalising AI ethics and responsible innovation have certainly permeated technology companies where significant efforts have been directed at establishing ethics committees, oversight boards, responsible AI programmes, and ‘fair’ technology research and development. Yet academics remain sceptical about the widespread solution-oriented view among technology companies and practitioners to tackle the ethical problems with data-driven technology (Bietti Citation2021; Hu Citation2021). Such a trust deficit risks devaluing responsible innovation as being merely an aspirational endeavour and demands that critical social science engage more closely with the technical practices, assumptions and incentives that undergird the construction of algorithmic systems. Very often approaches to AI ethics are viewed as either ‘toothless’ or deployed from a presumed privileged vantage point relative to that of practitioners, while offering limited or no alternatives (Rességuier and Rodrigues Citation2020). It is therefore becoming increasingly urgent to explore how critical studies of AI and other data-driven technologies can avoid being trapped between antagonising those involved in design and becoming instrumentalised as tick box or ‘ethics washing’ exercises.

Without offering to settle this dilemma, here we wished to approach it experimentally and collaboratively. We deployed STS insights and methods in an ML project to address a shared ‘matter of concern’: the harm that can be caused by misinformation and associated with this, issues of facticity and its relations with algorithmic classification of misinformation. Our intervention was an attempt at both critiquing and constructively influencing the development of an ML tool with the aim of minimising harm while attending to the social problem at hand. We were driven by an explicit commitment to help problematise, improve, adapt or reimagine AI systems that could have significant impacts on society whilst also considering our responsibilities, as social scientists, of doing such critical work. For instance, we were conscious of the risk that the intervention could be viewed as ethics policing or an ad hoc ethics review. This was no less risky when the concrete benefits of the collaboration were neither certain nor guaranteed from the outset. That said, the experimental setting embraces the uncertain and evolving nature of interdisciplinary collaboration and thereby lends itself to adjustment as needed (Greiffenhagen, Mair, and Sharrock Citation2015) and – in our case – an ‘in situ’ empirical tinkering with responsible innovation.

Pre-requisites for such collaboration we suggest are open-mindedness and mutual respect across the research team (for each other’s knowledge and expertise). Despite the uncertainties relating to the collaboration, we entered the challenge with the knowledge that our collaborators were not only open to interdisciplinary work but saw its potential benefits as a mutual learning venture.

We draw three practical and methodological lessons which we hope can be of use for interdisciplinary teams concerned with embedding responsible practices in AI/ML. First, in line with previous experiences with collaborative ethnography, we emphasise the importance of collective self-critique – what Bieler et al. (Citation2021) term ‘distributing reflexivity’ – as a key step in moving from critique to action within responsible innovation pursuits. A way to support this would be for ML researchers to engage with those with knowledge and skills in reflective practice such as social scientists and STS scholars. This goes hand in hand with our second lesson on moving from a strategic vagueness during the initial stages of collaboration, through a phase of understanding (e.g. of the technical elements of the project) and reflection (on its contingencies) to a negotiation of pragmatic synthesis and tangible outcomes, even if these materialise as provisional findings and pointers to further joint projects. Finally, when involving collaborators in the process of inscription and in the co-production of the research outcomes, collaborative ethnography offers an opportunity for all participants to jointly craft, refine and ultimately own the critiques and calls to action. We found that evaluating the success of such interactive writing projects should focus not only on the final product, but on the possibilities for collaborators to learn from one another as part of the inscription process, recognising the power dynamics at play, and finding acceptable terms for collaboration, even in the face of disagreement.

Additional information

Funding

This work was supported by the UK Research and Innovation [grant number EP/V011189/1].

Notes

1 In the rest of the paper, we use ‘interdisciplinary’ as a broad concept, noting the number of related concepts that exist (e.g. ‘multi’, ‘cross’ and ‘trans’ disciplinary). Rather than examining these existing definitions, here we wish to explore empirically some of the features commonly attributed to them: integrating, linking, interacting, transcending, transgressing and transforming the boundaries between and beyond disciplines. For further reading on the concepts of inter- and transdisciplinarity, see Alvargonzález (Citation2011).

2 In the case of autonomous vehicles for instance there has been a strong trend toward applying the trolley problem to determine legal liability and operationalise ethics in AI (see Wu Citation2020).

3 For example, as stated in the 2023 instance of AIES: ‘International organizations, governments, universities, corporations, and philanthropists have recognized this need to embark on an interdisciplinary investigation to help chart a course through the new territory enabled by AI. Earlier iterations of this conference and others have seen the first fruits of these calls to action, as programs for research have been set out in many fields relevant to AI, Ethics, and Society’ (AIES Citation2023, para. 1).

5 We note that we do not claim there is a direct causal link between the current ‘post-truth’ phenomenon and the early practice of methodological symmetry within the field of STS (see Lynch Citation2020). We thank one of the anonymous reviewers for flagging this.

6 i.e. based on text and images.

8 AREA stands for ‘Anticipate, Reflect, Engage and Act’ (www.ukri.org/who-we-are/epsrc/our-policies-and-standards/framework-for-responsible-innovation/).

9 To be sure, the proposed ML model used annotated, multimodal data (i.e. texts and images) obtained from Google’s Fact Check Explorer – which aggregates claims that have been fact-checked by hundreds of news organisations around the world – as well as from Twitter, which provides access to data about user engagements with news.

References

  • AIES. 2023. “Call for Papers.” https://www.aies-conference.com/2023/call-for-papers/.
  • Alvargonzález, D. 2011. “Multidisciplinarity, Interdisciplinarity, Transdisciplinarity, and the Sciences.” International Studies in the Philosophy of Science 25 (4): 387–403. https://doi.org/10.1080/02698595.2011.623366.
  • Balmer, A. S., J. Calvert, C. Marris, S. Molyneux-Hodgson, E. Frow, M. Kearnes, K. Bulpin, P. Schyfter, A. MacKenzie, and P. Martin. 2015. “Taking Roles in Interdisciplinary Collaborations: Reflections on Working in Post-ELSI Spaces in the UK Synthetic Biology Community.” Science & Technology Studies 28 (3): 3–25. https://doi.org/10.23987/sts.55340.
  • Balmer, A. S., J. Calvert, C. Marris, S. Molyneux-Hodgson, E. Frow, M. Kearnes, K. Bulpin, P. Schyfter, A. Mackenzie, and P. Martin. 2016. “Five Rules of Thumb for Post-ELSI Interdisciplinary Collaborations.” Journal of Responsible Innovation 3 (1): 73–80. https://doi.org/10.1080/23299460.2016.1177867.
  • Beaulieu, A. 2010. “From Co-Location to Co-Presence: Shifts in the Use of Ethnography for the Study of Knowledge.” Social Studies of Science 40 (3): 453–470. https://doi.org/10.1177/0306312709359219.
  • Bieler, P., M. D. Bister, J. Hauer, M. Klausner, J. Niewöhner, C. Schmid, and S. von Peter. 2021. “Distributing Reflexivity Through Co-Laborative Ethnography.” Journal of Contemporary Ethnography 50 (1): 77–98. https://doi.org/10.1177/0891241620968271.
  • Bietti, E. 2021. “From Ethics Washing to Ethics Bashing: A Moral Philosophy View on Tech Ethics.” Journal of Social Computing 2 (3): 266–283. https://doi.org/10.23919/JSC.2021.0031.
  • Binns, R., M. Veale, M. Van Kleek, and N. Shadbolt. 2017. “Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation.” In Social Informatics, edited by G. L. Ciampaglia, A. Mashhadi, and T. Yasseri, 405–415. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-67256-4_32.
  • Birhane, A. 2021. “Algorithmic Injustice: A Relational Ethics Approach.” Patterns 2 (2): 100205. https://doi.org/10.1016/j.patter.2021.100205.
  • boyd, danah, and K. Crawford. 2012. “Critical Questions for Big Data.” Information, Communication & Society 15 (5): 662–679. https://doi.org/10.1080/1369118X.2012.678878.
  • Brayne, S. 2017. “Big Data Surveillance: The Case of Policing.” American Sociological Review 82 (5): 977–1008. https://doi.org/10.1177/0003122417725865.
  • Bromham, L., R. Dinnage, and X. Hua. 2016. “Interdisciplinary Research Has Consistently Lower Funding Success.” Nature 534 (7609): 684–687. https://doi.org/10.1038/nature18315.
  • CDEI. 2021. “The Role of AI in Addressing Misinformation on Social Media Platforms.” Centre for Data Ethics and Innovation. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1008700/Misinformation_forum_write_up__August_2021__-_web_accessible.pdf.
  • Conley, S. N., B. Tabas, and E. York. 2022. “Futures Labs: A Space for Pedagogies of Responsible Innovation.” Journal of Responsible Innovation 10 (1): 1–20. https://doi.org/10.1080/23299460.2022.2129179.
  • Costanza-Chock, S. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge, MA: The MIT Press. https://library.oapen.org/handle/20.500.12657/43542.
  • Crooks, R., and M. Currie. 2021. “Numbers Will Not Save Us: Agonistic Data Practices.” The Information Society 37 (4): 201–213. https://doi.org/10.1080/01972243.2021.1920081.
  • Delgado, A., and H. Åm. 2018. “Experiments in Interdisciplinarity: Responsible Research and Innovation and the Public Good.” PLoS Biology 16 (3): e2003921. https://doi.org/10.1371/journal.pbio.2003921.
  • Domínguez Hernández, A., and V. Galanos. 2022. “A Toolkit of Dilemmas: Beyond Debiasing and Fairness Formulas for Responsible AI/ML.” IEEE International Symposium on Technology and Society 2022 (ISTAS22). https://doi.org/10.1109/ISTAS55053.2022.10227133.
  • Domínguez Hernández, A., R. Owen, D. S. Nielsen, and R. McConville. 2023. “Ethical, Political and Epistemic Implications of Machine Learning (Mis)Information Classification: Insights from an Interdisciplinary Collaboration Between Social and Data Scientists.” Journal of Responsible Innovation 10 (1): 2222514. https://doi.org/10.1080/23299460.2023.2222514.
  • Downey, G. L., and T. Zuiderent-Jerak. 2016. “Making and Doing: Engagement and Reflexive Learning in STS.” In The Handbook of Science and Technology Studies, 4th ed., edited by U. Felt, R. Fouché, C. A. Miller, and L. Smith-Doerr. Cambridge, MA: The MIT Press.
  • Downey, G. L., and T. Zuiderent-Jerak, eds. 2021. Making & Doing: Activating STS Through Knowledge Expression and Travel. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/11310.001.0001.
  • EPSRC. 2018. “Framework for Responsible Innovation.” https://www.ukri.org/about-us/epsrc/our-policies-and-standards/framework-for-responsible-innovation/.
  • Eubanks, V. 2017. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. 1st ed. New York, NY: St. Martin's Press.
  • Evans, S. W., M. Leese, and D. Rychnovská. 2021. “Science, Technology, Security: Towards Critical Collaboration.” Social Studies of Science 51 (2): 189–213. https://doi.org/10.1177/0306312720953515.
  • Feldman, M., S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. 2015. “Certifying and Removing Disparate Impact.” In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 259–268. https://doi.org/10.1145/2783258.2783311.
  • Felt, U., J. Igelsböck, A. Schikowitz, and T. Völker. 2016. “Transdisciplinary Sustainability Research in Practice: Between Imaginaries of Collective Experimentation and Entrenched Academic Value Orders.” Science, Technology, & Human Values 41 (4): 732–761. https://doi.org/10.1177/0162243915626989.
  • Fisher, E., R. L. Mahajan, and C. Mitcham. 2006. “Midstream Modulation of Technology: Governance from Within.” Bulletin of Science, Technology & Society 26 (6): 485–496. https://doi.org/10.1177/0270467606295402.
  • Fisher, E., M. O’Rourke, R. Evans, E. B. Kennedy, M. E. Gorman, and T. P. Seager. 2015. “Mapping the Integrative Field: Taking Stock of Socio-Technical Collaborations.” Journal of Responsible Innovation 2 (1): 39–61. https://doi.org/10.1080/23299460.2014.1001671.
  • Floridi, L., J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, et al. 2018. “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.” Minds and Machines 28 (4): 689–707. https://doi.org/10.1007/s11023-018-9482-5.
  • Geisslinger, M., F. Poszler, J. Betz, C. Lütge, and M. Lienkamp. 2021. “Autonomous Driving Ethics: From Trolley Problem to Ethics of Risk.” Philosophy & Technology 34 (4): 1033–1055. https://doi.org/10.1007/s13347-021-00449-4.
  • Gillespie, T. 2014. “The Relevance of Algorithms.” In Media Technologies, edited by T. Gillespie, P. J. Boczkowski, and K. A. Foot, 167–194. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/9780262525374.003.0009.
  • Gorwa, R., R. Binns, and C. Katzenbach. 2020. “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance.” Big Data & Society 7 (1): 205395171989794. https://doi.org/10.1177/2053951719897945.
  • Greiffenhagen, C., M. Mair, and W. Sharrock. 2015. “Methodological Troubles as Problems and Phenomena: Ethnomethodology and the Question of ‘Method’ in the Social Sciences.” The British Journal of Sociology 66 (3): 460–485. https://doi.org/10.1111/1468-4446.12136.
  • Griffin, G., K. Hamberg, and B. Lundgren. 2013. The Social Politics of Research Collaboration. London, UK: Taylor & Francis. http://ebookcentral.proquest.com/lib/bristol/detail.action?docID=1211739.
  • Guston, D. H., and D. Sarewitz. 2002. “Real-Time Technology Assessment.” Technology in Society 24 (1-2): 93–109. https://doi.org/10.1016/S0160-791X(01)00047-1.
  • Haraway, D. 2013. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” In Simians, Cyborgs, and Women. New York: Routledge. https://doi.org/10.4324/9780203873106-18.
  • Harding, S. 1995. “‘Strong Objectivity’: A Response to the New Objectivity Question.” Synthese 104 (3): 331–349. https://doi.org/10.1007/BF01064504.
  • Hillersdal, L., A. P. Jespersen, B. Oxlund, and B. Bruun. 2020. “Affect and Effect in Interdisciplinary Research Collaboration.” Science & Technology Studies 33 (2): 66–82. https://doi.org/10.23987/sts.63305.
  • Hu, L. 2021. “Tech Ethics: Speaking Ethics to Power, or Power Speaking Ethics?” Journal of Social Computing 2 (3): 238–248. https://doi.org/10.23919/JSC.2021.0033.
  • Irani, L., J. Vertesi, P. Dourish, K. Philip, and R. E. Grinter. 2010. “Postcolonial Computing: A Lens on Design and Development.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1311–1320. https://doi.org/10.1145/1753326.1753522.
  • Jaton, F. 2021. “Assessing Biases, Relaxing Moralism: On Ground-Truthing Practices in Machine Learning Design and Application.” Big Data & Society 8 (1): 205395172110135. https://doi.org/10.1177/20539517211013569.
  • Jensen, T. E. 2012. “Intervention by Invitation: New Concerns and New Versions of the User in STS.” Science & Technology Studies 25 (1): 13–36. https://doi.org/10.23987/sts.55279.
  • Jönsson, M., and A. Rådström. 2013. “Experiences of Research Collaboration in ‘Soloist’ Disciplines: On the Importance of Not Knowing and Learning from Affects of Shame, Ambivalence and Insecurity.” In The Emotional Politics of Research Collaboration, edited by G. Griffin and A. Bränström-Öhman, 130–143. New York: Routledge. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-66761.
  • Kaltenbrunner, W., K. Birch, T. van Leeuwen, and M. Amuchastegui. 2022. “Changing Publication Practices and the Typification of the Journal Article in Science and Technology Studies.” Social Studies of Science 52 (5): 758–782. https://doi.org/10.1177/03063127221110623.
  • Katell, M., M. Young, D. Dailey, B. Herman, V. Guetler, A. Tam, C. Bintz, D. Raz, and P. M. Krafft. 2020. “Toward Situated Interventions for Algorithmic Equity: Lessons from the Field.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 45–55. https://doi.org/10.1145/3351095.3372874.
  • Kitchin, R. 2014. “Big Data, New Epistemologies and Paradigm Shifts.” Big Data & Society 1 (1): 205395171452848. https://doi.org/10.1177/2053951714528481.
  • Kleinberg, J., J. Ludwig, S. Mullainathan, and A. Rambachan. 2018. “Algorithmic Fairness.” AEA Papers and Proceedings 108: 22–27. https://doi.org/10.1257/pandp.20181018.
  • Lassiter, L. E. 2005. The Chicago Guide to Collaborative Ethnography. Chicago, IL: University of Chicago Press.
  • Latour, B. 2004. “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30 (2): 225–248. https://doi.org/10.1086/421123.
  • Latour, B., and S. Woolgar. 1986. Laboratory Life: The Social Construction of Scientific Facts. Sage Library of Social Research; Vol. 80. Beverly Hills: Sage Publications.
  • Laufer, B., S. Jain, A. F. Cooper, J. Kleinberg, and H. Heidari. 2022. “Four Years of FAccT: A Reflexive, Mixed-Methods Analysis of Research Contributions, Shortcomings, and Future Prospects.” In 2022 ACM Conference on Fairness, Accountability, and Transparency, 401–426. https://doi.org/10.1145/3531146.3533107.
  • Law, J. 2016. “STS as Method.” In The Handbook of Science and Technology Studies, edited by U. Felt, S. Beck, R. Fouché, C. A. Miller, L. Smith-Doerr, M. Alac, S. Amir, M. Arribas-Ayllon, B. Balmer, and J. Barandiarán, 31–57. Cambridge, MA: MIT Press. http://ebookcentral.proquest.com/lib/ed/detail.action?docID=5052910.
  • Leonelli, S. 2015. “What Counts as Scientific Data? A Relational Framework.” Philosophy of Science 82 (5): 810–821. https://doi.org/10.1086/684083.
  • Lezaun, J., N. Marres, and M. Tironi. 2017. “Experiments in Participation.” In The Handbook of Science and Technology Studies, 4th ed., edited by U. Felt, R. Fouché, C. A. Miller, and L. Smith-Doerr, 195–221. Cambridge, MA: The MIT Press.
  • Lindvig, K., and L. Hillersdal. 2019. “Strategically Unclear? Organising Interdisciplinarity in an Excellence Programme of Interdisciplinary Research in Denmark.” Minerva 57 (1): 23–46. https://doi.org/10.1007/s11024-018-9361-5.
  • Lippert, I. 2020. “In, with and of STS.” In Locating Media/Situierte Medien, edited by A. Wiedmann, K. Wagenknecht, P. Goll, and A. Wagenknecht, 1st ed., Vol. 19, 301–318. Bielefeld: Transcript Verlag. https://doi.org/10.14361/9783839443798-011.
  • Lippert, I., and J. S. Mewes. 2021. “Data, Methods and Writing: Methodographies of STS Ethnographic Collaboration in Practice.” Science & Technology Studies 34 (3): 2–16. https://doi.org/10.23987/sts.110597.
  • Lyall, C., A. Bruce, W. Marsden, and L. Meagher. 2013. “The Role of Funding Agencies in Creating Interdisciplinary Knowledge.” Science and Public Policy 40 (1): 62–71. https://doi.org/10.1093/scipol/scs121.
  • Lyle, K. 2021. “Interventional STS: A Framework for Developing Workable Technologies.” Sociological Research Online 26 (2): 410–426. https://doi.org/10.1177/1360780420915723.
  • Lynch, M. 2000. “Against Reflexivity as an Academic Virtue and Source of Privileged Knowledge.” Theory, Culture & Society 17 (3): 26–54. https://doi.org/10.1177/02632760022051202.
  • Lynch, M. 2020. “We Have Never Been Anti-Science: Reflections on Science Wars and Post-Truth.” Engaging Science, Technology, and Society 6 (1): 49–57. https://doi.org/10.17351/ests2020.309.
  • Manokha, I. 2018. “Surveillance, Panopticism, and Self-Discipline in the Digital Age.” Surveillance & Society 16 (2): 219–237. https://doi.org/10.24908/ss.v16i2.8346.
  • Martin, K. E. 2015. “Ethical Issues in the Big Data Industry.” MIS Quarterly Executive 14 (2): 67–85.
  • McLeish, T., and V. Strang. 2016. “Evaluating Interdisciplinary Research: The Elephant in the Peer-Reviewers’ Room.” Palgrave Communications 2 (1): Article 1. https://doi.org/10.1057/palcomms.2016.55.
  • Miceli, M., and J. Posada. 2022. “The Data-Production Dispositif.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW2): 460:1–460:37. https://doi.org/10.1145/3555561.
  • Moats, D. 2021. “Rethinking the ‘Great Divide’: Approaching Interdisciplinary Collaborations Around Digital Data with Humour and Irony.” Science & Technology Studies 34 (1): 19–42. https://doi.org/10.23987/sts.97321.
  • Moats, D., and N. Seaver. 2019. ““You Social Scientists Love Mind Games”: Experimenting in the ‘Divide’ Between Data Science and Critical Algorithm Studies.” Big Data & Society 6 (1): 205395171983340. https://doi.org/10.1177/2053951719833404.
  • Nästesjö, J. 2021. “Navigating Uncertainty: Early Career Academics and Practices of Appraisal Devices.” Minerva 59 (2): 237–259. https://doi.org/10.1007/s11024-020-09425-2.
  • Neff, G., A. Tanweer, B. Fiore-Gartland, and L. Osburn. 2017. “Critique and Contribute: A Practice-Based Framework for Improving Critical Data Studies and Data Science.” Big Data 5 (2): 85–97. https://doi.org/10.1089/big.2016.0050.
  • Nielsen, D. S., and R. McConville. 2022. “MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset.” In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 3141–3153. https://doi.org/10.1145/3477495.3531744.
  • Noble, S. U. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press. http://ebookcentral.proquest.com/lib/ed/detail.action?docID=4834260.
  • Owen, R. 2014. “The UK Engineering and Physical Sciences Research Council’s Commitment to a Framework for Responsible Innovation.” Journal of Responsible Innovation 1 (1): 113–117. https://doi.org/10.1080/23299460.2014.882065.
  • Owen, R., J. R. Bessant, and M. Heintz. 2013. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Chichester: John Wiley & Sons.
  • Owen, R., and M. Pansera. 2019. “Responsible Innovation and Responsible Research and Innovation.” In Handbook on Science and Public Policy, edited by D. Simon, S. Kuhlmann, J. Stamm, and W. Canzler, 26–48. Cheltenham, UK: Edward Elgar Publishing. https://www.elgaronline.com/edcollchap/edcoll/9781784715939/9781784715939.00010.xml.
  • Owen, R., M. Pansera, P. Macnaghten, and S. Randles. 2021. “Organisational Institutionalisation of Responsible Innovation.” Research Policy 50 (1): 104132. https://doi.org/10.1016/j.respol.2020.104132.
  • Palmer, J., C. Pocock, and L. Burton. 2018. “Waiting, Power and Time in Ethnographic and Community-Based Research.” Qualitative Research 18 (4): 416–432. https://doi.org/10.1177/1468794117728413.
  • Pansera, M., R. Owen, D. Meacham, and V. Kuh. 2020. “Embedding Responsible Innovation Within Synthetic Biology Research and Innovation: Insights from a UK Multi-Disciplinary Research Centre.” Journal of Responsible Innovation 7 (3): 384–409. https://doi.org/10.1080/23299460.2020.1785678.
  • Ratto, M. 2011. “Critical Making: Conceptual and Material Studies in Technology and Social Life.” The Information Society 27 (4): 252–260. https://doi.org/10.1080/01972243.2011.583819.
  • Reinsborough, M. 2020. “Art-Science Collaboration in an EPSRC/BBSRC-Funded Synthetic Biology UK Research Centre.” NanoEthics 14 (1): 93–111. https://doi.org/10.1007/s11569-020-00367-3.
  • Rességuier, A., and R. Rodrigues. 2020. “AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics.” Big Data & Society 7 (2): 205395172094254. https://doi.org/10.1177/2053951720942541.
  • Rip, A., T. J. Misa, and J. Schot. 1995. Managing Technology in Society: The Approach of Constructive Technology Assessment. London: Pinter.
  • Rosenblat, A., and L. Stark. 2016. “Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers.” International Journal of Communication 10: 3758–3785.
  • Ryan, M., J. Antoniou, L. Brooks, T. Jiya, K. Macnish, and B. Stahl. 2021. “Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality.” Science and Engineering Ethics 27 (2): 16. https://doi.org/10.1007/s11948-021-00293-x.
  • Schot, J., and A. Rip. 1997. “The Past and Future of Constructive Technology Assessment.” Technological Forecasting and Social Change 54 (2-3): 251–268. https://doi.org/10.1016/S0040-1625(96)00180-1.
  • Schuijff, M., and A. M. Dijkstra. 2020. “Practices of Responsible Research and Innovation: A Review.” Science and Engineering Ethics 26 (2): 533–574. https://doi.org/10.1007/s11948-019-00167-3.
  • Selbst, A. D., D. boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi. 2019. “Fairness and Abstraction in Sociotechnical Systems.” In Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598.
  • Sloane, M., and E. Moss. 2019. “AI’s Social Sciences Deficit.” Nature Machine Intelligence 1 (8): 330–331. https://doi.org/10.1038/s42256-019-0084-6.
  • Smolka, M., E. Fisher, and A. Hausstein. 2021. “From Affect to Action: Choices in Attending to Disconcertment in Interdisciplinary Collaborations.” Science, Technology, & Human Values 46 (5): 1076–1103. https://doi.org/10.1177/0162243920974088.
  • Stark, L., D. Greene, and A. L. Hoffmann. 2021. “Critical Perspectives on Governance Mechanisms for AI/ML Systems.” In The Cultural Life of Machine Learning, edited by J. Roberge and M. Castelle, 257–280. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-56286-1_9.
  • Stilgoe, J., R. Owen, and P. Macnaghten. 2013. “Developing a Framework for Responsible Innovation.” Research Policy 42 (9): 1568–1580. https://doi.org/10.1016/j.respol.2013.05.008.
  • Wiklund, M. 2013. “At the Interstices of Disciplines: Early Career Researchers and Research Collaborations Across Boundaries.” In The Social Politics of Research Collaboration. New York: Routledge.
  • Williams, A., M. Miceli, and T. Gebru. 2022. “The Exploited Labor Behind Artificial Intelligence.” Noema, October 13. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence.
  • Wu, S. S. 2020. “Autonomous Vehicles, Trolley Problems, and the Law.” Ethics and Information Technology 22 (1): 1–13. https://doi.org/10.1007/s10676-019-09506-1.
  • York, E. 2018. “Doing STS in STEM Spaces: Experiments in Critical Participation.” Engineering Studies 10 (1): 66–84. https://doi.org/10.1080/19378629.2018.1447576.
  • Zuboff, S. 2019. The Age of Surveillance Capitalism: The Fight for the Future at the New Frontier of Power. London: Profile Books.