Symposium: The Ethics of Border Controls in a Digital Age

Data feminism and border ethics: power, invisibility and indeterminacy

Pages 323-334 | Received 15 Sep 2023, Accepted 26 Oct 2023, Published online: 22 Nov 2023

ABSTRACT

Human activities are increasingly regulated by means of technologies. Smart borders regulating human movement are no exception. I argue that the process of digitization – including through AI, Big Data and algorithmic processing – falls short of respecting (fundamental) rights to the extent to which it ignores what I term the problem of indeterminacy. Adopting a data feminist approach, which treats data as the ‘new oil’ – that is, as power – I begin theorizing indeterminacy from the imminent risks of datafication as a new instrument of oppression that perpetuates injustice and widens inequality gaps. I conclude that technologies regulating human activities must stand ethical scrutiny, especially if they can and do result in (human) rights violations. Unlike oil extracted from the ground, data is de facto extracted from people endowed with agency, autonomy, rights and contexts – all of which ought to be respected and protected.

Introduction

Human movement today takes place in the context of an unprecedented level of digitization. The digitization of borders, as of everything else, relies on the use of big data and algorithmic processes. People moving around the world encounter digital borders, also termed ‘smart borders’ or ‘e-borders’ (European Commission 2023A). Smart borders are changing the nature of borders, how they operate and how they affect lives (Harari 2019; Schneider 2015; Shachar 2020). In this paper I am concerned with one way in which smart borders shape the relationship between the parties participating in border crossings. Specifically, I examine one problem exacerbating power differentials that arises in all instances in which data processes influence people’s trajectories: e-subjects are placed in a context of indeterminacy, a context in which it is not clear who does what to or for whom. Ethical inquiries, concerned normatively with who owes what to whom, must be able to grasp fundamental information about who the subjects at stake are and how they relate to one another in the digitized context. I conceptualize how indeterminacy works by means of examples showing, substantively, the problem of new morally troubling power differentials (D'Ignazio and Klein 2020) and, procedurally, the problem of obfuscated ethical inquiry.

While advocating for a new framework from which ethical questions may be raised about technologies deployed to regulate human activities, such as crossing borders, I discuss e-subjects, digitization and power differentials in Section 1. In Section 2, I draw attention to the wider context of digitization and the problem of indeterminacy, and in Section 3, I turn my attention to smart borders governing movement. I conclude by claiming that digitization must be assessed against an ethical framework, one which identifies indeterminacy as the first ethical problem to be solved and from which relevant ethical inquiries about specific technologies arise.

E-subjects, digitization, power differentials

Digitization, a part of the ongoing technological revolution (Harari 2019; Schneider 2015), makes people’s daily lives traceable and accessible to an unprecedented extent. With each click or online use of technology, our data is collected and used by third parties for a variety of purposes: for studying human activities (e.g. by scholars working for universities and research centers, who can scrape data from social media, among other sources, to perform, say, sentiment analysis); for surveillance (e.g. by states, or by private corporations contracted by states to collect data); and for selling (e.g. by private corporations selling services and products based on their very accurate knowledge of consumers’ preferences and needs). The integration of digital technologies into the processes of daily life has resulted in social interactions that continually generate digitally stored and transmitted data. Quantitatively, each online click, such as sending an e-mail or messaging on private or public social media, leaves a trace, or digital footprint, which those with the ability, knowledge and power to harvest and analyze data can put to use. Digital footprints include the websites one visits, the emails one sends and the information one submits online, and they can be used to track a person's online activities and devices. The term ‘big data’ refers to unprecedented levels of relational information collectable from private individuals and networks anywhere in the world; it indicates large, fast and complex types of data and datasets, and the processing ability to manage datasets as large as trillions of records from millions of people. Big data are often defined as ‘datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze’ (Manyika et al. 2011).

Qualitatively, individual online clicks, such as posting a photo on social media, can leave extremely personal information traceable and accessible: a person’s sexual preferences, their precise geo-location at a given hour of the day, the meal they eat at lunch, the movie they watch in the evening and the person they date. In a very real sense, then, we can say that we have become ‘e-subjects’: subjects who make ubiquitous and routine use of digital platforms, leaving digital footprints. The data that we produce is converted into financial gains by the corporations running these digital platforms.

There are signs that some states and regional organizations recognize some of the problems with this rapidly expanding digital environment and the potential for a handful of actors to monetize the most intimate parts of our lives. European legislation regulating digital services – the Digital Services and Digital Markets Acts – is being implemented, aimed at guaranteeing a safe user experience on these digital platforms. The legislation mostly affects American corporations operating within the EU jurisdiction (European Commission 2023B), including through fines if they implement their services without regard to the required treatment of users’ data. Data Feminism, for its part, takes intersectional approaches to data seriously, aptly terming data ‘the new oil’ and proposing that any use of data should begin from the simple notion that data is power. Asking who has power and who does not (D'Ignazio and Klein 2020) starts the conversation, as power differentials structure all social and political relationships and help to maintain or perpetuate significant degrees of inequality. Intersectional feminism examines not only gender but also race, ability, sexuality, immigrant status and so on, at structural and individual levels, as well as in specific instances of injustice. Recognizing that the transformation of human experience into data reduces complexity and context, any data analysis must begin from the imminent risks of datafication as a new instrument of oppression perpetuating injustice and widening inequality gaps. This is the approach adopted in this paper.

What is the relationship between ordinary users and profit-making corporations in the production process of multi-billion-dollar business models capitalizing on every click on Google search and the like? Since users’ data content yields corporate profit, should users then be termed ‘content creators’, ‘producers’ or ‘owners’ of their own content by default? What would be a just distribution of the wealth generated by digitally produced revenues? I contend that whichever way we attempt to answer such questions, or to raise them in the first place, we need a better understanding of how the problem of indeterminacy works in digitization. When data is used to support policy, and hence to take political, economic and social action for or against individuals or communities, it becomes salient to know who collects the data, as collectors will likely have goals, interests, political colors and biases (gendered and otherwise); how they collect it will leave some e-subjects’ identities and causes over-represented or under-represented. Finally, whose data are collected determines whether e-subjects and their causes are made hypervisible or invisible, singled out or reduced to non-existence (D'Ignazio and Klein 2020).

Big data, AI and indeterminacy: who uses the available data, how and for which purpose?

I now turn to the procedural aspect of indeterminacy: the main ethical problem is that identifying the relationships between the parties involved – the relationships through which moral responsibility, rights and duties can be assigned – is obfuscated by the very nature of the digitization process. The problem of indeterminacy begins where raising plausible ethical inquiries is not possible, in that it is not transparent who uses the data, how and for which purpose it is used and shared with others, and to whose benefit or detriment. The digitization of borders, just as much as the digitization of other human activities, blurs the relationships between people to the point that it is not clear who owes what to whom, leading to a ‘moral chaos’ in which power differentials can be exacerbated, or outright abuses committed – such as discrimination against, hypervisibility of, or invisibility of groups or causes – via the use of data put to specific purposes.

The cases tackled below show that the assessment of data usage is often possible only partially and ex post – when data is already collected – rather than ex ante; hence we find ourselves in a regime of data indeterminacy, which I problematize throughout. The recent scandal involving the private company Cambridge Analytica, for example, clearly showed the danger of private economic interests intertwining with public political interests, such as shaping US national election outcomes by using millions of Facebook users’ personal information (New York Times 2018), or targeting specific groups of people with information or disinformation. Furthermore, Edward Snowden’s leak of US government documents, recounted in his bestselling autobiography Permanent Record (2020), revealed the US government’s surveillance of private citizens and sparked global conversations not merely about the US government’s surveillance programs (conducted in cooperation with some European governments) targeting domestic populations, but also about the moral balance between data protection (especially of private or personal data) and national security concerns. How far should governments go in harvesting, storing and using their own citizens’ digital personal information and, of course, that of non-citizens within or outside the US or other governments’ territories? More specifically, the Snowden affair raises ethical concerns not only about the large-scale, real-time monitoring of the US domestic population’s phone records and internet traffic, but also about the ever-tighter relationship between private and public parties, who jointly produce expanded and intensified surveillance programs. There are at least three significant actors in the data collection process: government agencies (the Snowden case highlights Canadian, American and British ones), private corporations (Apple, Facebook, Google, Microsoft, Skype, Yahoo and YouTube), which release personal data to the former without informing the public, and ordinary users whose data are at stake (Snowden 2020).

The sources of data used in surveillance contexts fall under three categories: directed, automated and volunteered (Kitchin 2014; Lyon 2014). Directed data is gathered when a human being, such as a border guard, obtains one’s passport upon asking the traveler to identify themselves; automated data is gathered without a human operator intervening, such as bank transactions or phone records routinely recorded as they occur; and, finally, data is volunteered in a weak sense when users hand information to social media websites without knowing that they are volunteering their own information to specific third parties, for specific purposes (Trottier 2012). Two distinct issues about the harvesting and analysis of data emerge.

First, there is the question of whether any of us has full information about who collects our data, for which purpose(s), and with which third parties (if any) our data are shared. Second, and relatedly, there is the issue of datasets. These have often been designed with specific functions and for specific purposes, and the sharing of such datasets with different actors for different purposes stands in need of strong justification, especially when, as a matter of political will, data is purposefully not collected at all.

Consider the management of the asylum system in Greece in the context of the recent so-called ‘migration crisis’. Some asylum seekers in Greece were required to pre-register with the asylum system using Skype in order to access protection. According to Damianos (2023), out of almost 9000 calls made by asylum seekers during his fieldwork observations, only nine were answered. This 0.1% response rate meant that almost all rights claimants were unable to make themselves known to the Greek state and to obtain protection, as they could not pass to the second stage of biometric identification, which takes place in the office but only with an appointment arranged via Skype. These asylum seekers thus faced a categorical digital barrier to basic protection in Greece, such as access to hospitals and regular employment, and protection from arbitrary detention. While it is evident that digitization in this case widens the gap between state responsibility and those with legitimate claims, the racial bias of this digital border becomes even more visible in the most recent context of displacement. Ukrainian refugees, unlike those from Sierra Leone, Afghanistan, Congo, Pakistan, Syria, Russia, Gambia, Guinea, Mali, Togo, Cameroon and Iran, could avoid the unresponsive Skype platform: they could not only pre-register in person but also do so through an expedited pathway designed ad hoc for them (Damianos 2023).

When populations are left rightless, either because they are physically stranded in long lines in front of offices or stuck in front of unresponsive digital platforms, questions about data determinacy become even more forceful. This is especially so for those whose data is not collected at all but purposefully invisibilized, and not shared with jointly responsible parties. Indeterminacy prevails as asylum seekers, digitally queuing for a 0.1% chance to talk to a human being via Skype, have no sense of whether they have a place in line, or whether there even is a line they are standing in, as they attempt to register their data (Damianos 2023).

As the physical border dissolves into a flow of information, and the very existence of people’s claims is reduced to the processing of their data, the face-to-face relationship between people is substituted with (unresponsive) digital platforms. Data (and its subsequent processing) becomes easily avoided, as well as easily over-collected. In a strict sense, missing data correspond to rights that have been violated ‘off the record’. These uniquely digital rights violations reveal both the power cloaked in digital disguise and the diffuse structure of multi-party accountability – from corporations such as Microsoft and X (formerly known as Twitter), to humanitarian organizations and state institutions, cooperating on building digital borders that result in distinctively digital types of rights violations. I term this data indeterminacy.

To further explore this problem of digitization and data indeterminacy, I present two cases in which the use of big data and AI algorithmic processes leads toward morally questionable substantive outcomes pertaining to e-subjects: ‘the British travelers’ and the ‘Dutch SyRI algorithm’. I set the substantive outcomes aside in both cases to concentrate on framing the ethical inquiry: who uses the data, for what purpose, and against or for whom? This is the bare minimum that needs to be obvious and explicit for any ethical inquiry to be carried out. I show that the pattern of digitization that holds true in all the cases examined in this section is one in which individuals acting on behalf of state institutions avoid coming directly ‘face to face’ with rights claimants. This is rendered possible by increasing reliance on technological solutions: shunting interactions with migrants toward platforms such as Skype, which can then be ignored, toward a ‘black box’-like algorithm such as SyRI, or toward the unknown data-collection processes of surveillance at the US border. Yet these opaque technological solutions result in the curtailment of rights.

To turn to the first example: in 2012, two British students were denied entry to the US as tourists after one of them tweeted a joke quoting the famous show Family Guy about ‘digging up Marilyn Monroe’ and ‘destroying America’ – reportedly meant to express their intention to have fun, not to threaten anyone. As the tweet ended up in the US authorities’ hands, or rather in their AI-driven border surveillance, it resulted in the students being interrogated for five hours, detained for twelve hours, and then denied entry. Both suffered restrictions of their freedoms; reportedly, the tweeting student closed public access to their Twitter account upon return (Ajana 2015; BBC News 2012; Hollywood Reporter 2012).

Now, suppose that the British students wish to receive reparations for moral and material damage. It is not clear against whom their claim should be filed, as they were singled out by an AI analyzing Twitter without their knowledge. The AI was likely trained to ‘scrape’ users’ data for keywords, and the US border guards were likely trained to take action when the AI called for it; hence they searched the students’ luggage for shovels, which were not found (Ajana 2015; BBC News 2012; Hollywood Reporter 2012). Yet the students’ above-mentioned rights were restricted.
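The precise pipeline the US authorities used has never been disclosed. Purely as an illustration of the kind of literal keyword matching the case suggests, consider the following minimal sketch; the watch-list terms, the function and the example post are invented for this paper and are not a reconstruction of the actual system.

```python
# Hypothetical sketch of naive keyword-based flagging of social media posts.
# This is NOT the undisclosed border-surveillance system; it only illustrates
# how a purely lexical match treats irony and quotation as if they were threats.

FLAGGED_KEYWORDS = {"destroy", "dig up"}  # invented watch-list terms

def flag_post(text: str) -> bool:
    """Return True if any watch-list term appears literally in the post."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_KEYWORDS)

joke_tweet = "Off to dig up Marilyn Monroe and destroy America! (Family Guy joke)"
print(flag_post(joke_tweet))  # True: the joke is flagged exactly like a genuine
                              # threat, because no context or intent is considered.
```

On such a purely lexical rule, nothing distinguishes a quoted sitcom line from an actual plan; whatever the real system looked like, the students’ case indicates that it was similarly blind to irony and slang.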

Suppose, alternatively, that the students, instead of being surveilled without their knowledge, had been asked to sign a form given to them (as to all passengers flying into the US) in which they declare that they have no malicious intentions while in the US. Such questions would have come directly from the American authorities, addressing them as travelers and using language that is clear and written with the utmost care to avoid being equivocal or open to interpretation. Instead, the formal moral relationship between the US border guards and any traveler to whom the US owes responsibilities as a guarantor of rights appears substituted, or mediated, by an opaque and indirect AI-driven technology. The US authorities have entrusted the AI, through its capturing of keywords, with the responsibility of suggesting a specific course of action to the human guards. This casts doubt on the wisdom of taking cues from AI, and it certainly reduces the transparency of defensible action on the part of the border guards. Who else should be held responsible for the AI’s ignorance of irony or slang, for its potential ‘hallucinations’, or for results that are not understandable and immediately discernible by humans? Is it the team of mathematicians and data scientists who designed, trained or supervised the AI to scrape data from Twitter and to build and assemble datasets? Is it Twitter, for permitting access to users’ data without openly and directly informing the subjects concerned that their data is being processed by contractors who work for the US border authorities for the purpose of security? Are Twitter or the US border authorities responsible for informing travelers how much of their data may be used, by whom, shared with whom, for which purposes and with which methods? Is this data subject to limits on usage and storage time, or is it a permanent record? How will it be used next time? Was the AI ‘right’ or ‘wrong’ if it was programmed to find keywords, which it did? Were the human border guard and the investigative team right or wrong in acting as the situation seemed to demand? These ethical inquiries are difficult even to formulate because we do not know ex ante who does what to whom; hence we cannot ask ex ante who owes what to whom. What, then, is the moral problem, if any?

As greater reliance is placed on automated methods using big data (Gandy 1993), computers are not only used for compiling ‘No Fly Lists’ and enforcing border security policies – by profiling potential terrorists, resulting in the denial or postponement of travel plans regardless of whether the information targeting them is sound (Zavrsnik 2019) – but their use can also result in the impairment of other liberty and welfare rights. This type of automation, or algorithmically driven finding, precludes or limits the public’s understanding of how they end up singled out. As Saunders highlights in this symposium, we cannot assume that the data used to inform border control decision-making is only data readily identifiable to us as ‘relevant’ to our migration plans. We often have no choice but to produce data about ourselves in all of our daily activities, and this data too can feed into border control decision-making. And yet these other data-gathering situations are no less subject to the problem of indeterminacy.

Consider a second example, the Dutch government’s recent data scandal. Activists argued that an algorithm known as SyRI, used by some Dutch governmental bodies to detect welfare fraud, was repressive (Algorithm Watch 2023; Misuraca and Van Noordt 2020; Rachovitsa and Niclas 2022). Citizens would be flagged by this algorithm, and a governmental agency would assess whether their case required scrutiny or was a ‘false positive’; citizens would then be contacted to address the situation, without knowing what kind of data had been used to build the accusation against them. Citizens did not know the reasons they suffered consequences, such as loss of their benefits, nor that they were targets of investigation. Dutch civil rights organizations filed a case in 2018 against the Dutch state to stop the use of SyRI, and they won. Among their arguments, in addition to the fact that these complex algorithms work as ‘black boxes’ – blindsiding citizens with decisions based on content of which they are not aware – was that such decisions are further characterized by human error, problematic assumptions and biases. The AI system in question cross-references data of domestic e-subjects about work, fines, penalties, taxes, properties, housing, education, retirement, debts, benefits, allowances, subsidies, permits, exemptions and more, yet it remained largely indeterminate how the algorithm ‘reasons’ and how it processed data to reach specific outcomes. Nevertheless, data showing consumption levels of basic vital services was reportedly considered. For instance, low water consumption raised questions about whether specific persons truly lived at specific addresses and whether they qualified for a specific benefit; yet people who already live on benefits may simply live frugally or have a broken meter. As in the case of the British students, it is morally troublesome that data is extrapolated from one context to another. As we know, the harvesting of much of the data gathered from users is justified as serving users’ needs and improving their experience, but users were not informed that they might lose their state benefits because their data was shared with specific agencies that target them.
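SyRI’s internal model was never made public, and the court proceedings left it largely opaque. Purely to illustrate how a cross-referencing heuristic of the kind reported can misfire, here is a minimal sketch; the record fields, the threshold and the rule itself are invented for this paper and do not describe the actual system.

```python
# Hypothetical illustration of a SyRI-style cross-referencing rule.
# The actual SyRI model was never disclosed; the record fields and the
# 15 m3/year threshold below are invented to show how a frugal household
# or a broken meter gets flagged exactly like a fraudulent registration.

from dataclasses import dataclass

@dataclass
class CitizenRecord:
    registered_address: str
    receives_housing_benefit: bool   # from a benefits registry
    annual_water_use_m3: float       # from a utility dataset built for billing, not fraud detection

def flag_for_investigation(record: CitizenRecord, threshold_m3: float = 15.0) -> bool:
    """Flag benefit recipients whose metered water use falls below a fixed threshold."""
    return record.receives_housing_benefit and record.annual_water_use_m3 < threshold_m3

frugal_household = CitizenRecord("Rotterdam, NL", True, 9.0)
print(flag_for_investigation(frugal_household))  # True: flagged, although low use may
                                                 # simply mean frugality or a defective meter.
```

The point of the sketch is not the specific rule but the extrapolation it embodies: data generated in one context (billing) is silently repurposed in another (fraud detection), and the flagged person has no way of knowing which records produced the suspicion.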

Additionally, it is morally questionable that this secretive algorithm was used in low-income neighborhoods dense with migrant populations, adding to the worry that surveillance targets the most vulnerable within society, just as much as at its borders, without e-subjects knowing that investigations were being carried out against them, or how or why, until late in the process, when they were called to defend themselves. As Maranke Wieringa, investigating algorithmic accountability in the Dutch case, claims: ‘One goal of municipalities using SyRI for specific neighborhoods is to improve their living standards. However, SyRI is not designed for that purpose … you can question whether you should depart from the same repressive position for both goals’ (Algorithm Watch 2023).

The moral link between the parties – say, the state (with its direct responsibilities and rights) and its citizens (with their responsibilities and rights) – generally established through direct face-to-face contact throughout such processes, is replaced or blurred by technological solutions. This places e-subjects in a new form of vulnerability, in which it is not clear who does what to whom ex ante, nor how to claim one’s rights ex post (Kerr and Earle 2013). Indeterminacy at this level is an intrinsic moral wrong because it becomes unclear how to pose moral questions such as ‘Who owes what to whom?’ – a condition which would, for instance, permit neither any type of social or legal contract nor any ethical inquiry. All these instances require that it be determined who the parties in a relationship are, with full information about their status and function, and under which conditions two or more parties undertake obligations to one another, both procedurally and substantively. In the case of SyRI, the technology had been deployed before its legal basis was introduced with the SyRI legislation in 2014 (Misuraca and Van Noordt 2020; Rachovitsa and Niclas 2022).

The Hague District Court ordered the immediate halt of the SyRI algorithm because the legislation did not provide sufficiently transparent and verifiable safeguards against intrusions into the private lives of subjects. Among other things, SyRI violated, by opaque technological means and without providing a tenable special justification, Article 8 of the European Convention on Human Rights (ECHR), which protects the right to private life (Hague District Court Judgement).

The European Digital Services and Digital Markets Acts presently target giant tech companies, mostly US-based, with tighter regulations on the use of their products. This highlights yet another important aspect of indeterminacy. While this legislation may regulate how giant tech companies affect the lives of the citizens and residents of Europe, it leaves American (and other) citizens and residents elsewhere significantly more vulnerable than their European counterparts. Arguably, safeguards are even weaker for migrants in interstate situations, at borders or far beyond them (European Commission 2023A, 2023B, 2023C, 2023D). Even if new legislation sets much stronger safeguards, including against state institutions, the question of whether they are sufficient still stands in an extremely developed and rapidly developing technological landscape. This marks yet another aspect of indeterminacy: the perpetual ‘time gap’ between slow bureaucracies placing safeguards ex post and extremely fast technological development operating ex ante, in regimes where legal safeguards are absent or insufficient.

Smart borders and indeterminacy

Smart borders fulfill their primary function of border control while being simultaneously less visible, if visible at all, because they are de-territorialized, externalized (Sager 2018, 2022) or augmented (Ajana 2015). Certainly, contemporary borders are no longer, as Joseph Carens once put it, physical lines to be crossed, patrolled by border guards armed with guns (Carens 1987). Borders are flows of information, of big data, attributed to specifically identified individuals at any point in time and space, whose identity becomes a ‘data double’ or a digital identity. Personal information, not limited to biometric data, sits in a variety of registries, clouds and databases, in various territories and under the authority of a variety of jurisdictions; yet it is accessible and consultable, often by unknown others and for purposes unknown to the people who are the subjects of the data. The sharing and use of this data threatens to substantively affect people’s lives in ways that are not clear, transparent or evident, and it weakens the procedures by which people act as rights claimants. In light of these shifting borders (Shachar 2020), I have tackled the additional issue of indeterminacy: how it differentially affects border crossings, beyond the traditional divide between citizens and migrants – the latter’s data being targeted and often over-visibilized for the purpose of tightly regulating borders, or out of data craving simpliciter (Lemberg-Pedersen and Haioty 2020). The digitization or datafication of borders changes their nature by increasing the level of control via the effective building of an electronic fortress (Unmüßig and Keller 2012; Zavrsnik 2019), as all articles of this symposium suggest. Technology, however, is not just the ‘means’ by which political and administrative aims are carried out. Some scholars correctly claim that technology creates its own possibilities and limitations (Citron 2008; Dijstelbloem, Meijer, and Michael 2011; Everuss 2021).

Migration policy increasingly consists less of laws and more of ‘high-tech solutions’, and this shift has resulted in massive windfalls for tech and defense companies. The estimated cost the European Commission pays tech companies for the Smart Borders Package is 400 million euro, plus 190 million euro for yearly maintenance. However, researchers examining the costs of previous technologies estimate the real costs to be around fivefold higher, at roughly 2 billion euro. Whether or not such exorbitant costs are politically justified as deterring irregular migration, the undoubted beneficiaries are the contractors building these IT systems (Hayes and Vermeulen 2012). Among the IT systems erecting digital borders or walls in Europe are databases such as EURODAC, SIS II and VIS, with multiple others still in development (Boffey 2018), moving toward a framework of interoperability for biometric data – DNA profiles, finger and palm prints, face scans – in addition to ‘lie detectors’ and other experimental technologies developed with the use of the collected data. While much migration scholarship analyzes the development of these technologies from the perspective of how the nature of borders changes – their location, efficiency, instancy and predictive power, portability and digitization – I have attempted to capture yet another feature: the power differential that is digitally created between those who exercise power and those who are its recipients. Power inequality is especially morally troublesome to the extent to which it is generated within, and further exacerbates, a climate of indeterminacy made possible by digitization. That is, people’s data can be checked not merely at border crossings for the purpose of crossing the border, but at any time and anywhere, for virtually any reason (including erroneous ones), in the EU (and elsewhere) – without e-subjects knowing in a timely manner by whom, when and for which purpose.

Conclusions

Technological solutions such as the SyRI algorithm or the unresponsive Skype platform lead to outcomes of their own that impact communities. Such outcomes may not only fall outside (the scope of) the laws within which they presumably operate, but may also outright violate rights that those laws presumably protect. It is effectively impossible for anyone not to leave digital footprints – that is, not to generate data that are (often automatically) stored and can be used by multiple actors who share them with others. Digital footprints can indicate someone’s exact location at a given time, whom they date, whether they are pregnant or ill, their bank transactions, water consumption, Google searches, to whom and what they write via email, and so on. Given that we have no control over our digital footprints, what level of control ought anyone to be guaranteed against the use of their data by any competing party? This paper contends that data collection should not be carried out in a regime of indeterminacy, in which it is unclear and untraceable who uses which data, for which purpose, leading to which specific traceable and transparent outcomes, and affecting which specific individuals and communities. While I have not addressed the ways in which digitization and technological advancement are or can be beneficial, I have attempted to warn against one main problem, manifesting substantively in the generation, via digital means, of new power differentials. The core point is that power differentials are morally troublesome to the extent to which they are generated and exacerbated in a climate of indeterminacy. The problem of indeterminacy arises, in a nutshell, when it is not clear who does what to whom, and it makes it difficult to discern ethical questions – such as who owes what to whom – about specific technologies, from their very creation to their application. Ethical reasoning presumes that we know who the subjects in question are, how they are related to one another and how they operate. Without such basic transparent information, it remains indeterminate how to sort out what is already largely indeterminate, leading to a paradox of indeterminacy that is detrimental to rights regimes. Data Feminism eloquently begins the study of data with a critical analysis of who holds power and who does not and, treating data as the new oil, assesses who collects and gains from data, for which purpose, how, and who is affected by which interests. This paper adds to this approach, already justified by an intersectional method to data and by attention to the strengths and limitations of the techniques used for data collection, one unifying normative question:

Who owes what to whom in the process of digitization and algorithmic data regulating borders or other human activities?

Technologies regulating human activities must stand ethical scrutiny, especially if they can and do result in (human) rights violations. Unlike oil extracted from the ground, data is de facto extracted from people endowed with agency, autonomy, rights and contexts – all of which ought to be respected and protected.

Acknowledgements

I thank Serena Olsaretti for comments and ample mentorship; Silvia Fierascu for organizing the ‘Women in Data Science’ event, where the first ideas of this paper were presented and commented on; Natasha Saunders and Alex Sager for organizing ‘The Ethics of Border Controls in a Digital Age’ event, where the paper was presented, and for acting as generous guest editors; all participants of these events for their comments; and, not least, the journal co-editors, Eric Palmer and Vandra Harris Agisilaou, for written comments on the final draft.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the European Commission MSCA [grant number 101026134].

Notes on contributors

Georgiana Turculet

Georgiana Turculet’s EU-funded MSCA project JUSMOVE is hosted by the Law Department of Pompeu Fabra University (UPF) in Barcelona and the Big Data Science Laboratory at the West University of Timisoara (WUT). Georgiana’s interdisciplinary research, combining rigorous methodology and tools from ethics and philosophy with data science, investigates the movement of people worldwide. It aims to impact scholarly and public contemporary debates, as well as stakeholders such as United Nations agencies and the European Union. She holds a PhD from Central European University (CEU).

References