
Rule-based versus AI-driven benefits allocation: GDPR and AIA legal implications and challenges for automation in public social security administration

ABSTRACT

This article focuses on the legal implications of the growing reliance on automated systems in public administrations, using the example of social security benefits administration. It specifically addresses the deployment of automated systems for decisions on benefits eligibility within the frameworks of the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA). It compares how these two legal frameworks, each targeting different regulatory objects (personal data versus AI systems) and employing different protective measures, apply to two common system types: rule-based systems utilised for making fully automated decisions on eligibility, and machine learning AI systems utilised for assisting case administrators in their decision-making. It concludes with an assessment of the combined impact that the GDPR and the AIA will have on each of these types of systems, as well as of the differences in how these instruments determine the basic legality of utilising such systems within social security administration.

1. Introduction

There are today plenty of examples of public administrations relying heavily on automated systems to make or support legal decision-making, even where there is a risk that vulnerable groups in society are affected, and where malfunctions or inbuilt biases in automated systems have had detrimental effects at scale.

One sector which in many countries has become a focal area for various automation efforts is social security administration. Done ‘right’, the administration of social security benefits obviously stands to gain from fast and expedient automation (serving both fiscal and individual interests). However, within this sector, several instances demonstrate that the deployment of, and dependence on, automated systems for administering or supporting social benefits have affected the legality and fair distribution of benefits, sparking a crisis in legitimacy. To name some of the most well-known examples, one is found in the Australian so-called Robodebt scheme, which refers to a government system authorised to automatically issue notices to welfare recipients identified as having debts through a process of income averaging. As the system, however, was beset with calculation errors, it led to the automated issuing of faulty debt claims at scale. Before the Robodebt recovery scheme was finally scrapped in 2020, it was found to have issued around 433,000 faulty notices. These findings, amongst others, led to a class action lawsuit and a 1.2 billion AUD settlement.Footnote1 Another example is the so-called SyRI system, a fraud detection system used by the Dutch state to process considerable amounts of public data to identify those individuals most likely to commit benefits fraud. After a group of non-governmental organisations jointly sued the state over SyRI’s non-compliance with privacy rights, the Dutch district court found the authorising legislation to be unlawful as it conflicted with the right to privacy under Article 8 of the European Convention on Human Rights, ECHR.Footnote2 A similar example, also in a Dutch setting, is the so-called child benefits scandal (Toeslagenaffaire). Here, between 2005 and 2019, approximately 26,000 parents were wrongly accused by automated means of fraudulent benefit claims, leading to significant repayments that often amounted to substantial sums and placed families in severe financial distress. This scheme has been characterised by investigators as building on a ‘discriminatory’ and institutionally biased working procedure, and ultimately resulted in the resignation of the government in January 2021 due to violations of fundamental principles of the rule of law revealed during a parliamentary inquiry.Footnote3

These examples show that malfunctions or inbuilt biases that automated systems may carry into public decision-making and benefits allocation can have detrimental effects at scale. While the interest among legal scholars has in recent years gained increasingly strong traction on different themes relating to the legal conditions for, or effects of, automated practices in public administration, there is still a lack of research that examines more closely the legal conditions for utilising technologies in social security benefits administration. Such a broad framing of the subject matter naturally includes various legal considerations, which can manifest in different ways. My analysis is therefore limited in scope. The focus will be European law and which factors in the General Data Protection Regulation,Footnote4 the GDPR, and the upcoming Artificial Intelligence Act,Footnote5 the AIA, activate the applicability of obligations for public social security authorities to ensure the lawful deployment of automated systems that are utilised in benefit administration.Footnote6

The GDPR and the AIA are the two key regulatory frameworks at EU level which intersect with the deployment of automated systems in public case administration. While they share some of their regulatory aims relating to the protection of human and fundamental rights and have points of intersection in the realm of data usage, they are primarily centred around different regulatory objects. As the primary regulatory object of the GDPR is personal data, the regulation primarily establishes conditions for acts of processing such data.Footnote7 As the primary regulatory object of the AIA is AI systems, this regulation, instead, primarily establishes conditions for the design and use of such systems.Footnote8 These differences, when translated into the specific criteria governing applicability and shaping the extent of obligations set forth by each instrument, lead to variations in the impact that they have (both in isolation and combined) on the legal conditions for public social security administrations to utilise technologies in their case administration on benefits. This article will demonstrate these differences through an analysis of their implications for two prevalent types of automated systems that are commonly deployed within social security benefits administration.

As indicated, the article will be structured around two example types of automated systems commonly deployed within social security administrations. Example type A is a so-called rule-based system (meaning that the system operates based on predefined, static rules and criteria) which can make fully automated decisions on benefits eligibility. Example type B is a so-called machine learning AI system (meaning that it is a data-driven model for pattern recognition) that is used to make inferences from the data it processes and guide decision-making administrators on how to decide on eligibility. The basic legal conditions for deploying each of these system types, as laid down through the GDPR and the AIA, will be analysed in the following sections in consecutive order. I will then, lastly, turn to drawing some conclusions on how the differences in their regulatory approaches impact the basic legality of A and B type systems respectively.Footnote9

2. System type A – ‘rule-based’ systems used to make fully automated decisions on benefits eligibility

Systems used for making fully automated decisions on benefits eligibility are typically of a so-called rule-based type, meaning that they operate based on predefined, static rules and criteria which have been coded by humans.Footnote10 For such systems to be able to produce lawful decisions, one key issue is ensuring that the eligibility criteria as well as applicable procedural requirements are translated into code in such a way that they generate full correspondence with the law. A defining characteristic is that systems of type A, due to their static properties, remain strictly confined to their prescribed instructions, and that this extends both to the types of data they handle and to the deductions they make from this data. A practical example of a type A system can be found in the Swedish social security setting and the administration of parental benefits, where the utilisation of a rule-based system allows the Swedish Social Insurance Agency to decide around 65–71 percent of cases fully automatically.Footnote11 This system is programmed to automatically and consecutively check each benefit criterion against a defined set of case evidence to draw conclusions on eligibility and effectuate a decision based on that conclusion, as well as issue a decision and notice to the claimant.
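To illustrate this operational logic, a minimal sketch of a rule-based eligibility check is given below. The criteria, field names and threshold values are purely hypothetical and are not drawn from the Swedish parental benefits scheme or the Agency’s actual system; the sketch only shows how predefined, human-coded rules are checked consecutively against the case evidence.

```python
# Purely illustrative sketch of a rule-based eligibility check. The criteria,
# field names and threshold values are hypothetical and do not reproduce any
# actual benefits scheme or agency system.

from dataclasses import dataclass

@dataclass
class Claim:
    is_insured: bool          # claimant registered with the insurance scheme
    child_age_months: int     # age of the child the benefit relates to
    days_already_used: int    # benefit days already consumed
    income_reported: bool     # income evidence present in the case file

# Each rule is a predefined, static criterion checked in a fixed order.
RULES = [
    ("claimant is insured",     lambda c: c.is_insured),
    ("child is under 8 years",  lambda c: c.child_age_months < 96),
    ("benefit days remain",     lambda c: c.days_already_used < 480),
    ("income evidence on file", lambda c: c.income_reported),
]

def decide(claim: Claim) -> dict:
    """Check every criterion consecutively against the case evidence and
    return a fully automated decision together with the reasons for it."""
    failed = [name for name, rule in RULES if not rule(claim)]
    return {"eligible": not failed, "failed_criteria": failed}

if __name__ == "__main__":
    print(decide(Claim(is_insured=True, child_age_months=30,
                       days_already_used=120, income_reported=True)))
    # -> {'eligible': True, 'failed_criteria': []}
```

The point of the sketch is that every deduction the system can make is fixed in advance by the coded rules; nothing outside the listed criteria and case fields can influence the outcome.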

2.1. Ramifications of Article 22 GDPR for type A systems

When turning to the regulatory frameworks that pertain to systems of type A, a suitable starting point is that their use triggers the application of Article 22 GDPR. This article governs cases where personal data, such as the case evidence in a benefits claim, are processed to make solely automated decisions which produce legal effects concerning the data subject, by imposing certain conditions on such use.Footnote12 There is much discussion around the meaning of Article 22, where one of the debated questions has been whether solely automated decisions are to be interpreted as being prohibited, or whether the article rather regulates a right not to be subject to such decisions which must be invoked by the data subject him- or herself. That the former interpretation is the valid one has been clear since the Court of Justice of the European Union, CJEU, in December 2023 gave its ruling in the C-634/21, OQ versus Land Hessen case.Footnote13 However, even though the CJEU has finally answered this question, and as is relevant for this article, it should be noted that the article does not impose a blanket ban on automated decision-making. The article, namely, contains opening clauses which allow for such decision-making under specified circumstances. For public authorities, such as most social security administrations, the relevant derogation is found in Article 22(2)(b) GDPR, which establishes that solely automated decisions may be made if (a) they are authorised by Union or Member State law to which the controller is subject and (b) that law also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests.Footnote14 Additionally, if the data qualifies as special category data under Article 9 of the GDPR (that is, data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, genetic or biometric data, data concerning health or a natural person’s sex life or sexual orientation), Article 22(4) GDPR lays down a qualified prohibition (which thus applies even if the Article 22(2)(b) derogation applies) against solely automated decision-making based on special category data. However, this additional prohibition also allows for derogations, as relevant in this article if the processing is necessary for reasons of substantial public interest under Article 9(2)(g) GDPR on the basis of Union or Member State law.

The structure of Article 22 is thus rather complicated but contains opening clauses that refer to Union or Member State laws for determining when decisions can be made solely automatically. As public social security administrations generally operate based on obligations laid down in law, and as the specific eligibility criteria of public social security benefits schemes are also laid down in law, these clauses (from a GDPR perspective) therefore seemingly open the gates rather wide for fully automated decision-making as long as there is a legal basis. One basic condition is nevertheless that any specific Union or Member State laws invoked under a GDPR opening clause must align with the human and fundamental rights as laid down in the European Convention on Human Rights,Footnote15 ECHR, and the EU Charter of Fundamental Rights,Footnote16 CFR.Footnote17 As established in CJEU case law, Member States that exercise options granted by a GDPR opening clause must also use their discretion under the conditions and within the limits laid down by the provisions of that regulation, and must therefore legislate in such a way as not to undermine the content and objectives of that regulation.Footnote18 In the recent C-634/21 OQ versus Land Hessen case, the court stressed in particular that Member States may not adopt legislation under Article 22(2)(b) authorising profiling without respecting the requirements of Articles 5 and 6, as interpreted by the case law of the CJEU. The same goes for legislation allowing for automated decision-making based on the processing of special category data under the qualified prohibition in Article 22(4) GDPR.Footnote19 The court thus clearly reinforced that national legislation installed under the Article 22(2)(b) opening clause remains subject to scrutiny under the Article 5 GDPR fundamental principles, such as lawfulness, fairness, transparency, purpose limitation and data minimisation. It also made clear that national legislation cannot disregard the fact that any processing of personal data must satisfy at least one of the legal bases for processing personal data under Article 6 GDPR.Footnote20

Altogether, these limitations on the space for manoeuvre offered by GDPR opening clauses such as Article 22(2)(b) mean, among other things, that proportionality considerations become central to assessing whether national law can provide a basis for GDPR-compliant derogations. A linked question here is what degree of specificity a Union or Member State law must have to allow for an Article 22(2)(b) derogation. Here, Recital 45 clarifies that a specific law for each individual processing is not required and that it is for the Union or Member State law to determine the purposes of processing in such cases.Footnote21 This means that a specific statutory power to process personal data is not required, although the underlying task, function or power must have a clear basis in law. The recital also makes clear that purposes of public health, social protection as well as the management of health care services are considered as being in the public interest (thus clarifying the public interest status of social security in a broad sense). The recital does not refer to Article 22, either directly or by implication. Instead, it addresses opening clauses related to public interests, just like those found in Article 22. It could therefore be inferred that the recital’s clarifications have a bearing on the interpretation of the opening clauses pertaining to ‘public interests’ in Article 22 GDPR.

Article 22 GDPR thus lays down an obligation to establish a legal basis for any public decision-making that is solely automated, while the language and structure of the article still seem to invite discussion about how generously the opening clauses should be interpreted. The same is true for what safeguards must be in place for the derogations from the prohibition on solely automated decision-making to apply. For instance, Sweden has chosen to incorporate a broad provision into its Administrative Procedures Act that covers most public decision-making.Footnote22 This provision simply states that decisions can be made automatically, without specifying further criteria for when such practices are considered lawful. It is worth noting that these automated decisions are subject to general safeguards outlined in the same act, which apply regardless of whether the case is handled through automation or manually. This approach has by the national legislator been viewed as meeting the safeguarding requirements of Article 22 GDPR. Nevertheless, ongoing debates question whether such a generalised provision can justify an exemption under Article 22(2)(b) GDPR, and whether one can be confident that technology-neutral safeguards are adequate to fulfil the safeguarding requirements throughout the various instances of fully automated decision-making that may take place across the public sector.Footnote23 Advocate General Priit Pikamäe’s opinion in C-634/21 OQ versus Land Hessen indicates that there, at the very least, must be alignment between the scope ratione materiae of the national regulation and Article 22 GDPR, meaning that regulations designed for overly broad purposes cannot serve as a legal basis for the adoption of a national legislative measure under Article 22(2)(b).Footnote24 In its judgement, the CJEU did not specifically address this aspect of the Advocate General’s opinion. However, the court confirmed the spirit of this reasoning through its emphasis on the obligation to make a careful legality assessment, which must be able to identify the personal data processing that the legal basis enables, as well as the security measures linked to that processing, in order to enable an assessment of whether the processing meets the requirements laid down in Articles 5, 6 and 9(2)(a) or (g) (by proxy of Article 22(4)).Footnote25 Since such an assessment is only possible if the national legal basis has a sufficient degree of specificity, the judgement emphasises the precision aspect of the legality requirement. While this reasoning does not in itself imply a requirement of explicit links between the national legislation and Article 22 GDPR (or the GDPR as a whole), it stresses that legislation intended to allow exceptions to the article’s prohibition of automated decision-making cannot be designed for overly broad purposes.

Future CJEU case law will likely clarify the perimeters of Article 22 in further detail. Its design, however, makes clear that social security administrations utilising systems of type A must clearly delineate the statutory support that they base their automated decision-making on, although that support need not explicitly reference the GDPR. A general regulation authorising automated decision-making cannot therefore alone justify an exception to the prohibition in Article 22(1) GDPR; the regulation must also (alone or in combination with other regulations) clarify the conditions of the specific personal data processing to be carried out as well as the safeguards provided for. A transition from manual to automated decision-making does, however, not inherently necessitate active legislative action by national lawmakers to address issues of permissibility or specific safeguards. Existing legislation that was not designed with a specific decision-making process in mind may suffice. For public authorities, like social security administrations, operating under Union or Member State law, Article 22 GDPR therefore sets thresholds for the alignment of these regulations rather than providing its own specific criteria.

2.2. GDPR lawfulness of utilising type A systems in benefits administration

Outside of Article 22, the GDPR also applies to the processing of personal data which systems of type A will perform when in use.Footnote26 The basic lawfulness criterion for data processing is found in Article 5(1)(a) GDPR and can only be met if at least one of the legal bases enumerated in Article 6 GDPR is satisfied. Furthermore, if the data qualifies as special category data (that is, data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, genetic or biometric data, data concerning health or a natural person’s sex life or sexual orientation), one of the legal bases enumerated in Article 9 must additionally be met (as the article prohibits the processing of such data unless an exception applies). Again, the fact that social security administration qualifies as a public interest means that the lawful bases for processing personal data that are utilised in automated decision-making procedures are found in those provisions of Articles 6 and 9 GDPR that relate to processing of personal data for public interests. In the case of a type A system, the most relevant lawful bases would be Article 6(1)(e), which allows processing necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, and Article 9(2)(g), which allows for processing of special category data if necessary for reasons of substantial public interest.Footnote27 For both articles, the basis for such processing must be laid down in Union or Member State law (as per Articles 6(3) and 9(2)(g)), serving dual purposes for the legality assessment in the GDPR.Footnote28

At the Member State level, the fact that the processing must have a basis in law means that national lawmakers who wish to utilise a GDPR opening clause must make explicit in the legislation the tasks which may necessitate the processing of personal data.Footnote29 As already touched upon above, such regulations must be legitimate, necessary in a democratic society and proportionate according to the standards set by the ECHR and the CFR, as well as align with the content and objectives of the GDPR as the regulation which contains the opening clause.Footnote30 Through a rather open-ended formulation in Recital 93, stating that laws which delegate tasks to be carried out in the public interest or in the exercise of official authority may deem it necessary to carry out an impact assessment before any processing activities are started, the GDPR also encourages situated proportionality considerations at the legislative level to secure such compliance. Against the background that automated decision-making is often justified by an efficiency rationale, it may also be noted that a lack of resources, according to the CJEU, cannot in any event constitute a legitimate ground justifying interference with the fundamental rights guaranteed by the CFR.Footnote31 Altogether, this limits the discretion for national lawmakers’ utilisation of the opening clauses, while still leaving much room for different legislative approaches to automated decision-making based on personal data.Footnote32

However, it is important to note that compliance with Union or Member State legislation corresponding to one of these opening clauses does not guarantee the lawful processing of personal data. This is because at the applied level, where social security administrations act as data controllers and must determine whether personal data can be processed for specific purposes, such as implementing automated decisions using type A systems, the scope and substance of that legislation also serves as the benchmark for assessing whether the specific processing sought by the controller qualifies as necessary (by the standards of Articles 6(1)(e) and 9(2)(g) GDPR).Footnote33 National social security administrations, when assessing the lawfulness of the specific personal data processing activities resulting from the use of a type A system, would ideally assess the compliance of that processing both at the legislative and the applied level.Footnote34 However, it is more likely that they will typically focus on the latter assessment, where they must determine whether their data processing is necessary in relation to their tasks laid down in law, where the concept of necessity should be narrowly construed in favour of the data subject, and where derogations and limitations in relation to the protection of personal data apply only in so far as is strictly necessary.Footnote35 The necessity assessment at this stage is also to be read in conjunction with the ‘data minimisation’ principle of Article 5(1)(c) GDPR, which emphasises the proportionality principle.Footnote36 This means that social security administrations in their roles as controllers must balance these considerations at the detailed level even if there is legislative support at Union or Member State level.

While securing a legal basis for the processing of personal and special category data is fundamental for the lawful deployment of an automated decision-making system of type A, the GDPR also imposes several additional obligations that social security administrations must attend to when deploying such systems. Not all of these can be elaborated here, but it is worth highlighting those obligations that may be considered particularly relevant in contexts where personal data are processed with the aid of technologies in public benefits administration. I will here prioritise those facets of the GDPR that extend beyond the scope of data processing alone and consider the broader potential impacts that technologies may introduce to the processing.

Even where a legal basis for processing has been established, the Article 5 GDPR principles relating to the processing of personal data circumscribe the administration’s possible use of such data. Particularly, the principles of fairness and transparency, purpose limitation, and data minimisation play a significant role in addressing issues related to discrimination and bias, data excess, and data overuse. These principles are ‘active’ principles in the sense that they must continuously be considered and met. They may also give rise to specific challenges, especially if the automated system is configured to process more and different types of data than would have been considered in a fully manual process. However, a common characteristic of rule-based systems is that they adhere to strictly predefined rules, especially when making fully automated decisions, ensuring that they are not programmed to consider data beyond what is relevant to the specific eligibility determination. Under the assumption that type A systems are generally not configured to consider excessive amounts of data, an (indeed very general and broad) assertion could be made that type A systems typically do not cause added tension, at least with the principles of data minimisation and purpose limitation. However, there will be reason to return to these principles further on.

The GDPR obligations framework is built around a risk-based approach. Public social security administrations operating type A systems act as controllers under the GDPR and as such, they are mandated to assess the risks associated with specific data processing activities and take appropriate measures to protect individuals’ rights and freedoms.Footnote37 They must therefore implement appropriate technical and organisational measures to ensure a level of security for the data which is appropriate to the risk, Article 32 GDPR. This requirement has been made relative to ‘the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons’, highlighting its dual focus on risk assessment and goal attainment. Given that it includes a responsibility to have in place a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing, the requirement assumes a fairly comprehensive approach in which data protection and technological affordances must be considered jointly.

In their capacity as controllers, social security administrations considering deploying a type A system must typically also perform an impact assessment under Article 35 GDPR. This article calls for such an assessment to be made where a type of processing, in particular using new technologies, is likely to result in a high risk to the rights and freedoms of individuals. The assessment should be made considering the nature, scope, context and purposes of the processing, and it must be made before the system is put into use.Footnote38 Article 35 here, as indicated also in Recital 36, prescribes a two-step assessment process. The initial step requires the controller to determine if the intended use of the system, concerning its personal data-related operations, triggers the application of Article 35. At this stage, the key question is whether the processing carries significant risks. It should therefore be noted that Article 35(3)(a) holds that a data protection impact assessment, DPIA, shall in particular be required in cases of a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person. Similarly, the Article 29 Working Party guidelines on DPIAs (as endorsed by the European Data Protection Board, EDPB) have identified that automated decision-making with legal or similarly significant consequences serves as an indicator that the processing is likely to qualify as high risk.Footnote39 Social security administrations considering deploying type A systems are thus likely obligated to carry out a DPIA. As will be elaborated, however, there are some possible exemptions to this obligation.Footnote40

A DPIA can cover a single data processing operation but can also address multiple similar processing activities with high-level risks. As indicated in Recital 92 GDPR, this might be the case where public authorities or entities aim to create a shared application or processing platform. One DPIA can therefore cover processing activities with common characteristics, aiming to systematically evaluate situations posing significant risks to individual rights and freedoms, rendering a new assessment unnecessary in situations where similar technology is used to collect the same data for identical purposes.Footnote41 Or, in other words, a DPIA may be omitted if the processing closely resembles processing already covered by a previous DPIA. A DPIA is also not required if the national competent supervisory authority has utilised the Article 35(5) option to establish and make public a list of the kinds of processing operations for which no data protection impact assessment is required. This exemption, however, applies only if the processing strictly adheres to the specified procedure in the list and continues to meet all GDPR requirements.Footnote42 Another possible exemption from the DPIA obligation is found in Article 35(10) GDPR, which acknowledges that data processing activities which take place based on Union or Member State law might have already been subject to prior impact assessments in the context of the adoption of that legal basis, and that this circumstance might render a DPIA superfluous. As the Article 35(10) derogation refers to situations where such laws regulate ‘the specific processing operation’, however, this exception applies only in those cases where there is a specific legal basis targeting the processing performed by the type A system (so that the impact assessment performed during the legislative phase may have covered the more specific risks of system use).Footnote43 This means that social security administrations cannot escape performing a DPIA by relying on the assessment made at the legislative level unless there is close alignment between the regulation’s purpose(s) and the specific processing that they are to perform. If the legislation governing type A systems is not explicitly tailored for that particular application, then a DPIA must be performed before implementing such a system.Footnote44 Even though these possible exemptions to the DPIA obligation have a design which is fairly accommodating towards personal data processing in the law-regulated public interest sphere, they all build on the presumption that a prior (but transferable) impact assessment has considered the types of risks also associated with the new deployment. One could make the argument that a type A system, while sharing many common risks with other systems used in public decision-making, also presents distinct risks related to the specific benefit it automates. This consideration aligns with the EDPB’s viewpoint, emphasising the importance of conducting a DPIA when introducing a new data processing technology. Therefore, the assertion that such an introduction typically necessitates a DPIA remains valid. The EDPB’s position is also that when there is uncertainty about whether an obligation to conduct a DPIA applies, it should be carried out as a precaution.Footnote45

The proactive DPIA evaluations are, furthermore, closely linked to the Article 25 GDPR principle of data protection by design and by default, as this principle centres around the idea that data protection compliance might best be served if protective strategies are established and integrated into technical and organisational measures.Footnote46 It holds that the controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. This obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. Importantly, this duty is applicable prior to the initiation of processing practices. Consequently, the GDPR imposes specific requirements on systems of type A, emphasising the imperative for privacy-conscious design and the safeguarding of personal data throughout the entire processing lifecycle.

The main objective of the GDPR is to protect personal data as a proxy for protecting primarily the right to data protection, as laid down in Article 8 CFR and as included in Article 8 ECHR. The regulation is therefore not primarily geared towards regulating risks relating to automated systems which utilise personal data to make or support decision-making (in social security administration or elsewhere). In other words, and as eloquently put by Bygrave, the rules of data protection law do not engage directly with the processes involved in creating models, algorithms and other elements of inferential architecture.Footnote47 As seen in the design and objectives of Articles 25, 32 and 35, however, the GDPR requires proactive approaches to data protection. Especially through the principles of purpose limitation and data minimisation, the GDPR also signals a concern to ensure that data controllers duly reflect on the nature of the problems/tasks for which they process data, and on the quality (relevance, validity etcetera) of the data they process to address those problems/tasks.Footnote48 While these principles do not have collective interests as their guiding principle, they can contribute to a more careful selection and management process regarding personal data, which may impact the GDPR legality assessment of a type A system. Essentially, the GDPR thus implies a consideration of collective risks stemming from automated processing procedures. Proactive protection methods demand a more inclusive, anticipatory approach to risk management, although the main focus remains on safeguarding the personal data as such.

However, another aspect of the GDPR is that it places responsibilities primarily on controllers and processors. This implies that when the controller has not been involved in the development of the system, the GDPR will not regulate it comprehensively. Instead, it will apply separately to each controller based on the processing they carry out, meaning that also those obligations in the GDPR that have a more holistic and prognostic element to them, such as the obligations of making an impact assessment or of designing systems by the principles of data protection by design and by default, may be implemented in a way that is fragmented in relation to the final use and implementational setting of the system. So, when a public social security administration opts to purchase a type A system from a private actor, this means that the GDPR compliance responsibility in the development phase lies with that private actor. It may, however, be noted that Recital 78 stresses that if public authorities (such as, here, public social security administrations) are procuring systems, they should consider the principles of data protection by design and by default.Footnote49 This implies that public authorities retain an overall responsibility for ensuring that data protection principles are backed and upheld even when the immediate responsibility rests upon another (private) actor. It should, nevertheless, also be stressed that systems such as those of type A, which are intended for deployment within highly specific as well as regulated areas like social security administration and decision-making, are likely to have at least partially been developed or customised by the responsible authority. In such cases, the GDPR will have more leverage over the development phase of such systems in relation to what the systems will ultimately be used for.

2.3. Applicability of the AIA for type A systems

What, then, about the AIA for type A systems? Rule-based systems might rely on AI technologies to process information, meaning that they are not a mutually exclusive category in relation to AI systems. However, it seems likely that most rule-based systems coded to make fully automated benefit decisions consist completely, or at least predominantly, of predefined rules and logic.Footnote50 The AIA definition of AI aligns with the revised approach and definition proposed by the OECD in November 2023, which covers machine-based systems that, for explicit or implicit objectives, infer, from the input they receive, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The Council’s press statement (as an example) attributes this alignment to the clarity provided by the OECD definition in distinguishing AI from simpler software systems, marking a wish to avoid the inclusion of less advanced systems that have been deployed for a long time.Footnote51 Under the AIA, the determination of whether a rule-based system meets the AI definition will clearly necessitate a detailed assessment of the specific system’s technical configuration. For the purposes of this article, however, it seems likely that type A systems will not trigger the application of the AIA, and the following analysis will proceed on that presumption. Consequently, the regulation’s scope will not be further discussed in relation to type A systems.

2.4. Summary remarks

In summary, the utilisation of systems of type A in social security benefits administration triggers the applicability of the GDPR but is not likely to trigger the AIA. In the GDPR, the public interest nature of the task opens up its provisions to a rather large portion of national discretion by allowing for the installation of specific national provisions which can qualify the processing as lawful under the general provisions outlined in Articles 6 and 9, as well as qualify solely automated decision-making as lawful under Article 22. This approach allows for national flexibility through alignment with national legal frameworks on social security, and thus recognises the often context-specific considerations involved in the administration of social security benefits. Framed from another perspective, however, the approach also allows for the persistence of regulatory diversity across Member States. This means that the harmonising influence of the GDPR on the permissibility of, and safeguards for, automated decision-making may not be as strong within the field of social security administration. The GDPR impact for those administrations shifting from manual to automated decision-making procedures with the aid of type A systems, in terms of the basic legality of such practices, is thus fairly low as long as there are eligibility conditions laid down in law (enabling an assessment of the necessity of the processing) and as long as automated decision-making is a lawful practice.

3. System type B – machine learning AI systems used to support manual decisions on benefits eligibility

As indicated in the introduction, it is also possible to discern a trend of increasing curiosity and innovation regarding the utilisation of machine learning AI systems within the social security realm, here called type B systems. While these types of systems share some merits and challenges with the rule-based systems of type A, their quintessentially different operational logics also set them apart in terms of their aptness to mimic legal reasoning and successfully execute lawful decisions. Machine learning systems draw conclusions based on statistical data and can adapt their logic to improve accuracy. Instead of premising on the manual translation of rules into code, machine learning systems identify patterns which they learn from data such as, for example, past decisions, judgments, or case evidence. Machine learning AI systems thus function by making statistical inferences rather than operating on subsumption logics.Footnote52 It should be pointed out that ‘AI’ or ‘machine learning’ is not one monolithic block, but may include a diverse range of techniques, algorithms, and approaches.Footnote53 Their inherent opacity, and a functional logic that contrasts with legal reasoning at both an ontological and epistemological level, however, generally make machine learning AI systems riskier to utilise in contexts where regulations are to be applied, as it is difficult to ensure that the system does not take account of circumstances that go beyond what is legally relevant, and thus also difficult to ensure that the system makes judgements that align with legal reasoning and legal criteria. Consequently, in public decision-making practices such as social security benefits allocation, which are regulated by law as well as meant to ensure lawful exercises of power, machine learning AI systems are more often used to assist administrative tasks rather than to make fully automated decisions.Footnote54

One example of a type B system can be found in the US setting, where the so-called Insight system helps administrators analyse draft decisions on eligibility by identifying and directing their attention towards flagged potential quality issues in the draft. Insight applies natural language processing, an AI technology, to extract information from a written decision and combines it with structured data from workload systems, applying both rule-based and probabilistic machine learning algorithms; it is thus based on a combination of rule-based and machine learning technologies.Footnote55 Another example can be taken from the Swedish social security context, where a machine learning AI system called SAMU, which is also based on natural language processing technologies, is deployed to help administrators direct their attention towards passages in medical certificates which are relevant for assessing claimants’ work ability based on criteria related to their functioning, disability, and health (which are determining factors for sickness or activity compensation benefits).Footnote56
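Neither Insight nor SAMU is publicly documented in implementation detail, so the following is only a minimal, purely illustrative sketch of the general kind of functionality described above: a supervised text classifier that flags certificate passages as potentially relevant to a work-ability assessment so that a human administrator’s attention can be directed towards them. The training sentences, labels and model choice are invented assumptions, not a reconstruction of either system.

```python
# Illustrative sketch only: a supervised classifier that flags certificate
# sentences likely to be relevant for a work-ability assessment, so that a
# human administrator's attention can be directed to them. It is NOT based
# on the actual Insight or SAMU systems; training data here are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training sentences (1 = relevant to work ability).
train_sentences = [
    "The patient cannot sit for more than 20 minutes without severe pain.",
    "Lifting above 5 kg is not possible due to the shoulder injury.",
    "The patient attended the appointment accompanied by a relative.",
    "Next follow-up visit is scheduled in six weeks.",
    "Concentration is markedly reduced and sustained work is not feasible.",
    "The clinic's address has changed since the previous certificate.",
]
train_labels = [1, 1, 0, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_sentences, train_labels)          # training phase

def flag_relevant(certificate_sentences):
    """Operational phase: return the sentences the model predicts as
    relevant, for a case administrator to review manually."""
    predictions = model.predict(certificate_sentences)
    return [s for s, p in zip(certificate_sentences, predictions) if p == 1]

if __name__ == "__main__":
    new_certificate = [
        "The patient reports being unable to stand for longer periods.",
        "The certificate was issued at the request of the patient.",
    ]
    print(flag_relevant(new_certificate))
```

The sketch also illustrates the two-phase, data-driven operation discussed below: the classifier is first fitted on labelled historical data (the training phase) and thereafter applied to new certificates (the operational phase), with the final assessment left to the human administrator.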

3.1. Ramifications of Article 22 GDPR for type B systems

As systems of type B do not make any decisions, they do not trigger the application of Article 22 GDPR. It is, however, worth noting that Article 22’s concept of ‘decision’ is not confined to the strictly public sphere, which raises the question of whether positions taken by an automated system that do not qualify as a final decision in the sense of administrative law can also constitute a decision under the article.Footnote57 It is also worth noting that when the elements of human supervision over an otherwise fully automated process are minimal – such as if a machine learning AI system generates a preliminary decision that a human routinely approves and implements – it may still qualify as solely automated within the meaning of Article 22.Footnote58 The GDPR thus expresses a functional rather than strictly technical view of solely automated decision-making. Even so, the types of tasks that our example systems of type B perform are unlikely to trigger the application of Article 22. This is because the involvement of human administrators that is required to assess and balance the information flagged or recommendations made by the system is quite substantial, meaning that the system’s engagement with the final decision-making is likely too indirect to trigger the article.

3.2. GDPR lawfulness of utilising Type B systems in benefits administration

Just as in the case of type A systems, the general GDPR provisions that govern the processing of personal data apply to type B systems. As these latter systems rely on machine learning technologies, their operations and functionality are data-driven in a two-pronged sense. First, their functionality hinges on a training phase, during which the system utilises data to learn and adapt its operational logic.Footnote59 Second, their functionality hinges on an operational phase, in which they utilise their learned insights to make real-time predictions based on new data inputs. The data utilised in each of these phases might be personal data, and the legal conditions for processing these data might differ depending on the phase during which the processing is carried out.

In the training phase, type B systems typically require training with data that meets the criteria of personal data to effectively evaluate similar data during the operational phase.Footnote60 Other aspects are that a considerable amount of training data is typically crucial, and that there might be a need for bias-conscious data selection practices to avoid the systems replicating biases in the data. This raises important questions about whether and how the GDPR addresses these likely features of type B systems. Here it may, initially, be noted that the GDPR does not contain any specific provisions for training data. In the determination of a legal basis for processing training data which qualify as personal data, the general provisions in Articles 5, 6 and 9 GDPR will therefore be the starting point. The specific legal basis for such processing may also depend on whether the system is developed by a private entity or a public authority, such as social security administrations themselves. If the system is developed by a private company, the options may include relying on legal bases such as legitimate interests or consent, which are applicable in the private sector (as opposed to the public sector).Footnote61 Since the primary emphasis of this article revolves around the fundamental legality of public social security administrations using type B systems, I will refrain from delving into the potential legal foundations for private entities. Also, it is worth noting that even in those cases where social security administrations purchase ‘pre-trained’ type B systems, they may often need to train them further themselves to fine-tune the system’s functionality for the specific domain use. So, in cases where the type B system is either fully developed or further trained by public social security administrations themselves, the legality of processing personal data during the training phase should be evaluated in accordance with Article 6(1)(e) for personal data and Article 9(2)(g) for special category data. And, as already noted, both these bases contain opening clauses which refer to a further legal basis in either Union or Member State law.

As the training data often consist at least partially of data collected from previous cases, their processing gives rise to questions about compliance with the GDPR purpose limitation principle, which requires that data should not be further processed in a manner that is incompatible with the initial specified, explicit and legitimate purposes they were collected for, Article 5(1)(b).Footnote62 However, the tension arising from the purpose limitation principle is mitigated in those cases where the further processing of data aligns with Union or Member State laws, as Article 6(4) makes it clear that such processing does not conflict with the principle. This reduces the principle’s impact within public sector applications, such as those in social security administration.Footnote63 There must, however, exist a clear legal mandate that requires or allows for the new processing. In other cases, the new purposes for processing must pass the test of being compatible with the initial purposes.Footnote64 Furthermore, the principle of fairness in Article 5(1)(a) GDPR also strives to combat discriminatory practices, thereby indirectly mandating a thorough examination of the training data for discriminatory biases that could potentially harm data subjects during its processing.Footnote65 However, the precise ramifications of the fairness principle concerning bias remain uncertain, as both the GDPR and CJEU case law lack specific guidance in this regard. Notably, the EDPB in its binding decision on the dispute submitted by the Irish SA regarding TikTok Technology Limited has recently stressed that the GDPR fairness principle should be construed as an independent ground of possible GDPR infringement.Footnote66 By stressing that fairness should also protect against processing practices that are detrimental and discriminatory to the data subject, the EDPB construed fairness as a substantive principle that extends beyond mere informational fairness. In this broader context, the ramifications of fairness are not limited solely to transparency (which demands that data subjects are not deceived or misled about the processing of their data).Footnote67 When construed in this substantive way, the GDPR, by proxy of the fairness principle, holds governing potential in relation to bias and the design of automated systems, and places a general obligation on social security administrations to ensure equitable treatment through their data practices.

Additionally, the sheer volume of training data commonly used, and its alignment with the data minimisation principle in Article 5(1)(c), is another crucial consideration.Footnote68 One GDPR challenge here is that detecting and correcting bias in training data often presupposes the processing of data which qualifies as special category data under Article 9 GDPR, and which therefore is subject to a presumption of prohibition of processing unless (as noted to be of relevance in this article) there is a basis in Union or Member State law that allows for a derogation to be made. According to van Bekkum and Zuiderveen Borgesius, no national lawmaker in the EU, nor the EU itself, has yet adopted a specific law that enables the use of special category data for auditing AI systems.Footnote69 However, since public social security administrations engage in activities based on legal mandates, and as the GDPR does not mandate explicit references to Union or Member State laws for them to qualify under an opening clause, a pertinent question arises: can a statutory obligation to ensure the legal and efficient administration of benefits, or an obligation to meet information security requirements, serve as a (Member State level) legal basis for processing training data within the GDPR? As an example, the Swedish standpoint has been that testing activities are typically seen as an essential administrative measure required to facilitate the fulfilment of an authority’s statutory duties, and that therefore no explicit mandate for testing activities is deemed necessary.Footnote70 Given that the latter type of interpretation would relax the GDPR’s impact on training data utilisation in regulated public sector settings, and recalling the discussion in section 2.1 on the CJEU’s emphasis that national regulations must align with the fundamental data protection principles in Article 5 GDPR as well as with the legal bases for processing in Article 6 GDPR, the national regulatory mandate would at least need to be sufficiently precise to determine what personal data processing is authorised, for what reasons and with what safeguards in place. Clarifications in future case law would be valuable here.

Also, given these typical features, and recognising that both the purpose limitation and data minimisation principles should be interpreted narrowly for special category data, it is worth noting that Article 10(5) AIA will introduce a specific authorisation for processing special category data where strictly necessary for the purposes of bias detection and correction in high-risk AI systems. A condition for such processing is that the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data. This authorisation will also come with obligations to secure safeguards, including technical limitations and state-of-the-art security and privacy-preserving measures, such as pseudonymisation or encryption. This provision thus denotes an attempt to address the paradox that arises from the fact that high-risk AI systems might need to process vast amounts of special category data to function properly and fairly, and that the processing of such data thus might be needed in order to protect that same data or other future data.Footnote71 With the entry into force of the AIA, the legal landscape for GDPR-compliant training of AI systems such as type B systems thus seems to improve. However, until a precise interpretation of the term ‘strictly necessary’ is established in this context, the AIA provision also poses difficulties for social security administrations aiming to ground their processing of special category data on this provision.Footnote72

Following strategies for ensuring access to vast amounts of data, as well as strategies to implement the data protection by design and by default principles under Article 25 GDPR, it seems that it has also become more common to generate and utilise so-called synthetic data during the training phase. The Swedish Social Insurance Agency, for example, has used a combination of personal and synthetic data to train the SAMU system mentioned in the introduction to this section.Footnote73 Synthetic data refers to artificially generated data that mimics the characteristics of real data but does not directly correspond to any specific individual’s personal information. Since synthetic data are not derived from actual individuals and do not contain real personal data, it might be argued that they fall outside the scope of the GDPR. This, however, depends on whether the data in combination with other data might allow for identification through inference.Footnote74 As discussed by, amongst others, Bygrave, the GDPR, as it only applies to personal data, leaves aggregate or group data that cannot be readily linked to a particular identifiable individual outside of its ambit.Footnote75 This circumstance weakens the potential for the GDPR to protect collective entities from collective risks. As Bygrave also points out, however, the GDPR definition of ‘personal data’ is expansive, with the main emphasis placed on ‘identifiability’, where case law for example has made clear that combinations of data sets can render the data identifiable even in cases where the controller is not in control of all the data needed to achieve identification.Footnote76 Emphasis is thus placed on the technical possibility to identify someone through the combination of data rather than on whether it is likely that such efforts will be made.
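As a purely illustrative sketch of the general idea (not of the SAMU training set-up specifically), synthetic records can be generated by sampling from distributions meant to reflect aggregate characteristics of real data, so that no record corresponds to an actual individual. The field names and distribution parameters below are invented assumptions; whether such data escape the GDPR still depends on the identifiability analysis discussed above.

```python
# Purely illustrative sketch of generating synthetic training records that
# mimic aggregate statistical characteristics (here: hard-coded, hypothetical
# means and proportions) without corresponding to any real individual.
# Re-identification risk must still be assessed before treating such data
# as falling outside the GDPR's scope.

import random

def generate_synthetic_claims(n: int, seed: int = 42) -> list:
    rng = random.Random(seed)
    records = []
    for _ in range(n):
        records.append({
            # Values drawn from assumed population-level distributions,
            # not copied from real case files.
            "age": int(rng.gauss(mu=41, sigma=11)),
            "sick_days_last_year": max(0, int(rng.gauss(mu=25, sigma=15))),
            "has_part_time_work": rng.random() < 0.35,
            "certificate_mentions_pain": rng.random() < 0.60,
        })
    return records

if __name__ == "__main__":
    for record in generate_synthetic_claims(3):
        print(record)
```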

Hence, as previously mentioned, the training phase of a type B system typically demands substantial data, setting it apart from type A systems in this regard. However, when focusing on the system’s operational phase, it is less certain that the system will require the processing of more or different data compared to a type A system, or that it presents privacy issues that are truly distinctive. Take, for instance, systems like Insight and SAMU mentioned earlier, which analyse decision drafts or the content of medical certificates which serve as evidence in individual cases. These systems do not necessarily require more input data to process than what is available for each specific case. Consequently, the considerations that social security administrations need to make when assessing the conditions for personal data processing in the operational phase resemble those required for a type A system, as discussed in the preceding section.Footnote77 From a GDPR perspective, automated processing occurs regardless of the specific technology or technologies on which the system is constructed, and regardless of whether the data are processed as part of an automated decision-making procedure or for other reasons.

3.3. Applicability and ramifications of the AIA for type B systems

Unlike type A systems, type B systems, being based on machine-learning technologies, will trigger the application of the AIA. Any system based on machine learning technologies will, namely, qualify as AI under Article 3(1) AIA. That the AIA applies, however, does not mean that the full force of the regulation's obligations will apply to a specific system.

The AIA is built around an even more pronounced risk-based approach than the GDPR, where the strictness of the regulatory regime increases with the level of risk that the AI system is perceived to pose. The risk classification to which systems of type B are allocated therefore greatly affects the scope of the requirements imposed by the AIA on such systems. Of interest here is that Annex III(5)(a) AIA qualifies as high-risk those AI systems intended to be used by public authorities, or on behalf of public authorities, to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services. Recital 37 of the proposal recognises the power imbalance at play when public authorities deploy AI systems in their benefits and services. It states that natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services, and in a vulnerable position in relation to the responsible authorities. It adds that AI systems which are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, including whether beneficiaries are legitimately entitled to such benefits or services, may have a significant impact on persons' livelihoods and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. The recital thus explicates the justification for the high-risk classification in Annex III(5)(a), as well as referencing some of the fundamental rights listed in the CFR.Footnote78

The delineation that Annex III(5)(a) offers of high-risk applications concerning the allocation of benefits invites both some conclusions and some questions. The wording of the provision makes clear that there must be a connection between the AI system's functions and the actual evaluation of benefit entitlement for the system to be classified as high-risk. It also makes clear that the high-risk classification extends beyond cases where AI systems are deployed to make fully automated eligibility decisions. While this clarification sheds some light, it also raises questions about the degree of proximity required for this connection and about the future need for further clarification of the precise scope of Annex III(5)(a). All in all, however, the definition is fairly broad, meaning that most AI-performed tasks in social security administration which involve assessing aspects of, or deciding on, benefits eligibility are likely to qualify the system use as high-risk.

Returning to the initially mentioned examples, where the so-called Insight system flags potential quality issues in draft decisions, this system might not operate closely enough intertwined with the eligibility assessment and decision-making for it to qualify as high-risk.Footnote79 Considering the other mentioned example, the Swedish SAMU system, which assists case administrators in interpreting medical certificates in relation to eligibility criteria by directing their attention towards passages in the certificates which are likely to be relevant to the assessment, this system's functionality is more intertwined with the eligibility assessment. Nevertheless, it remains uncertain whether this intertwinement is strong enough to fall under the Annex III(5)(a) definition. For the purposes of this article, however, I will proceed on the assumption that most type B systems utilised in the case administration of social security benefits claims which contain elements of assessment relating to eligibility criteria will qualify as high-risk.

As established in Article 8 AIA, high-risk systems must comply with several requirements. For a system of type B, a risk management system must thus be established, implemented, documented and maintained. As type B systems make use of techniques involving the training of models with data, Article 10 AIA furthermore requires them to be developed on the basis of training, validation and testing data sets that meet certain quality criteria. These include, amongst others, appropriate data governance and management practices to ensure the use of relevant, representative, complete and error-free data sets for the training phase of the system. It should also be noted that ‘data’ is here not confined to personal data, but includes non-personal data as well as both real factual data and synthetic data. The AIA thus addresses the question of biased data in both the training and operational phases more directly than the GDPR does.
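Purely by way of illustration, some of these quality criteria could be operationalised through automated checks before the data are divided into training, validation and testing sets. The Python sketch below is hypothetical: the field names, the thresholds and the simple notion of representativeness used are assumptions made for the example, not criteria defined by Article 10 AIA.

```python
import random
from collections import Counter

def check_completeness(rows, required_fields):
    """Flag records with missing values in required fields (a simple 'error-free' check)."""
    return [r for r in rows if any(r.get(f) in (None, "") for f in required_fields)]

def check_representativeness(rows, attribute, minimum_share=0.05):
    """Flag groups whose share of the data falls below an illustrative minimum threshold."""
    counts = Counter(r[attribute] for r in rows)
    total = len(rows)
    return {g: c / total for g, c in counts.items() if c / total < minimum_share}

def split(rows, train=0.7, validation=0.15, seed=0):
    """Shuffle and split the data into training, validation and testing sets."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * validation)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Hypothetical records drawn from past benefits cases (illustrative only).
rows = [{"age_band": "30-39", "region": "north", "granted": 1},
        {"age_band": "40-49", "region": "south", "granted": 0},
        {"age_band": "30-39", "region": "south", "granted": 1},
        {"age_band": "60-69", "region": "north", "granted": 0}]

incomplete = check_completeness(rows, ["age_band", "region", "granted"])
underrepresented = check_representativeness(rows, "region")
training_set, validation_set, testing_set = split(rows)

print(f"incomplete records: {len(incomplete)}, underrepresented groups: {underrepresented}")
print(f"split sizes: {len(training_set)}/{len(validation_set)}/{len(testing_set)}")
```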

The high-risk classification of type B systems also comes with rather extensive obligations for the provider of the system to supply technical documentation (Article 11) and ensure record keeping (Article 12). Further provider obligations relate to transparency requirements, including an obligation to supply comprehensible instructions for use (Article 13), an obligation to ensure that the system is technically equipped to allow for human oversight (Article 14), as well as obligations to ensure that the system performs with an appropriate level of accuracy, robustness and cybersecurity (Article 15). Furthermore, there is an additional requirement to establish a quality management system to ensure compliance, with Articles 16–17 outlining the specifics. It may also be noted that Recital 54 states that public authorities using high-risk AI systems for their own purposes have the option to adopt and implement these quality management rules at a national or regional level, thus allowing for some flexibility to consider the unique characteristics of their sector as well as the competencies and organisation of the authority in question. When social security administrations act as deployers of type B systems, they must also, before putting the system into use, perform a fundamental rights impact assessment. This involves a thorough examination which encompasses defining the system's purpose and scope, identifying affected individuals and groups, ensuring compliance with relevant fundamental rights laws, evaluating foreseeable impacts, assessing risks to marginalised or vulnerable groups, considering environmental consequences, and formulating a detailed plan for mitigating identified harms. Additionally, this process mandates the establishment of a governance system, which may include elements such as human oversight, complaint-handling, and redress mechanisms.Footnote80
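As an illustration of how such an assessment might be recorded in a structured and auditable form, the following hypothetical Python sketch mirrors the elements listed above; the field names and sample values are a paraphrase made purely for the example, not terms defined by the AIA.

```python
from dataclasses import dataclass, field

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative record of the assessment elements listed above; field names
    are a paraphrase for the example, not AIA-defined terms."""
    system_purpose_and_scope: str
    affected_individuals_and_groups: list[str]
    relevant_fundamental_rights_law: list[str]
    foreseeable_impacts: list[str]
    risks_to_vulnerable_groups: list[str]
    environmental_consequences: list[str]
    mitigation_plan: str
    governance_measures: list[str] = field(default_factory=list)  # oversight, complaints, redress

# Hypothetical example entry for a decision-support system of type B.
assessment = FundamentalRightsImpactAssessment(
    system_purpose_and_scope="Assist administrators in reading medical certificates",
    affected_individuals_and_groups=["sickness benefit claimants"],
    relevant_fundamental_rights_law=["CFR Articles 1, 21, 34, 47"],
    foreseeable_impacts=["over-reliance on highlighted passages"],
    risks_to_vulnerable_groups=["claimants with atypical medical histories"],
    environmental_consequences=["energy use of model retraining"],
    mitigation_plan="Mandatory human review and periodic output audits",
    governance_measures=["human oversight", "complaint handling", "redress mechanism"],
)
print(assessment.system_purpose_and_scope)
```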

An in-depth analysis of the above-mentioned obligations is not expedient here, but a few notes can be made on this compliance framework to clarify its regulatory design. First, it should be noted that these obligations revolve around the design and implementation or use of the system. Second, in contrast with the GDPR, which in addition to the important and previously discussed opening clauses of Articles 6, 9 and 22 also contains numerous other opening clauses allowing Member States to shape their data protection regimes to cater to public sector interests, the AIA contains no equivalent opening clauses. While the AIA does pursue a number of overriding reasons of public interest, as stated in Recital 1 AIA, the explanatory memorandum of the Commission proposal explains that, when it comes to the possible utilisation of AI systems for public interest uses, the regulation aims to ensure a level playing field between public and private actors.Footnote81 Consequently, the AIA does not permit the same degree of divergence, at least in principle, through Union or Member State laws as the GDPR does. Notably, this distinction is most pronounced in scenarios involving AI systems for public sector applications, such as for social security administration purposes.

3.4. Summary remarks

As mentioned, the GDPR allocates responsibilities between ‘controllers’ and ‘processors’ of personal data, which means that the obligations outlined in the regulation are tied to the usage of personal data itself. One notable effect of this distribution is that, at the time the system is used for data processing, it is immaterial whether the controller or processor developed the automated system themselves. As the AIA, instead, distributes its obligations between ‘providers’ and ‘deployers’ of AI systems, where the lion's share of obligations applies to providers, its obligations are tied to the operational aspects of the systems rather than to the data. As put by Jacobs and Simon, the AIA's approach of assigning obligations to fixed addressees means that it circumvents the necessity to engage with the possibly ambiguous setup of competencies and capabilities of the actors involved in developing, deploying, and operating AI systems.Footnote82

For public administrations, like social security administrations, using machine learning technologies, this distribution however also implies that they might potentially avoid the comprehensive compliance requirements of the AIA by acquiring AI systems from external providers instead of developing them internally. Viewed from a public interest as well as a rule of law perspective, this approach could introduce accountability and legitimacy gaps. It should, however, be added that Article 3(2) AIA includes in its definition of ‘providers’ not only those who develop an AI system, but also those who have an AI system developed and place it on the market or put it into service under their own name or trademark, whether for payment or free of charge. This means that public administrations cannot escape the compliance framework by contracting external parties to develop a tailored AI system for them. It should also be added that even where public administrations acquire an already developed system ‘off the shelf’ to utilise its functionalities in social security case administration, there is a chance that the agency may come to assume the responsibilities of a provider. Article 28 AIA, namely, states that any deployer should be considered a provider for the purposes of the regulation if they put their name or trademark on a high-risk AI system already placed on the market or put into service, if they make a substantial modification to a high-risk AI system in such a way that it remains high-risk, or if they modify the intended purpose of an AI system which has not been classified as high-risk in such a manner that it becomes high-risk.Footnote83 In such cases, the initial provider is relieved of its duties under the act.

Article 3(23) AIA defines a substantial modification as a change to the AI system following its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment and as a result of which the system's compliance with the Chapter 2 requirements for high-risk AI systems is affected, or which results in a modification to the intended purpose for which the AI system has been assessed. Recital 66 AIA also indicates that changes to an AI system which follow from the self-learning aspect of machine learning systems should not constitute a substantial modification, provided that those changes have been predetermined by the provider and assessed at the moment of the conformity assessment. Uncertainties thus remain as to what is meant by substantial modification in this context. It seems likely, however, that an AI system tailored enough to assist social security administrations with tasks relating to case administration for benefits allocation would often necessitate modifications either to its functionality or to its purposes of use, in order to adapt the system to the specific administrative tasks at hand. While a careful assessment would need to be made, this aspect of the AIA's delegation of obligations narrows the possibilities for social security administrations to escape the full grasp of the AIA for systems of type B.

4. Conclusions

This article has offered a structural analysis of the differences in how the GDPR and the AIA apply to two commonly deployed types of automated systems in social security administration. I will now summarise this two-pronged analysis by focusing on the combined impact that the GDPR and the AIA will have on each of these types of systems on the one hand, and on the differences in how these two instruments will determine the basic legality of utilising type A or B systems in social security administrations on the other.

First, the analysis shows that (rule-based) type A systems used for making fully automated benefits eligibility decisions will most likely only trigger the application of the GDPR. Any personal data processed by a type A system will need to comply with the full body of relevant GDPR provisions, although the public interest status of social security administration opens the regulation up for sector-specific applications through its opening clauses. Given that social security provisions remain largely a matter of national law, resulting in considerable variation in the types of benefits provided as well as in their administration, this context significantly influences the application of the GDPR and thus the de facto protective regime for the personal data processed by type A systems to make eligibility decisions. As elucidated in Recital 15 of the GDPR, the regulation adopts a horizontal and technologically neutral regulatory framework.Footnote84 This means that the application of the core GDPR principles is unaffected by whether data are processed with computational assistance but subsequently evaluated by a human administrator, or processed autonomously without any human intervention or manual evaluation. However, the type of automated means with which the personal data are processed may influence ancillary factors such as the aspects to consider in a DPIA, the measures to be taken to adhere to privacy and data protection by design principles, as well as the assessment of whether there are adequate safeguards in place to ensure GDPR-compliant processing. For type A systems, Article 22 GDPR will also apply in addition to the general GDPR provisions. The prohibition against solely automated decision-making laid down in this article is, however, relaxed quite considerably for applications in law-regulated public sector settings. The question is therefore primarily what type of safeguards such regulations must establish in order to suffice (although this is not a focal point of this study).

For systems of type B, the analysis shows that both the GDPR and, most likely, the AIA will be triggered by their use. While Article 22 GDPR will likely not apply, the GDPR principles of fairness, data minimisation and purpose limitation oblige social security administrations utilising type B systems to consider and attend to whether the personal data processing is likely to cause discriminatory effects, whether the data used to train or operate the system while in use are excessive, and whether any further use of data beyond their initial purpose of collection is lawful. However, the flexibility and reliance on a further basis in (typically) national law may provide some leeway. As for type B systems falling under the AIA's definition of AI systems and thereby invoking its application, it remains somewhat unclear how closely the AI system's functions must align with the actual evaluation of entitlement to benefits to qualify as high-risk under Annex III(5)(a). The fact that systems evaluating the eligibility of individuals for public assistance benefits and services, as well as the granting, reduction, revocation, or reclamation of such benefits and services, are explicitly categorised as high-risk nevertheless importantly indicates a recognition of the sensitive nature of benefits allocation. It also indicates a recognition of the potential risks that automated processes may introduce in relation to rule of law principles such as legality, foreseeability, and fairness (which cannot be expanded upon in this article). The author's opinion is that the delineation between solely and partially automated decisions, in terms of practical impact, is not only a technical concern but also hinges on those organisational aspects within the agency which determine the implications of the systems' outputs. In practice, decision-making support systems, particularly if their outputs tend to supplant substantive human assessment, can significantly influence the outcome of eligibility decisions, even though human administrators formally make these decisions. That the AIA's applicability is indifferent to whether automation is complete or partial therefore stands as a crucial aspect of its potential protective framework concerning AI-related deployment and innovation within the social security sector. Against this background, a narrowly construed definition in Annex III(5)(a) would emphasise the direct technological involvement in the decision-making procedure over the aggregated impact of the AI system on the decision-making, which might reduce this potential impact. Either way, however, the AIA's more vertical and technology-specific regulatory framework, in contrast to the GDPR, significantly affects the nature and extent of the obligations placed on social security administrations using type B systems in their benefits administration.

Since the AIA does not contain as many flexibility clauses as the GDPR, it appears to be structured around the notion that public sector AI uses can have just as significant adverse effects on human and fundamental rights as private sector uses. The level of protection required by the AIA, especially for sensitive and high-risk practices like the allocation of social security benefits (which affect socially vulnerable groups with limited knowledge and resources to identify and challenge errors or discriminatory outcomes in the application process), is particularly noteworthy. As of yet, however, there is reason to believe that most solely automated decisions are taken with the assistance of type A systems (that is, static so-called rule-based systems), both within and outside social security administrations in public sector settings. Although type A systems, in general, do not carry the same level of risk as type B systems, primarily because they are inherently more transparent due to their static characteristics and human-designed code, these very attributes may also make them susceptible to the dangers of overly rigid application when used in decision-making scenarios that demand a more nuanced and context-aware judgment beyond their pre-programmed capabilities. While different uses of AI technologies are becoming more common in this administrative context, they are, at least to the author's knowledge, rarely used in fully automated practices.Footnote85 Consequently, even with the introduction of the AIA, the GDPR is likely to remain a core source of EU-level protection in relation to public sector automation efforts within the social security sector.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Swedish Research Council [grant number 2020-02278].

Notes

1 T Carney, ‘Robo-Debt Illegality: The Seven Veils of Failed Guarantees of the Rule of Law?’ (2019) 44 Alternative Law Journal 4; T Carney, ‘Automation in Social Security: Implications for Merits Review?’ (2020) 55 Australian Journal of Social Issues 260; ‘A Robodebt Royal Commission Has Been Announced. Here’s How We Got to This Point’ ABC News (Sydney 26 August 2022) <https://www.abc.net.au/news/2022-08-26/robodebt-royal-commission-explained/101374912> accessed 8 December 2023.

2 M van Bekkum and F Zuiderveen Borgesius, ‘Digital Welfare Fraud Detection and the Dutch SyRI Judgment’ (2021) 23 European Journal of Social Security 323.

3 D Hadwick and S Lan, ‘Lessons to Be Learned from the Dutch Childcare Allowance Scandal: A Comparative Review of Algorithmic Governance by Tax Administrations in the Netherlands, France and Germany’ (2021) 13 World Tax Journal 609.

4 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC.

5 At the time of writing, the final text of the AIA has not yet been adopted. References to the AIA in this article are based on the February 2024 text of the provisional agreement resulting from interinstitutional negotiations between the European Parliament and the EU Council of Ministers. This text outlines the content of the Regulation but may undergo minor, primarily editorial changes before final adoption. <https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/AG/2024/02-13/1296003EN.pdf> accessed 15 April 2024.

6 This means I will not be extensively analysing the specific content and scope of these obligations themselves. It also means that issues related to the rights or opportunities of the subjects of the decision, such as issues of transparency, informational rights, or the right to complain, will not be the focus of the analysis.

7 Articles 1 and 2 GDPR.

8 Articles 1 and 2 AIA.

9 Basic legality here refers to the fundamental or core aspects of whether it is legally permissible to use A or B type systems.

10 Q Liu, B Islam and G Governatori, ‘Towards an Efficient Rule-Based Framework for Legal Reasoning’ (2021) 224 Knowledge-Based Systems 107082.

11 Swedish Social Insurance Agency [Försäkringskassan], ‘Försäkringskassans Årsredovisning 2022’ (2022), 36.

12 I will here presume that all data from individual claimant’s case files that are fed into an automated system that makes individual decisions on eligibility for social security benefits will qualify as personal data under the extensive definition of personal data laid down in Article 4(1) GDPR.

13 Case C-634/21 OQ v Land Hessen EU:C:2023:957, para 52.

14 The other derogations allow for such processing if necessary for entering into or performing contracts, Article 22(2)(a), or if it is based on the data subject's explicit consent, Article 22(2)(c) GDPR. None of these derogations will typically enable processing by public authorities, not least since the GDPR indicates a hesitant position on whether consent can be freely given due to the power imbalances at play where the controller is a public authority, Recital 43 GDPR.

15 Council of Europe, Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, as Amended) [1950].

16 Charter of Fundamental Rights of the European Union [2000] OJ C 364.

17 Articles 52(3), 53 CFR; Article 1(2) and Recital 2 GDPR. See also Recital 4 GDPR and Joined Cases C-465/00, C-138/01 and C-139/01 Österreichischer Rundfunk and Others EU:C:2003:294, [2003] ECR-I 4989, paras 69–72.

18 Case C-34/21 Hauptpersonalrat der Lehrerinnen und Lehrer beim Hessischen Kultusministerium v Minister des Hessischen Kultusministeriums EU:C:2023:270, para 59; Case C-319/20 Meta Platforms Ireland Limited v Bundesverband der Verbraucherzentralen und Verbraucherverbände – Verbraucherzentrale Bundesverband eV EU:C:2022:322, para 60.

19 OQ v Land Hessen (n 13), para 68. This long-awaited ruling may have implications for social security administrations also in relation to their legal conditions for engaging in profiling practices for fraud detection purposes etcetera, but this aspect lies beyond the scope of this article's analysis.

20 OQ v Land Hessen (n 13), paras 67–68.

21 It could also be added that Recital 41 GDPR makes clear that where the regulation refers to a legal basis or a legislative measure, this does not necessarily require a legislative act adopted by a parliament, without prejudice to requirements pursuant to the constitutional order of the Member State concerned. Such measures must, however, be clear and precise and their application foreseeable, in accordance with the case-law of the CJEU and the ECtHR.

22 Section 28 of the Swedish Administrative Procedures Act (2017:900).

23 The Swedish Data Protection Authority [Datainspektionen, now Integritetsskyddsmyndigheten], ‘Yttrande Juridik som stöd för förvaltningens Digitalisering (SOU 2018:25)’ (2018) Fi2018/01418/DF; R Karlsson, ‘Den digitala statsförvaltningen – Rättsliga förutsättningar för automatiserade beslut, profilering och AI’ (2020) Förvaltningsrättslig tidskrift 75. It should be noted that where Article 22 GDPR applies, individuals subject to solely automated decisions enjoy certain additional rights which the social security administration must cater to, such as transparency and information rights. However, Article 23(1)(e) GDPR does list social security as one important objective of general public interest that can justify Union or Member State laws restricting these transparency rights, meaning that there may be national variations in this regard. See also V Gantchev, ‘Data Protection in the Age of Welfare Conditionality: Respect for Basic Rights or a Race to the Bottom?’ (2019) 21 European Journal of Social Security 3, 12.

24 OQ v Land Hessen (n 13), Opinion of Advocate General Pikamäe, paras 63–66.

25 OQ v Land Hessen (n 13), paras 68–72.

26 Rule-based systems typically do not necessitate extensive training with vast quantities of personal data since their functional logic is predefined through coded sets of rules that incorporate eligibility criteria and other relevant considerations. However, during the testing phase and to validate their functionality, these systems may employ personal data. Attention to the application of the GDPR for such possible testing operations will not be given here. However, see the forthcoming discussion relating to type B systems, where issues of system training are discussed more in depth.

27 It should be noted that Article 9(2)(b) GDPR could also be considered a viable legal basis for public social security administrations to process special category data. This provision allows for processing necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment and social security and social protection law, in so far as it is authorised by Union or Member State law or a collective agreement pursuant to Member State law providing for appropriate safeguards for the fundamental rights and the interests of the data subject. While this provision specifically mentions social security, the focus of this analysis is the exercise of public powers integrated in the public social security administrations' administration of social security benefits. The discussion will thus focus on the Article 9(2)(g) derogation, as it could be argued that this is the most appropriate basis. However, both derogations (that is, both (b) and (g)) rely on the processing having Union or Member State legal recognition, while the former additionally considers collective agreements as a viable basis.

28 When basing the processing on Article 6(1)(e) GDPR, Member States may also maintain or introduce more specific provisions by determining more precisely the specific requirements for the processing and other measures to ensure lawful and fair processing, Article 6(2) GDPR.

29 Notably, when processing is based on Article 6(1)(e) GDPR, the specific purpose of the processing does not need to be determined in the legal basis but shall be necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, Article 6(3) GDPR.

30 Hauptpersonalrat der Lehrerinnen und Lehrer beim Hessischen Kultusministerium v Minister des Hessischen Kultusministeriums (n 18), para 59; Meta Platforms Ireland Limited v Bundesverband der Verbraucherzentralen und Verbraucherverbände – Verbraucherzentrale Bundesverband eV (n 18), para 60.

31 C-184/20 Vyriausioji Tarnybinės Etikos Komisija EU:C:2022:601, para 89.

32 Joined Cases C-293/12 and C-594/12 Digital Rights Ireland Ltd v Minister for Communications, Marine and Natural Resources and Others and Kärntner Landesregierung and Others EU:C:2014:238, para 47.

33 When public social security administrations utilise type A systems for making decisions based on personal data, they act as controllers under the GDPR, Article 4(7), with the determination of purposes being the decisive element, Article 29 Data Protection Working Party, ‘Opinion 1/2010 on the Concepts of “Controller” and “Processor”’ (2010) 00264/10/EN WP 169 13.

34 See M Naarttijärvi and L Enqvist, ‘Administrative Independence Under EU Law: Stuck Between a Rock and Costanzo?’ (2021) 27 European Public Law 707, on that administrative authorities at Member State level might be required to act independently from the national hierarchical order in their application of law to ensure the loyal and effective enforcement of EU law.

35 C-13/16 Valsts policijas Rīgas reģiona pārvaldes Kārtības policijas pārvalde v Rīgas pašvaldības SIA ‘Rīgas satiksme’ EU:C:2017:336, para 30 and cited case law.

36 C-708/18 TK v Asociaţia de Proprietari bloc M5A-ScaraA EU:C:2019:1064, para 48.

37 Article 4(7) GDPR. Where processing of personal data is carried out by a public authority or body, Article 37(1)(b) lays down an obligation to designate a data protection officer. The tasks include informing and advising the controller or processor on data protection obligations, monitoring compliance, providing advice on data protection impact assessments, cooperating with the supervisory authority, and serving as a contact point for processing-related issues, all while considering the associated risks of processing operations, Article 38 GDPR. The status and tasks of the data protection officer will, however, not be further elaborated here.

38 Article 35 (1) GDPR; K Demetzou, ‘Processing Operations ‘Likely to Result in a High Risk to the Rights and Freedoms of Natural Persons’’ in L Antunes and others (eds), Privacy Technologies and Policy (Springer International Publishing, 2020) 25–42.

39 Article 29 Data Protection Working Party, ‘Guidelines on Data Protection Impact Assessment (DPIA) and Determining Whether Processing Is “Likely to Result in a High Risk” for the Purposes of Regulation 2016/679’, 9; A Kasirzadeh and D Clifford, ‘Fairness and Data Protection Impact Assessments’ (Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021) <https://dl.acm.org/doi/10.1145/3461702.3462528> accessed 15 April 2024.

40 Of note is that Article 36(5) GDPR features a provision that permits Member State legislation to demand controllers to engage in consultations and prior authorisation from the national supervisory authority concerning controller-initiated data processing tasks conducted for the purpose of fulfilling a public interest task. This includes processing related to social protection and public health.

41 Article 29 Data Protection Working Party (n 39), 8, 12.

42 ibid 9.

43 It may also be mentioned that Article 36(1) GDPR mandates prior consultation with the supervisory authority when the DPIA indicates that the processing would result in a high risk in the absence of measures taken by the controller to mitigate the risk.

44 Thus, while a shift from manual to automated procedures may not require legislative intervention, targeted legislative measures could eliminate the need for a DPIA.

45 Article 29 Data Protection Working Party (n 39), 8.

46 L A Bygrave, ‘Data Protection by Design and by Default: Deciphering the EU’s Legislative Requirements’ (2017) 4 Oslo Law Review 105, 113 f.

47 L A Bygrave, ‘Machine Learning, Cognitive Sovereignty and Data Protection Rights with Respect to Automated Decisions’ in E Stefanini and others (eds), The Cambridge Handbook of Information Technology, Life Sciences and Human Rights (Cambridge University Press, 2022) 166–188, 184. Bygrave primarily referred to a context of machine learning AI technologies, but the point remains applicable to more static systems, such as type A systems, as well.

48 ibid 185.

49 It may be noted that the Commission in September 2023 published AI model clauses for use in the procurement of AI by public organisations, European Commission, ‘EU model contractual AI clauses to pilot in procurements of AI’ (29 September 2023) <https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/eu-model-contractual-ai-clauses-pilot-procurements-ai> accessed 8 December 2023.

50 S Alon-Barkat and M Busuioc, ‘Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice’ (2023) 33 Journal of Public Administration Research and Theory 153.

51 Article 3(1) and Annex I AIA; Council of the European Union, ‘Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world’ (9 December 2023) <https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/> accessed 9 December 2023.

52 M Hildebrandt, ‘Algorithmic Regulation and the Rule of Law’ (2018) 376 Philosophical transactions of the Royal Society of London. Series A: Mathematical, physical, and engineering sciences 20170355.

53 J Oster, ‘Code Is Code and Law Is Law—the Law of Digitalization and the Digitalization of Law’ (2021) 29 International Journal of Law and Information Technology 101, 105.

54 Alon-Barkat and Busuioc (n 50) 153.

55 K Glaze, D Ho, G Ray and C Tsang, ‘Artificial Intelligence for Adjudication: The Social Security Administration and AI Governance’ in Handbook on AI Governance (Oxford University Press, 2022) <https://ssrn.com/abstract=3935950> accessed 8 December 2023.

56 Swedish Social Insurance Agency [Försäkringskassan], ‘Projektet SKOSA 2, Summering av kunskaper och insikter för extern målgrupp’ (2021).

57 OQ v Land Hessen (n 13), Opinion of Advocate General Pikamäe, para 39.

58 Article 29 Data Protection Working Party, ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’ (WP 251 3 October 2017), 9.

59 P Hacker, ‘Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law’ (2018) 55 Common Market Law Review 1143, 1146 f.

60 It may be noted that training data might often consist of mixed datasets containing both personal and non-personal data, potentially leading to the inference of personal data from non-personal data through the combination of different data.

61 It may be noted that public authorities are not formally precluded from basing their processing on consent, but that the GDPR, as stated in note 14, takes a restrictive view of this possibility due to the unequal balance of power in play.

62 I Hahn, ‘Purpose Limitation in the Time of Data Power: Is There a Way Forward?’ (2021) 7 European Data Protection Law Review (EDPL) 31; M Butterworth, ‘The ICO and Artificial Intelligence: The Role of Fairness in the GDPR Framework’ (2018) 34 Computer Law & Security Review 257, 260.

63 It may be noted here that there is debate over whether Article 6(4) is to be construed as a ‘real’ opening clause which generally allows for deviations from the purpose limitation principle by legislative measures, or whether it is to be read in conjunction with Article 6(2) and 6(3) GDPR, meaning that Member States may only regulate processing of personal data for a different purpose in those cases referred to in Article 6(1)(c) and (e) (thus to fulfil legal obligations or perform tasks in the public interest). On this, see M Mikiver and P Krõõt Tupay, ‘Has the GDPR Killed E-Government? The “Once-Only” Principle vs the Principle of Purpose Limitation’ (2023) 13 International Data Privacy Law 194, 199 f. Further processing may also be based on consent, but see note 14 on that this ground has limited application when authorities process personal data.

64 Article 6(4) GDPR.

65 It may be noted that the CFR, in Article 8(2), also underscores the fairness principle in relation to the right to data protection. See EDPB ‘Guidelines 4/2019 on Article 25 Data Protection by Design and by Default Version 2.0’ (2020), para 69, which replaced version 1.0 of the EDPB ‘Guidelines 4/2019 on Article 25 Data Protection by Design and by Default’ (2019), para 64; Binding Decision 2/2023 on the dispute submitted by the Irish SA regarding TikTok Technology Limited (Art 65 GDPR) (European Data Protection Board), para 101; Hacker (n 59), 1172 f.

66 Binding Decision 2/2023 on the dispute submitted by the Irish SA regarding TikTok Technology Limited (n 65), para 100.

67 A Palumbo, ‘The Unexplored Potential of the Fairness Principle under the GDPR: Lessons from the Recent TikTok Case’ (CITIP blog, 10 October 2023); S Barros Vale and G Zanfir-Fortuna, ‘Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities’ (Future of Privacy Forum 2022), 13 <https://fpf.org/wp-content/uploads/2022/05/FPF-ADM-Report-R2-singles.pdf> accessed 8 December 2023.

68 There are methods available to reduce the reliance on training data in machine learning, including generative adversarial networks (GANs), which lessen the demand for extensive training data by generating input data from output data; federated learning, which maintains personal data locally during training; and transfer learning methods, which reuse pre-existing models. However, the need for larger data sets is typically high.

69 M van Bekkum and F Zuiderveen Borgesius, ‘Using Sensitive Data to Prevent Discrimination by Artificial Intelligence: Does the GDPR Need a New Exception?’ (2023) 48 Computer Law & Security Review 105770, 7 f.

70 Swedish Government Bill, Prop. 2019/20:113 (2020), 19 f.

71 van Bekkum and Zuiderveen Borgesius (n 69), 9. Of note is also that Recital 45 b AIA recognises that such special category personal data processing (exceptionally and to the extent that it is strictly necessary) could be done by providers to ensure bias detection and correction for high-risk AI systems as a matter of substantial public interest within the meaning of Article 9(2)(g) GDPR.

72 It is worth mentioning that the AIA provides certain options for creating AI systems within regulatory sandboxes – controlled environments where innovative technologies can be tested under relaxed regulations – which public administrations like social security authorities can also employ. However, the details and boundaries of these regulations will not be explored further here.

73 F Adolfsson ‘Därför arbetar Försäkringskassan med AI och syntetisk data’ (Voister 2 May 2023) <http://www.voister.se/artikel/2023/05/forsakringskassans-framgang-med-ai-och-syntetisk-data/> accessed 8 December 2023.

74 EDPB and European Data Protection Supervisor ‘Joint Opinion 03/2021 on the Proposal for a Regulation of the European Parliament and of the Council on European Data Governance (Data Governance Act)’, 9 June 2021, para 58; N Purtova, ‘From Knowing by Name to Targeting: The Meaning of Identification under the GDPR’ (2022) 12 International Data Privacy Law 163.

75 Bygrave (n 46) 184.

76 ibid; C-582/14 Patrick Breyer v Bundesrepublik Deutschland EU:C:2016:779, paras 31–49. See, however, A Lodie, who notes that recent case law from the General Court emphasises that the data recipient must be reasonably able to re-identify the data subject, and tends to view personal data as a relative rather than an objective concept: ‘Are Personal Data Always Personal? Case T-557/20 SRB v. EDPS or When the Qualification of Data Depends on Who Holds Them’ (European Law Blog, 7 November 2023) <https://europeanlawblog.eu/2023/11/07/are-personal-data-always-personal-case-t-557-20-srb-v-edps-or-when-the-qualification-of-data-depends-on-who-holds-them/> accessed 8 December 2023.

77 Those GDPR provisions which in section 2.2 were referenced as having a more holistic and prognostic element to them, such as the DPIA obligation or the principles of data protection by design and by default, will apply. Of note is that, although type B systems are not solely automated in relation to the decision-making, their processing will likely still qualify as high risk under Article 35(3)(b), as they typically require processing on a large scale of special categories of data referred to in Article 9(1); Article 29 Data Protection Working Party (n 39), 8, 18.

78 The referenced rights to human dignity (Article 1), social security and social assistance (Article 34), non-discrimination (Article 21) and an effective remedy (Article 47) are all fundamental CFR rights.

79 Here, for pedagogical reasons, I will ignore the AIA's territorial scope of application.

80 Article 29 a AIA.

81 Section 2.2 in the explanatory memoranda of the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final.

82 M Jacobs and J Simon, ‘Assigning Obligations in AI Regulation: A Discussion of Two Frameworks Proposed By the European Commission’ (2022) 1 Digital Society 6, 6.

83 This also applies to distributors, importers and other third parties who make substantial modifications to the system, Article 28(1) AIA.

84 See also C-25/17 Proceedings brought by Tietosuojavaltuutettu EU:C:2018:551, para 53. Here, the CJEU stresses that applying data protection principles in a manner that does not depend on the techniques used aims to avoid the risk of circumvention of that protection.

85 Alon-Barkat and Busuioc (n 50) 153.