Research Article

Balancing the autonomy and protection of children: competency challenges in data protection law


ABSTRACT

This article considers some complexities surrounding the determination of child competency in matters of data protection. Focusing on the Information Commissioner's Office (ICO) guidelines, the article highlights the apparently pivotal role competency plays in granting children the ability to exercise their data protection rights and interests. The article critically examines the inherent challenges arising from the ICO's approach, emphasising the reliance on data controllers to independently assess the competency of child data subjects. The problematic nature of this approach is scrutinised, shedding light on potential shortcomings and raising questions about the effectiveness and fairness of such assessments.

1. Introduction

With the use of digital technologies and engagement with online platforms ever-increasing, the processing of the personal data of children has become a progressively salient matter for regulators and lawmakers.Footnote1 Given the particular vulnerability of children to exploitation and other associated harms, it is widely accepted that they merit special protection from the law with regard to how their personal data are processed by others.Footnote2 Against this background the United Kingdom General Data Protection Regulation (UK GDPR)Footnote3 establishes numerous specific rules regarding the processing of children’s personal data.

In recent years the UK’s regulatory office responsible for matters pertaining to data protection, the Information Commissioner’s Office (ICO), has produced numerous guidance notes addressing questions regarding how data protection rules should apply to children. Of particular interest is the way in which the ICO’s guidance stresses the entitlement of children to exercise their data protection rights and interests so long as they are “competent” to do so.Footnote4 The apparent implication of this stance is that in situations where children are not competent, any attempt to exercise their data protection rights and interests can legitimately be refused, and that responsibility for making decisions regarding the processing of their personal data will revert to a person with parental responsibility. Competency, therefore, appears to be a threshold at which point children’s data protection rights and interests become exercisable.

This position is prima facie logical. To deny a competent child the opportunity to make decisions for themselves regarding the processing of their personal data would arguably be to deny the ethic of respect for that child.Footnote5 Concurrently, in situations where a child is not competent, responsibility for decision-making reverting to an individual with parental responsibility is consistent with most other areas of law applicable to children. However, whilst the ICO encourages the exercise of data protection rights and interests by competent children, and this is something the law itself allows, the concept of competence in this context is vague and under-developed. At no point in the UK GDPR, Data Protection Act 2018, or ICO guidance is any definition of competence provided and at no point is any instruction provided in respect of how data controllers can or should assess the competence of children.

This article examines the issue of children’s competence in the context of UK data protection law and, given the silence of data protection legislation and regulatory guidance on the matter, considers whether legal principles from other areas of law, specifically medical law and ethics, may shed light on how best to assess child competency in matters pertaining to data protection. Whilst a range of issues relating to children have already been traversed in the data protection law literature,Footnote6 and some authors have considered the relevance of child competence in relation to specific data protection issues,Footnote7 to date little-to-no attention has been devoted to investigating the notion of competence itself in the data protection context. To this end, the objective of this paper is, to the best of the author’s knowledge, original and aims to fill a “gap” in the existing literature.

The article comprises four substantive sections. The first section explains the relevant law and guidance of the ICO, sets out how the notion of competence seemingly serves as a threshold for children’s data protection rights and interests and outlines how, despite its apparently important role, the notion of competence itself is underdeveloped. The second section introduces the notion of Gillick competence, a notable and symbolic principle of medical law used for making determinations regarding whether a child is competent to consent to medical treatment and highlights how some data controllers have started to use this as a mechanism for assessing child competence in matters of data protection. The third section then examines the possible applicability of Gillick competence to data protection-related issues and argues that, despite the willingness of some data controllers to utilise it, this is an inappropriate standard for assessing child competence in data protection contexts. The fourth section introduces several alternative approaches that may represent other means through which competence could be assessed in data protection contexts, or competence-related challenges could be circumvented entirely, and considers their possible merits and demerits. The article then concludes with a summary of its main findings and recommendations.

2. Legal background

In the United Kingdom the legal definition of a child that is used in most contexts is found in paragraph 16 of Schedule 1 of the Children Act 1989. Here, the term “child” is broadly defined as any person under eighteen years of age. In other words, a person’s eighteenth birthday is the point at which they become legally recognised as an adult. It is this definition of child that will be used throughout the article.

Data protection law in the UK is primarily governed by the UK GDPR and the Data Protection Act 2018 (DPA). The two pieces of legislation complement each other and establish numerous rules and rights regarding the processing (i.e. almost any imaginable use) of personal data.Footnote8 Given the increasing digital engagement of minors and their potentially limited understanding of data protection-related matters, Recital 38 of the UK GDPR emphasises how these rules and rights are of particular importance when data processing activities involve the personal data of a child:

“Children merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. Such specific protection should, in particular, apply to the use of personal data for the purposes of marketing or creating personality or user profiles and the collection of personal data with regard to children when using services offered directly to a child.”

This message is reiterated in Recital 58:

“The principle of transparency requires that any information addressed to the public or to the data subject be concise, easily accessible and easy to understand, and that clear and plain language and, additionally, where appropriate, visualisation be used … Given that children merit specific protection, any information and communication, where processing is addressed to a child, should be in such clear and plain language that the child can easily understand.”

Pursuant to this, the Articles of the UK GDPR establish several substantive rules that apply specifically to the processing of the personal data of children. In this regard, Articles 6, 8 and 12 of the UK GDPR are particularly noteworthy.

Article 6(1)(f) UK GDPR establishes that when the legitimate interests of the data controller or other third party serve as the lawful basis for the processing of personal data the interests of the data controller and/or third party must be weighed against the interests of the data subject. If the interests of the data controller or third party are “overridden” by the interests of the data subject, the processing of personal data should not go ahead on this basis. Article 6(1)(f) then further specifies that the possibility of the interests of the data subject overriding the interests of the data controller will distinctly be the case “in particular where the data subject is a child.”

Article 8(1) UK GDPR specifies that when consent is used as the lawful basis for personal data processing in relation to the offer of information society services to a child, such processing will be lawful where the child is at least 13 years old.Footnote9 In situations where a child is younger than 13 years old consent will only function as a lawful basis for personal data processing where the consent in question has been given by a person with parental responsibility for the child. Article 8(2) then further clarifies that in situations where consent is given by a person with parental responsibility, the data controller must make reasonable efforts to verify the authenticity of that person’s giving, or authorisation, of consent. It should be noted, however, that as these rules regarding the consent of a child apply specifically and exclusively when consent is sought in relation to the offer of information society services, they do not apply in relation to other situations in which consent may serve as a lawful basis for personal data processing. Accordingly, the authorisation of a person with parental responsibility is not necessarily required when the consent of a child is sought in relation to data processing activities unrelated to the provision of information society services. This is potentially significant, as many types of potentially harmful personal data processing activities may not constitute, or be undertaken in conjunction with, the provision of information society services (e.g. biobanking initiatives that primarily focus on medical research and do not provide services directly to individuals in an online, electronic and remunerated manner). This issue is returned to and explored in greater detail below.

Article 12(1) UK GDPR provides that when data controllers provide information under Articles 13, 14 and 15 of the UK GDPR (which establish a general duty for data controllers to ensure their personal data processing operations are transparent, and a right of access to personal data for data subjects) data controllers must provide such information in a concise, transparent, intelligible and accessible form, using clear and plain language. Here, it is emphasised that this requirement is particularly important when the information provided is addressed to a child.

There are also other data protection provisions that, whilst not specifically targeted at children, are highly relevant to child data subjects. The data subject rights, set out in Chapter 4 of the UK GDPR, are a particular example of this. These rights, amongst others, include a right to access personal data (as mentioned previously),Footnote10 a right to rectification,Footnote11 and a right to erasure.Footnote12 The right to erasure (sometimes referred to as the ‘right to be forgotten’), which entitles a data subject to obtain from a data controller the erasure of their personal data, is perhaps particularly relevant to children who may be prone to sharing personal data with another party without fully understanding the potential consequences of doing so. Furthermore, as a child’s preferences and views are likely to change over time, personal data shared in a data subject’s younger years may at some point cease to reflect their current views and values. The right to erasure, therefore, represents a legal mechanism through which they can maintain sovereignty over their identity, particularly in online environments. This sentiment is endorsed by the Recitals of the UK GDPR and ICO guidance.Footnote13

However, despite setting out several rules that are specifically applicable to situations involving the personal data of children, and including various others that are of considerable relevance to children, the UK GDPR does not always make it abundantly clear when and in what circumstances these rules become applicable, nor is it made abundantly clear when or at what point in time children become entitled to exercise their data protection rights and interests. For instance, no explicit instruction is provided in respect of whether child data subjects enjoy the same entitlement to exercise these rights as adult data subjects, nor is it clear whether, when and to what extent parental authorisation is required before these rights can be exercised.

The ICO’s guidance note Children and the GDPR, initially published in March 2018, specifically addresses these sorts of questions.Footnote14 A prominent theme that is emphasised throughout the guidance is the notion of competence. In brief, the ICO advises that children are entitled to exercise their data protection rights and interests (i.e. enjoy them in the same way as adult data subjects) so long as they are “competent” to do so. In situations where competence cannot be established, the guidance advises that it will be in the best interests of the child to allow an individual with parental responsibility to exercise their data protection rights and interests on their behalf. For example, on the issue of when and in what circumstances consent can be used as a legal basis for the processing of the personal data of a child, the guidance says the following:

“Our GDPR consent guidance provides details about the various requirements for valid consent, and you need to meet all of these. In addition, you need to consider the competence of the child (whether they have the capacity to understand the implications of the collection and processing of their personal data). If they do have this capacity then they are considered competent to give their own consent to the processing … ”Footnote15

The ICO’s logic for this position is straightforward. If a child is not competent it would be impossible for their consent to be “informed”, and thus it would not meet the requirements of valid consent set out under Articles 4 and 7 UK GDPR.Footnote16 Similar advice is given in relation to situations in which a data controller processes personal data on the basis that it is necessary for the performance of a contract to which the data subject is a party:

“When you wish to enter into a contract with a child you must consider their competence to agree to the contract and understand the implications of the associated processing of their personal data.”Footnote17

The importance of child competence is again repeated and re-emphasised in the section of the guidance dealing with data subject rights:

“A child may exercise the above rights on their own behalf as long as they are competent to do so … ”Footnote18

The apparent upshot of this is that data controllers can legitimately refuse to uphold a child’s attempts to exercise their data protection rights in the event they are not deemed to be competent. These excerpts, therefore, plainly show how the ICO envisages the notion of competence as being a key boundary-marking concept in matters pertaining to the data protection rights and interests of children. It is the threshold, or cut-off point, that determines the applicability or non-applicability of the law. This position is prima facie sensible. As noted above, allowing children to make decisions affecting aspects of their personhood, such as those relating to how their personal data are used by others, recognises their dignity as human beings and is consistent with major international human rights instruments such as the United Nations Convention on the Rights of the Child. However, whilst the above-mentioned Children Act 1989 specifies that a child is any person younger than eighteen years old, in England, Wales and Northern Ireland the law does not provide a precise age or point in time at which a child either becomes, or is presumed to be, competent. Instead, competence is treated as a matter of degree depending on the level of a child’s cognition and must be assessed on a case-by-case basis. This nuance is again noted by the ICO guidance:

“In Scotland, a person aged 12 or over is presumed to be of sufficient age and maturity to be able to exercise their data protection rights, unless the contrary is shown. This presumption does not apply in England and Wales or in Northern Ireland, where competence is assessed depending on the level of understanding of the child … ”Footnote19

Against this background, in situations involving a child attempting to exercise their data protection interests, the guidance then invites data controllers to undertake their own “individual assessment of the competence” of the child,Footnote20 and when seeking to obtain consent from a child to take measures to “ensure that a child providing their own consent is competent to do so”.Footnote21 However, whilst this position appears to be consistent with the law, it again leaves a slew of unanswered questions. Despite strongly emphasising the importance of the notion of competence, and how data controllers are entitled and invited to undertake their own competence assessments, next to no counsel is provided in respect of how and in what way such exercises should be performed. Case law also provides scant guidance on this issue. This presents a problem. If competence is to serve as the key enabling concept which determines a threshold at which children can exercise their data protection rights and interests it is important that it is understood, or at least understandable, by data controllers. However, due to the above-mentioned lack of guidance it is a vague and ill-defined notion.

The vagueness of the notion of competence in this context is potentially significant. A failure to articulate a definite, or at least more precise, understanding of the concept could have various negative consequences. Without clarification, data controllers, the ICO, courts and tribunals could conceivably struggle to draw the line between competent and non-competent children. Two data controllers who adopted a similar standard of competence could, therefore, be treated differently in neighbouring courtrooms. One may walk out exonerated of any liability, while the other is found to have broken the law. Moreover, a vague standard of competence could lead to data controllers earnestly adopting vastly different standards, leading to inconsistencies in the law’s application.Footnote22 For example, on one hand, data controllers adopting a very high standard or threshold for what constitutes competence could lead to children being deprived of legal rights and protection to which they should be entitled as autonomous beings. On the other hand, data controllers adopting a very low standard for what constitutes competence could result in children being permitted to make decisions with significant lasting consequences they do not have the capacity to understand. Given the need to arrive at a more concrete definition of competence for use in data protection contexts, the question becomes how and in what way should competence be assessed? In the absence of clear guidance from the ICO or data protection legislation itself, some data controllers have adopted the notion of Gillick competence as the standard through which they assess child competence in data protection contexts.Footnote23 The subsequent section of the article explains the origins and composition of this concept.

3. Gillick v West Norfolk and Wisbech Area Health Authority [1985] 3 All ER 402

The Secretary of State has powers under a number of provisions of primary legislation relating to the National Health Service (NHS), including the power to issue directions that are legally binding on Health Authorities and Trusts. It was under this delegated power that, in December 1980, the Department of Health and Social Security (DHSS) issued a revision to Health Service Circular (Interim Series) (H.S.C. (I.S.) 32)Footnote24 detailing the development of a family planning service within the NHS. Specific guidance (Memorandum G) set out the remits within which family planning clinics could operate.Footnote25 The memorandum stressed that a doctor should always try to work with a child patient to involve their parent or guardian, but that in circumstances where this was not possible doctors would ultimately have the clinical discretion to prescribe contraceptives without the consultation or knowledge of a person with parental responsibility. The memorandum also required the prescribing doctor to maintain the child’s confidentiality.Footnote26

It was the guidance contained in the memorandum that the claimant, Mrs Gillick, objected to, and she brought an action seeking a declaration that the advice provided would adversely affect parental rights and was unlawful. The crux of the argument presented was that children, due to their inability to appreciate the gravity and consequences involved, could not validly consent to medical treatment without the knowledge or consent of their parents. Ergo, doctors who followed the memorandum’s advice and allowed minors to consent to medical treatment without parental involvement would necessarily be acting unlawfully. In the High Court Mr Justice Woolf dismissed Mrs Gillick’s case; however, the Court of Appeal found in favour of Mrs Gillick and the case (on the defendant’s appeal) proceeded to the House of Lords. The issue at the centre of the case was whether a doctor can, in any circumstances, lawfully give medical advice and/or treatment to a child under the age of 16 in the absence of parental consent.

By a majority of 3-2 (Lord Brandon and Lord Templeman dissenting) the House of Lords held that the DHSS guidance was lawful, with Lord Scarman (speaking for the majority) proclaiming that a child’s age should be no impediment to their ability to consent to medical treatment so long as they are competent to understand the nature and consequences of such a decision:

“ … a minor’s capacity to make his or her own decision depends on the minor having sufficient understanding and intelligence to make the decision and is not to be determined by reference to any judicially fixed age limit.”Footnote27

Lord Fraser was of a similar view, stating that it was:

“ … verging on the absurd to suggest that a girl or a boy aged 15 could not effectively consent, for example, to have a medical examination of some trivial injury to his body or even to have a broken arm set … Provided the patient … is capable of understanding what is proposed, and of expressing his or her own wishes, I see no good reason for holding that he or she lacks the capacity to express them validly and effectively … ”.Footnote28

In a similar vein, in relation to parental rights, Lord Fraser observed that ‘parental rights to control a child do not exist for the benefit of the parent. They exist for the benefit of the child, and they are justified only in so far as they enable the parent to perform his duties towards the child.’Footnote29

Both Lord Scarman and Lord Fraser, therefore, with the agreement of Lord Bridge, concluded that provided a child is deemed to sufficiently understand and possess the necessary intelligence to comprehend the nature and implications of medical treatment, they will be legally competent to consent to said treatment, regardless of their age and without parental knowledge or involvement. However, whilst both Lord Scarman and Lord Fraser agreed on the general matter of competence, they set out slightly differing tests for how competency should be assessed in such contexts. Lord Fraser articulated a set of guidelines which were to be used specifically in situations regarding the provision of sexual health advice and treatment to children. Conversely, Lord Scarman set out a broader and more general test for assessing child maturity and their ability to understand the implications of their decisions, which has become known as “Gillick competency”. Under this test, a child can consent to medical treatment so long as they “understand fully” not only the nature of the advice that they are being given but also, with a level of maturity, what is involved and the consequences of a prospective decision. If a child can fulfil these criteria, then they will be deemed to have the capacity to make an informed decision without the need for parental consent.Footnote30 It will be for individual doctors and clinicians to perform competency assessments on a case-by-case basis. As noted above, this test for competency is now widely referred to as “Gillick competence” and has become an integral and widely used aspect of medical law and family law.Footnote31 As also noted above, however, it is a test for competency that some data controllers have begun to use to assess child competency in matters relating to data protection.

4. Evaluating the application of Gillick competence in the data protection context

As set out in the previous section, under the Gillick ruling the “ability to understand” serves as the threshold for the capacity or competence of a child to decide. The Gillick standard therefore rejects the idea that there should be a blanket rule regarding when and in what circumstances a child can be deemed competent. When, whether, and in what circumstances a child can be deemed to be competent will vary from case to case. Given the absence of any other guiding authority on this matter, it seems plausible that Gillick could become the prominent standard through which competency in data protection contexts is assessed. As mentioned above, this is not merely a paper possibility; there are already examples of data controllers using Gillick in this way, particularly in the public sector. Under this standard, therefore, a judgement of whether a child can exercise their data protection rights and interests must be based on an assessment of their ability to appreciate the possible implications of them doing so, not their age. For reasons set out below, however, it is far from certain that this approach to assessing competency is suitable for deployment in matters regarding data protection.

a. The “full understanding” criterion

Plainly, the notion of Gillick competence hinges on the capacity of a child to understand the implications of their actions and potential decisions. Other case law has established that for a person to have the capacity to make a decision they must possess the ability to comprehend and retain the information necessary for making the decision, the ability to weigh and balance the benefits and risks of taking the decision, and the ability to arrive at a choice regarding the making of the decision.Footnote32 As noted elsewhere, it would appear that for a child to be competent according to the Gillick standard all three of these elements must be present.Footnote33 Concurrently, as noted above, in his judgment in Gillick Lord Scarman opined that for a child to be Gillick competent they must “understand fully” the nature of the decision they are presented with. This requirement was subsequently re-emphasised by Lord Donaldson in Re R,Footnote34 where he remarked that when assessing Gillick competence it will not be enough to assess whether a child understands the nature of the choice they are being asked to make, but whether they have a “full understanding and appreciation” of the decision’s prospective consequences, as well as the likely consequences of failing to take the decision. This, however, is a dubious standard for use in data protection contexts.

The reason for this is simply that expecting children to be capable of attaining a “full understanding” of the implications of many data protection-related decisions is completely unrealistic. This is primarily because in many situations in which the personal data of children are processed (for example, in conjunction with the use of online services and applications), the processing in question will be of an extremely complex nature that would be difficult for even adults to understand. This problem is further exacerbated by the way in which, again particularly online, privacy and data protection policies provided by data controllers are often written in complicated legal language that is difficult to understand, even for adults.Footnote35 Achieving a “full understanding” of these policies, and thus of the processing activities to which they refer, would likely therefore require children to be able to effectively decipher reams of legal jargon. Whilst research has suggested that many children growing up in the information age (i.e. so-called “digital natives”) will have an aptitude for developing strategies and approaches to navigating some contemporary issues relating to data protection and privacy, a “full understanding” is surely an unrealistic level of attainment.Footnote36

In any event, even if these challenges regarding the complexity and comprehension of individual contemporary data processing practices could be successfully negotiated, this would not be the end of complications and challenges so far as a child developing a “full understanding” is concerned. This is again particularly true in the context of digital processing of personal data. In this regard, we must remember that the digital landscape, online platforms, and their associated personal data gathering and processing technologies continue to evolve rapidly. In this swiftly changing environment it is widely accepted that many adults struggle to stay fully informed, so expecting children to have a “full understanding” again is surely fanciful. In this regard it is also worth considering that in this environment some uses of personal data can have consequences that are not known, or cannot even be predicted, at the time the data are processed.Footnote37

Plainly, if the Gillick standard for competence requires children to develop a “full understanding” of the implications of decisions put before them, in many data protection contexts this will necessarily mean that children will never, and can never, be considered competent to exercise their data protection rights and interests. This is surely not what the ICO had in mind when recommending that children should be entitled to exercise their data protection rights so long as they are “competent” to do so.

b. Understanding vs accountability

Another issue arising in conjunction with the use of the Gillick standard to measure child competence in the data protection context relates to the fact that a child being sufficiently mature to understand the implications of a prospective decision (e.g. the immediate significance and theoretical consequence of exercising a data protection right in a particular situation) does not necessarily equate to the same child being of sufficient maturity to bear the weight of the consequences of that decision should they occur. A useful starting point for considering the possible significance of this nuance is the judgment of Lord Scarman in the Gillick case. Of particular interest is his discussion of what is required for a child to understand contraceptive advice and treatment:

“ … [T]here is much that has to be understood by a girl under the age of 16 if she is to have legal capacity to consent to such treatment. It is not enough that she should understand the nature of the advice that is being given: she must also have sufficient maturity to understand what is involved. There are moral and family questions, especially her relationship with her parents; long term problems associated with the emotional impact of pregnancy and its termination; and there are risks to health of sexual intercourse at her age, risks which contraception may diminish but cannot eliminate.”Footnote38

Lord Scarman’s statement highlights the spectrum of issues and concerns that must be considered when determining whether a child can understand a decision put before them (i.e. whether they are competent). Moral, family, emotional and health-related concerns all plainly must form part of any such consideration. Though his Lordship was speaking of challenges faced in the context of medical decision-making, this spectrum is also relevant to decisions faced by children in data protection contexts. As noted by Boddington and Gregory, in this regard it is important to remember that the context-dependent approach to maturity emphasised by Gillick is not only dynamic, it is also multidimensional, and allows for a child to be considered mature (i.e. competent) in some aspects, or in respect of some prospective decisions, but not others.Footnote39 This gives rise to significant ethical questions associated with using the Gillick standard as a means of determining competency in both medical and data protection contexts. Specifically, it is arguable that, even if a child’s competence can be accurately measured by way of application of the Gillick standard (the practical challenges associated with performing such an assessment notwithstanding), the ability of a child to understand the implications, risks and potential consequences associated with a prospective decision does not equate to an ability to bear them should they occur. For example, in medical contexts, whilst a girl under the age of 16 may be deemed competent (as per Gillick) to be sufficiently mature to consent to receiving contraceptive advice or treatment (or the absence thereof), this is categorically not the same as recognising that a girl of this age would necessarily possess the maturity for parenthood.Footnote40

From this example we can sketch an analogous scenario that could conceivably arise in a data protection context. For instance, imagine that Child A decides that she wishes for her genetic information (i.e. personal data) to be collected and analysed for medical research purposes (e.g. as part of a biobanking initiative) and, having been made aware of the potential risks and consequences of participating, is deemed competent to give consent, as per Gillick, to her personal data being processed for such purposes.Footnote41 Child A, in this case, may understand the immediate implications of her decision (e.g. contributing to scientific research and potentially advancing medical knowledge). However, the long-term consequences may involve the development of novel medical treatments or the identification of genetic predispositions to certain, possibly chronic, illnesses.Footnote42 Though Child A may comprehend (as per Gillick) her involvement at the time she decides to participate, the full impact of her participation might not become apparent until later in her life, when any associated research yields tangible results. For example, the biobank’s research could lead to the identification of genetic markers associated with a severe degenerative health condition, and Child A (who may still be a child at the time of this revelation) discovering she has an elevated risk for that condition. Child A could then experience severe emotional distress and trauma at having learned this information, leading to the deterioration of her mental health.Footnote43 This example illustrates how a child’s competence (as per Gillick) to consent to their personal data being processed as part of a biobanking initiative does not automatically equate to possessing the necessary robustness to bear the long-term consequences of doing so.

Similar observations could also be made in respect of a competent child’s decision to withdraw their personal data from a biobanking initiative, having previously expressed a desire to contribute. For example, imagine that Child A consents to her personal data being processed as part of a biobanking initiative, but at a later date, whilst still a child, she decides that she wishes to exercise her right to erasure and/or her right to restrict processing in relation to this use of her personal data (i.e. she wishes to end her participation in the scheme and for her personal data to be removed). In such a situation, the fact that Child A is deemed competent, as per Gillick, to exercise these rights does not inherently equate to Child A possessing the cognitive and emotional maturity to handle the potential long-term consequences of doing so. A decision to exercise her right to erasure or right to restrict processing may well be within the understanding of Child A, but the potential long-term consequences are complex and may extend beyond her immediate comprehension. For example, though the exercise of these rights in this context may align with what are perceived to be Child A’s data protection interests, the potential consequence of her genetic data being removed from the biobanking initiative is her data no longer contributing to medical research within the biobank. This could, in turn, result in Child A experiencing limited access to personalised medical treatments that are eventually developed or improved through the biobank’s analysis of aggregated genetic data, including potential advancements in understanding genetic conditions, identifying targeted therapies, or developing more effective medical treatments and medicines. This example highlights once again, therefore, how relying solely on a child’s competence (as per the Gillick standard, at least) may not adequately consider the broader and potentially enduring implications for a child’s wellbeing.

Just as the example of the girl being given (or not given) contraceptive advice highlights the distinction between her competence to consent to medical advice and her readiness for parenthood, the biobanking example emphasises the difference (and possible gulf) between a child’s competence to understand what it means to exercise their data protection rights and make informed decisions about sharing personal data, and their ability to bear the weight of the possible consequences of those decisions. What these examples show, therefore, is that Gillick does not involve a consideration of a child’s ability to bear the consequences of a prospective decision, only their ability to understand any prospective risks. This has at least two notable implications. First, the application of the Gillick standard to matters of data protection could plausibly create situations in which an unrealistic and problematic set of expectations is imposed on a child, burdening them with responsibilities they are not ready to carry. Second, the application of the Gillick standard in data protection contexts could result in children being permitted to make decisions that are ultimately not in their long-term best interests and/or decisions that may be harmful to their well-being. This is a broad criticism of the Gillick standard, and not one that is necessarily specific to its use in matters relating to data protection. It does, however, raise further questions as to its suitability for deployment in data protection contexts.

c.

Inequality and the digital divide

Over the last few decades much scholarship has been dedicated to the study of digital divides (i.e. divisions and inequalities caused by differing levels of access to digital information, tools and resources, and the consequences thereof).Footnote44 The impact of digital inequalities on marginalised and vulnerable children, particularly those from disadvantaged backgrounds, has been identified as a significant policy issue by various major international organisations,Footnote45 and following the COVID-19 pandemic has increasingly gained the attention of other observers.Footnote46 For several reasons, the imposition of Gillick competence as a requirement for children to exercise their data protection rights and interests would seemingly have the potential to exacerbate existing inequalities of this sort.

At the root of this possibility is the fact that children from disadvantaged backgrounds will often not share the same access to education, resources, and technological tools and devices that children from non-disadvantaged backgrounds enjoy.Footnote47 One possible consequence of this is the emergence of a knowledge deficit in matters pertaining to data protection, a field which is generally thought to be inaccessible, complex, and poorly understood by those not already possessing at least some knowledge of the area.Footnote48 If a Gillick competence standard were to be adopted as the threshold at which children were entitled to exercise their data protection rights and interests, and applied without consideration of these disparities, it could effectively sideline children from disadvantaged backgrounds and deny them the autonomy to influence how their personal data are used by others.

In a similar vein, the adoption of the Gillick competence standard would potentially have an even greater impact on children with disabilities and learning difficulties. It is widely known, for example, that physical disabilities can exacerbate the above-mentioned issues pertaining to lack of access to educational resources and tools, whereas learning difficulties can significantly impact a child’s ability to comprehend complex topics (e.g. data protection) and make informed decisions.Footnote49 Any competency requirement that fails to consider this nuance, and expects children with learning difficulties to meet the same threshold as their neurotypical peers, could therefore potentially result in the emergence of systemic discriminatory practices and further deepen the vulnerability of such children in data protection contexts. These inequities would again likely result in the reinforcement of the digital divide which would, in turn, perpetuate and deepen existing social inequalities.

d.

Practical issues

In addition to the various conceptual and ethical problems set out above, data controllers would also conceivably face significant practical challenges when attempting to assess competence via the Gillick standard. Most obviously, as alluded to above, Gillick assessments are widely regarded as being complicated to perform. This is particularly the case given how Gillick recognises maturity as a multidimensional and dynamic concept. Such is the complexity of performing Gillick competency assessments, it has been suggested that even expert clinicians may often struggle to do so effectively.Footnote50 Moreover, the complexity of performing Gillick assessments notwithstanding, a full determination of a child’s competence via the Gillick standard is not something that can be arrived at quickly or cursorily. As noted elsewhere, for Gillick assessments to be undertaken properly, sufficient time must be given to evaluate a child’s level of understanding.Footnote51 Therefore, in addition to concerns regarding the complexity of performing Gillick competency assessments, time constraints and resourcing limitations may also act as an impediment to the effective achievement of this task. Again, this is particularly likely to be the case in the context of public sector data controllers, who are already known to suffer from a lack of resourcing in matters of data protection.Footnote52 Requiring data controllers who do not interact face-to-face with data subjects (e.g. online service providers) to perform Gillick competency assessments would also obviously be hugely impractical.

It may also be the case that many data controllers would, in any event, lack consistent or long-standing experience of working with children, or even knowledge and experience in matters of data protection. Such a lack of experience and specialist knowledge would plainly exacerbate the above-mentioned challenges of performing Gillick competency assessments, even if resourcing were no issue. In this sense it is also important to keep in mind that the manner in which a data controller undertakes a Gillick assessment (e.g. their style of address, tone, and enthusiasm) would be capable of having a significant impact on a child data subject’s choices and decision-making. As noted elsewhere, this is particularly important in the context of adolescent data subjects, who will be going through a stage of psychological development and may be experiencing significant social pressure (e.g. peer pressure at school or work), which may also have implications for their choices.Footnote53 One obvious inference stemming from this potential for coercion, and from other factors that may erode a child’s autonomy, is that in some contexts at least it may be unwise to allow a child to assume full decision-making autonomy, and that it will be important for persons with parental responsibility to retain some influence on the decision-making process.

What these issues show is that, from a purely practical perspective and irrespective of the above-mentioned ethical challenges, it will be effectively impossible for data controllers to perform meaningful Gillick competency assessments in relation to child data subjects in many situations.

5. Alternative possible ways forward

As set out above, competency has seemingly been identified by the ICO as the threshold at which a child is entitled to exercise their data protection rights and interests. Pursuant to this the ICO explicitly invites data controllers to undertake their own assessments as a means of testing the competency of child data subjects. One way this could be done, and one that some data controllers have begun to use, is to assess competency via the notion of Gillick competence. As argued in the preceding section of the article, however, there are several major reasons why the Gillick criteria would potentially be unsuitable for assessing child competence in matters relating to data protection. If, however, as the ICO advises, competence is to be the threshold at which children are entitled to exercise their data protection rights and interests, the question becomes what alternatives to Gillick exist through which competence could be assessed and/or competency-related challenges could be negotiated. This section of the article outlines several non-Gillick means through which data controllers could attempt to assess child competence in data protection contexts, and/or otherwise circumvent Gillick’s inherent limitations, and considers some of their possible strengths and weaknesses.

a.

“Better” data protection policies and increased reliance on educational initiatives

A common complaint about data controllers’ data protection policies, one that has persisted for decades, is that they are frequently written in protracted and inaccessible language that is difficult to understand. This is particularly the case in respect of data controllers acting predominantly in online environments.Footnote54 Plainly, such practices will be capable of impeding the ability of children to develop an understanding of how their personal data might be used as part of any prospective processing operation. Against this background it is worth noting that, as set out above, the recitals of the UK GDPR already touch upon the importance of data use policies being written in plain and accessible language that is capable of being understood by children when their personal data are to be collected and processed.Footnote55 It has occasionally been suggested, however, that the introduction of specific legal provisions that compelled data controllers to provide “better” data use policies would help to enhance data subjects’ understanding of data protection issues, and thereby develop their competence to act in relevant environments.Footnote56 Legal rules of this sort might, for instance, impose a limit on the number of characters or words that a data use policy aimed at children may contain, so as to reduce complexity.
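By way of illustration, a length restriction of the sort just described could in principle be operationalised as a simple automated check. The following Python sketch is purely illustrative: the thresholds used (a 500-word cap and a 15-word average sentence length) are invented for the purposes of the example and do not correspond to any actual or proposed legal requirement.

```python
# Illustrative sketch only: a hypothetical compliance check for a
# child-facing data use policy. The thresholds below are invented
# for illustration and reflect no actual legal rule.

def check_child_policy(policy_text: str,
                       max_words: int = 500,
                       max_avg_sentence_words: float = 15.0) -> dict:
    """Return simple length and readability metrics for a policy text."""
    # Crude sentence split on terminal punctuation, ignoring empty pieces.
    sentences = [s for s in
                 policy_text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = policy_text.split()
    avg_sentence_words = len(words) / len(sentences) if sentences else 0.0
    return {
        "word_count": len(words),
        "avg_sentence_words": round(avg_sentence_words, 1),
        "within_word_limit": len(words) <= max_words,
        "within_sentence_limit": avg_sentence_words <= max_avg_sentence_words,
    }

policy = ("We collect your name and game scores. "
          "We use them to run the game. "
          "We never sell your information.")
print(check_child_policy(policy))
```

Even a sketch this crude shows the design tension discussed in the text: measurable simplicity (fewer, shorter sentences) is easy to mandate, but says nothing about whether important nuances have been omitted.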

The use of an approach in this vein could in theory provide a more comprehensive and effective approach to safeguarding the data protection interests of children than reliance on Gillick competence in numerous ways. Most obviously, compelling data controllers to present information in a necessarily uncomplicated format would potentially reduce the need for complex assessments of child competence, as the information itself would be more user-friendly. Such an approach would, therefore, possibly dissolve many of the difficult issues set out above that are inherent in requiring data controllers to undertake their own competency assessments when engaging with children. Moreover, if the law were to establish specific standards regarding the content and format of data use policies aimed at children, this could help to ensure that all children received consistent and age-appropriate information. This standardisation could, therefore, eliminate possible discrepancies in the quality of information that children and their parents receive from different data controllers. To this end, it is worth noting that empirical research into the use of simplified data use policies has suggested that their use in some contexts can result in readers developing a higher level of knowledge of a data controller’s data handling practices, thereby highlighting the potential merits of a legally mandated “better policy” approach.Footnote57 Such an approach could also be complemented by the rollout of, and reliance on, educational data protection literacy initiatives aimed specifically at children. Initiatives of this sort, perhaps unsurprisingly, have been shown to be a highly effective way of improving a child’s grasp of contemporary data-handling practices.Footnote58

Attempting to address the problems associated with Gillick by phasing out requirements for data controllers to undertake their own competency assessments, and placing greater reliance on supposedly more effective data use policies and educational initiatives, would itself be problematic, however. Most obviously, given the above-mentioned complexity inherent in many contemporary data processing activities, it is doubtful whether data protection policies could always be presented in ways that were both simple and capable of providing children with a “full understanding”, as important nuances could be omitted from a simplified privacy notice. Concurrently, legal rules that compelled the provision of simplified data protection policies may have the adverse effect of restricting what data controllers were able to communicate to data subjects, which could itself lead to the omission of important information.

Furthermore, if the law were to place a greater emphasis on education and on data controllers providing “better” data use policies, and remove the need for data controllers to undertake their own competency assessments, this would shift the burden for ensuring good practice in data protection contexts away from data controllers and onto children themselves – an ethically dubious proposition. Alleviating data controllers of any responsibility to ascertain the competence of children to understand the implications of exercising their data protection rights and interests, for example, would necessarily presuppose that children themselves should de facto shoulder the responsibility for safeguarding their own data protection interests, rather than data controllers being responsible for ensuring their own good practice. This would seemingly be incompatible with the underlying ethos of existing data protection law which, as mentioned above, explicitly emphasises the idea that children deserve special protection from the law and should not be left to fend for themselves.

b.

A standardised system of “graduated” or “tiered” competence

Another alternative to Gillick competence would be the introduction of a system of standardised graduated or tiered competence. Under such an approach, a series of standardised competence levels could be devised according to the average cognitive development of a child of a certain age, with a child’s competence judged on a sliding scale according to their age (i.e. the older the child, the more competent they would be presumed to be, the existence of any significant compromising factors notwithstanding). For example, younger children might be deemed competent to exercise their data protection rights and interests in relation to data processing activities deemed simpler or “low risk” (e.g. participation in online gaming activities), but would not be deemed competent to do so in relation to more complex or “high risk” forms of data processing with potential long-term consequences (e.g. sharing personal data for purposes relating to medical research/biobanking initiatives) until they were several years older. An approach in this vein would recognise that children mature at different rates, categorising them into age or developmental stages to allow for age-appropriate decision-making, and would provide a graduated learning curve that let children take on greater responsibility for data protection-related decisions in accordance with their cognitive development.Footnote59 Concurrently, the existence of such a system in a standardised form would also eliminate the need for data controllers to perform their own complicated and time-consuming competency assessments.
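To make the shape of such a scheme concrete, the following Python sketch encodes a hypothetical tiered competence table. The age bands and risk tiers are invented for illustration only; they are not drawn from any legislation, guidance, or proposal, and their very rigidity previews the criticism developed below.

```python
# Illustrative sketch only: one way a standardised "tiered" competence
# scheme of the kind described above might be encoded. All thresholds
# are hypothetical.

# Hypothetical minimum ages at which a child would be presumed
# competent for each tier of processing risk.
PRESUMED_COMPETENCE_AGE = {
    "low":    10,   # e.g. participation in online gaming activities
    "medium": 13,   # e.g. routine profile data on an online service
    "high":   16,   # e.g. genetic data for biobanking/medical research
}

def presumed_competent(age: int, risk_tier: str) -> bool:
    """Presume competence when the child meets the tier's age threshold.

    A real scheme would also need to accommodate individual
    "compromising factors"; this lookup deliberately ignores them,
    illustrating the arbitrariness the article criticises.
    """
    return age >= PRESUMED_COMPETENCE_AGE[risk_tier]

print(presumed_competent(11, "low"))    # 11-year-old, low-risk processing
print(presumed_competent(11, "high"))   # same child, high-risk processing
```

The appeal for data controllers is obvious: a table lookup replaces a complex individual assessment. The cost, as discussed next, is that the table cannot see the child in front of it.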

Nevertheless, though a standardised system of tiered or graduated competence would perhaps provide consistency and clarity for data controllers, an approach to competence in this mould would likely be beset by numerous complications. The first of these would likely arise at the design stage, owing to difficulties in defining clear criteria for each standardised age group or developmental stage. This is mainly because, as has long been known, the process and speed of emotional and cognitive development differ greatly from child to child and are heavily affected by external environmental and cultural factors.Footnote60 As a result, any age-based markers for cognitive development utilised as part of a tiered competency system would necessarily fail to accommodate the crucial nuance that every child is different. With this being the case, any determination of competency made under a standardised system of tiered or graduated competence would necessarily be arbitrary. Difficulties would also likely emerge when attempting to devise a list of data processing activities that were “high” or “low” risk, due to the nebulous nature of the concept of risk itself. This is a challenge in the data protection field that has already been well-traversed in the literature.Footnote61 Furthermore, such an approach would also give rise to the possibility of underestimating the competence of younger children or overestimating the competence of older children, leading either to their decision-making abilities being unnecessarily restricted or to their being prematurely indulged. For all its possible faults, this is not a criticism that can be levelled at the Gillick approach to assessing competency.

c.

Interactive decision support tools

Another alternative means through which child competence could be assessed and given effect in data protection contexts, particularly in online environments, would be the legally mandated use of interactive decision-support tools (i.e. digital applications or systems that help people make informed decisions regarding matters of data protection). Notable examples of such tools include online games, such as those developed by organisations like Common Sense Media, a family-friendly technology and entertainment company, which are designed to teach children about uses of personal data and making safe choices regarding personal data sharing.Footnote62 In a similar vein, privacy and data protection simulations (i.e. applications that allow children to experience the dynamics of information flows and data streams through realistic simulated scenarios) represent another example of such a tool.Footnote63

The use of tools of this sort in data protection contexts is thought to be useful due to their potential to engage children in a fun and informative manner, making learning about data protection accessible and enjoyable. They also often provide immediate feedback, allowing children to gauge their understanding and, in so doing, identify areas where further improvement or development is needed, providing them with a personalised learning experience. Not only this, but the use of such tools has been shown to be an effective way of facilitating longer parent–child discussions and raising long term awareness of privacy and data protection-related matters.Footnote64 As such, they represent a prima facie promising way of enhancing child competence in matters of data protection. Against this background, it is conceivable that the law could play a key enabling role by mandating their use in certain contexts. For example, legal rules could specify that tools of this sort must be engaged with and successfully negotiated prior to children’s personal data being gathered and/or children’s data protection rights and interests being exercised. Such an obligation could be read alongside, and in conjunction with, Article 25 of the UK GDPR, which already imposes upon data controllers an obligation to implement appropriate technical and organisational measures intended to give effect to data protection rules and principles in their data processing activities.
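As a purely illustrative sketch of how a legally mandated “comprehension gate” of this kind might operate within an online consent flow, consider the following Python example. The questions, pass mark, and interface are all invented for the purposes of the example and do not represent any existing tool or requirement.

```python
# Illustrative sketch only: a minimal "comprehension gate" of the sort
# an interactive decision-support tool might implement before a
# consent flow proceeds. Questions and pass mark are hypothetical.

QUIZ = [
    ("Can you ask for your data to be deleted later?", True),
    ("Will your data be shared only as described in the notice?", True),
    ("Does clicking 'agree' mean your data can never be used?", False),
]

def comprehension_gate(answers: list[bool], pass_mark: float = 1.0) -> bool:
    """Allow the consent flow to continue only if enough answers match
    the expected answers recorded alongside each question."""
    correct = sum(a == expected for a, (_, expected) in zip(answers, QUIZ))
    return correct / len(QUIZ) >= pass_mark

# A child answering all three questions correctly passes the gate;
# with a 100% pass mark, one wrong answer does not.
print(comprehension_gate([True, True, False]))
print(comprehension_gate([True, True, True]))
```

The sketch also makes the limits of the approach visible: passing a short quiz demonstrates recall of a notice’s contents, not the deeper, consequence-bearing maturity discussed earlier in the article.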

There would, however, be likely drawbacks to such an approach. Most obviously, it would likely be far easier and more practicable to incorporate interactive support tools into online environments than offline ones. In situations where children’s personal data are sought, or where children attempt to exercise their data protection rights and interests, offline (e.g. a child making a data subject access request to a local council by way of a hand-written letter), it is far from clear how such tools could be effectively implemented. In any event, and regardless of the environment in which they are deployed, the effectiveness of tools of this sort would depend significantly on their quality and design. If they were not designed appropriately, or with relevant considerations in mind, there is the potential for them to oversimplify or excessively gamify important data protection concepts, leading to their users developing a superficial, incomplete or inaccurate understanding of important data protection issues (i.e. the opposite of their intended purpose). Alongside this is the possibility that legally mandating the use of these tools could lead to children experiencing decision fatigue when presented with too many options or considerations, or when asked to engage with such tools too frequently, potentially leading to choice paralysis.Footnote65 Another potential difficulty is the prospect of children viewing these sorts of tools as mere recreational games and puzzles, thereby failing to appreciate their educational value and content. For these reasons, and others, doubts have been expressed as to the extent to which the adoption of these tools can genuinely lead to heightened levels of understanding of data protection-related matters.Footnote66

6. Conclusion

This article considered the notion of competence, and how it serves as a threshold for children exercising their data protection rights and interests. To be clear, the aim of the article was not to make a general argument that no child can ever be competent to make decisions in matters of data protection. Instead, the aim of the article was to highlight how the concept of competence itself is poorly understood in this context. Against this background, the article did the following. First, it was highlighted how in its guidance the UK ICO draws specific attention to the concept of competence, and how children are entitled to exercise their data protection rights and interests if they are “competent” to do so. It was then explained how, despite being given a key role, the notion of competence in this context is poorly understood, with no definition or clarification being provided by legislation, case law, or regulatory guidance. With this being the case, the article then considered how some data controllers, in lieu of any concrete guidance on how child competence should be measured, have started to use Gillick competence as the means through which they assess child competence in data protection contexts. The article’s subsequent analyses then demonstrated that there are numerous problems that may arise from using the Gillick standard to assess a child’s competence in the context of exercising their data protection rights and interests, raising doubts as to the suitability of its deployment in matters of data protection. Some possible alternatives to Gillick competence were then examined. Whilst there were potential merits to all the alternatives considered, there were also notable possible limitations and drawbacks.
The upshot of these findings, therefore, is that if competency is to continue to act as the threshold at which data protection rights and interests become exercisable by children there is clearly a need to revisit and refine this concept, and indeed how it can be meaningfully and practically assessed.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 For example, a key driver behind the recent enactment of the Online Safety Act 2023 was the desire to protect children from harms arising from uses of digital technologies and participation in digital environments.

2 S Livingstone, M Stoilova and R Nandagiri, ‘Children’s data and privacy online: Growing up in a digital age. An evidence review.’ [2019] London School of Economics and Political Science.

3 (Retained EU Legislation) Regulation (EU) 2016/679 (United Kingdom General Data Protection Regulation) (UK GDPR). Available at: https://www.legislation.gov.uk/eur/2016/679/contents

5 For example, the United Nations Convention on the Rights of the Child, specifically requires that the evolving capacities of children, including their ability to exercise autonomy and make decisions, is respected where appropriate.

6 See, for example: E Lievens and V Verdoodt, ‘Looking for needles in a haystack: Key issues affecting children’s rights in the General Data Protection Regulation’ [2018] 34(2) Computer Law & Security Review 269–78; J C Buitelaar, ‘Child’s best interests and informational self-determination: what the GDPR can learn from children’s rights’ [2018] 8(4) International Data Privacy Law 293–308; K La Fors, ‘Legal Remedies for a Forgiving Society: Children’s rights, data protection rights and the value of forgiveness in AI-mediated risk profiling of children by Dutch authorities’ [2020] 38 Computer Law & Security Review 105430; E Nottingham, C Stockman and M Burke. ‘Education in a datafied world: Balancing children’s rights and school’s responsibilities in the age of Covid 19’ [2022] 45 Computer Law & Security Review 105664.

7 See, for example: M Taylor et al, ‘When can the Child Speak for Herself? The Limits of Parental Consent in Data Protection Law for Health Research’ [2018] 26(3) Medical Law Review 369–91.

8 Article 4(1) UK GDPR defines personal data as “any information relating to an identified or identifiable natural person”.

9 Information society services are defined by Article 1(1)(b) Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (codification) as “any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services.” See also: Section 9 Data Protection Act 2018.

10 Article 15 UK GDPR.

11 Article 16 UK GDPR.

12 Article 17 UK GDPR.

13 See: Recital 65 UK GDPR and ICO (n 4) 42.

14 Ibid.

15 Ibid, 16.

16 Article 4(11) UK GDPR specifies that a data subject’s consent will only be valid where it is freely given, specific, informed and unambiguous. Article 7 of the UK GDPR sets out further requirements that must be satisfied for a data subject’s consent to be valid.

17 ICO (n 4) 17.

18 Ibid, 41.

19 Ibid.

20 Ibid, 16.

21 Ibid, 17.

22 A Kolber, ‘Smoothing Vague Laws’ in G Keil and R Poscher (eds), Vagueness and Law: Philosophical and Legal Perspectives (OUP 2016).

23 For example, Essex County Council and Barnet London Borough Council have both adopted guidelines which establish Gillick competence as a threshold for children exercising their data protection rights and interests. See: Essex County Council [2023] Subject Access: A Parent’s Guide (available at: https://www.essex.gov.uk/sites/default/files/migration_data/files/assets.ctfassets.net/knkzaf64jx5x/7yENkAuTZlNvUmizglaiD9/3e1b8e1f7df3dc6ffd837e8591abffa5/Parental-Guardian_Guide_to_Service_access_request.pdf); Barnet London Borough Council [2015] Barnet Partnership Information Sharing Protocol (available at: https://www.barnet.gov.uk/sites/default/files/assets/citizenportal/documents/councilanddemocracy/2015JulyInformationSharingProtocol.PDF).

24 Health Service Notice (H.N. (80) 46).

25 Gillick v. West Norfolk and Wisbech Area Health Authority and Another [1982 G. No. 2278] [1984] Q.B. 581 at 588

26 Health Services Management. Family planning services for young people. Advance copy LASSL(81)2, later issued as HN(81)5. Department of Health and Social Security, 1980.

27 Gillick v West Norfolk and Wisbech Area Health Authority [1985] 3 All ER 402 at 421.

28 Ibid, 409.

29 Ibid, 410.

30 Ibid, 423–24.

31 Despite having become widely used in medical and other contexts, the application of Gillick competence has not been without controversy. Due to various ethical challenges associated with the concept some observers, for example, have described it as having endured a “tortured history”. See: N Lennings, ‘Forward, Gillick: Are competent children autonomous medical decision makers? New developments in Australia’ [2015] 2(2) Journal of Law and the Biosciences 459–68.

32 Re C (Adult: Refusal of Treatment) [1993] 1 FLR 31.

33 P Boddington and M Gregory, ‘Adolescent Carrier Testing in Practice: The Impact of Legal Rulings and Problems with “Gillick Competence”’ [2008] 17 Journal of Genetic Counselling 509–21.

34 Re R (A minor) (Wardship: Consent to treatment) [1991] 3 WLR 592.

35 A Hanlon and K Jones, ‘Ethical concerns about social media privacy policies: do users have the ability to comprehend their consent actions?’ [2023] Journal of Strategic Marketing.

36 See: J Zhao et al, ‘“I make up a silly name”: Understanding Children’s Perception of Privacy Risks Online’ [2019] 106 CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems 1–13; S Livingstone, L Haddon and A Görzig, Children, Risk and Safety on the Internet: Research and Policy Challenges in Comparative Perspective (Policy Press 2012).

37 V Mayer-Schönberger and K Cukier, Big Data: A Revolution That Will Transform How We Live, Work and Think (John Murray 2013).

38 Gillick v West Norfolk and Wisbech Area Health Authority [1985] 3 All ER 402 at 424.

39 P Boddington and M Gregory (n 33).

40 Ibid. See also: E Cave, ‘Goodbye Gillick? Identifying and resolving problems with the concept of child competence’ [2014] 34(1) Legal Studies 103–22.

41 For an overview of some notable data protection and privacy-related risks arising in the biobanking context, see: T Kasperbauer et al, ‘Communicating Identifiability Risks to Biobank Donors’ [2018] 27(1) Cambridge Quarterly of Healthcare Ethics 123–36; M Morrison et al, ‘The European General Data Protection Regulation: challenges and considerations for iPSC researchers and biobanks’ [2017] 12(6) Regenerative Medicine 693–703.

42 S Jurgens et al, ‘Analysis of rare genetic variation underlying cardiometabolic diseases and traits among 200,000 individuals in the UK Biobank’ [2022] 54(3) Nature Genetics 240–50.

43 Research has shown that revelations of this sort, following the analysis of genetic data, can in some instances have negative psychological impacts on affected persons. See: S Sanderson et al, ‘Psychological and behavioural impact of returning personal results from whole-genome sequencing: the HealthSeq project’ [2017] 25(3) European Journal of Human Genetics 280–92.

44 See, for example: J van Dijk, The Digital Divide (Polity 2020).

45 See, for example: UNICEF, Closing the Digital Divide for Good: An end to the digital exclusion of children and young people in the UK (UNICEF 2021), available at: https://www.unicef.org.uk/wp-content/uploads/2021/06/Closing-the-Digital-Divide-for-Good_FINAL.pdf

46 See, for example: G Watts, ‘COVID-19 and the digital divide in the UK’ [2020] 2(8) The Lancet.

47 UNICEF (n 45).

48 See: M Albers, ‘Realizing the Complexity of Data Protection’ in Gutwirth and others (eds), Reloading Data Protection (Springer 2013) 213–35; L Pleger, K Guirguis and A Mertes, ‘Making public concerns tangible: An empirical study of German and UK citizens’ perception of data protection and data security’ [2021] 122 Computers in Human Behavior 106830.

49 M Duplaga, ‘Digital divide among people with disabilities: Analysis of data from a nationwide study for determinants of Internet use and activities performed online’ [2017] 12(6) PLoS ONE; T Wu et al, ‘Is digital divide an issue for students with learning disabilities?’ [2014] 39 Computers in Human Behavior 112–17.

50 J Brierley and V Larcher, ‘Adolescent autonomy revisited: clinicians need clearer guidance’ [2016] 42(8) Journal of Medical Ethics 482–85; C Fenton, ‘Is consent causing confusion for clinicians? A survey of child and adolescent Mental Health professional’s confidence in using Parental Consent, Gillick Competence and the Mental Capacity Act’ [2020] 25(4) Clinical Child Psychology and Psychiatry 922–31; D Hunter and B Pierscionek, ‘Children, Gillick competency and consent for involvement in research’ [2007] 33(11) Journal of Medical Ethics 569–662.

51 R Griffith, ‘What is Gillick competence?’ [2016] 12(1) Human Vaccines and Immunotherapeutics 244–47.

52 H Pearce, ‘A proposal for a new risk-based licensing approach to disclosing anonymised data under the (UK) Freedom of Information Act 2000’ [2021] 30(2) Information & Communications Technology Law 108–39.

53 N Zimmerman, ‘Gillick Competence: An Unnecessary Burden’ [2019] 25(1) A Multidisciplinary Journal of Biotechnology and the Body.

54 C Jensen and C Potts, ‘Privacy policies as decision-making tools: an evaluation of online privacy notices’ [2004] Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 471–78.

55 Recital 58 UK GDPR.

56 See: C Ciochetti, ‘E-Commerce and Information Privacy: Privacy Policies and Personal Information Protectors’ [2007] 44(55) American Business Law Journal 110–26.

57 Y Meier, J Schäwel and N Krämer, ‘The Shorter the Better? Effects of Privacy Policy Length on Online Privacy Decision-Making’ [2020] 8(2) The Politics of Privacy: Communication and Media Perspectives in Privacy Research.

58 L Desimpelaere, L Hudders and D Van de Sompel, ‘Knowledge as a strategy for privacy protection: How a privacy literacy training affects children’s online disclosure behaviour’ [2020] 110 Computers in Human Behavior 106382; E Vanderhoven, T Schellens and M Valcke, ‘Changing Unsafe Behaviour on Social Network Sites. Collaborative Learning vs. Individual Reflection’ in Walrave and others (eds), Youth 2.0: Social Media and Adolescence (Springer 2016) 211–26.

59 To this end, a tiered competency model would bear resemblance to models of tiered consent that have been proposed for use in other settings involving collections and uses of personal data. See: H Kim et al, ‘iCONCUR: informed consent for clinical data and bio-sample use for research’ [2017] 24(2) Journal of the American Medical Informatics Association 380–87; E Bunnik et al, ‘A tiered-layered-staged model for informed consent in personal genome testing’ [2013] 21 European Journal of Human Genetics 596–601.

60 J Piaget, ‘Intellectual Evolution from Adolescence to Adulthood’ [1972] 15(1) Human Development 1–12; K W Fischer and L Silvern, ‘Stages and Individual Differences in Cognitive Development’ [1985] 36 Annual Review of Psychology 613–48.

61 See, for example: H Pearce, ‘Brexit and Data Protection Law: A Missed Opportunity for Innovative Reform?’ in E Celeste et al (eds), Data Protection and Digital Sovereignty Post-Brexit (Hart 2023) 35–58.

62 For example, Digital Compass by Common Sense Education is an online game that is designed to teach children about fundamental aspects of digital citizenship through interactive choose-your-own-path activities. Common Sense, ‘Ready to play Digital Compass?’ (Digital Compass) <https://www.digitalcompass.org/> accessed 15 December 2023.

63 L Bioglio et al, ‘A Social Network Simulation Game to Raise Awareness of Privacy Among School Children’ [2019] 12(4) IEEE Transactions on Learning Technologies 456–69.

64 L Zhang-Kennedy, Y Abdelaziz and S Chiasson, ‘Cyberheroes: The design and evaluation of an interactive ebook to educate children about online privacy’ [2017] 13 International Journal of Child-Computer Interaction 10–18; P Kumar et al, ‘Co-designing online privacy-related games and stories with children’ [2018] Proceedings of the 17th ACM Conference on Interaction Design and Children 67–79.

65 On this issue, see: Y Liu et al, ‘The Effect of Privacy Fatigue on Privacy Decision-Making Behavior’ [2023] Proceedings of the Human Factors and Ergonomics Society Annual Meeting.

66 D Reinhardt, J Borchard and J Hurtienne, ‘Visual Interactive Privacy Policy: The Better Choice?’ [2021] Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems 1–12.