Research Article

Unlocking the black box: analysing the EU artificial intelligence act’s framework for explainability in AI

Pages 293-308 | Received 09 Sep 2023, Accepted 23 Oct 2023, Published online: 09 Feb 2024
 

ABSTRACT

The lack of explainability of Artificial Intelligence (AI) is one of the first obstacles that the industry and regulators must overcome to mitigate the risks associated with the technology. The need for ‘eXplainable AI’ (XAI) is evident in fields where accountability, ethics and fairness are critical, such as healthcare, credit scoring, policing and the criminal justice system. At the EU level, the notion of explainability is one of the fundamental principles that underpin the AI Act, though the exact XAI techniques and requirements are still to be determined and tested in practice. This paper explores various approaches and techniques that promise to advance XAI, as well as the challenges of implementing the principle of explainability in AI governance and policies. Finally, the paper examines the integration of XAI into EU law, emphasising the issues of standard setting, oversight, and enforcement.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 Stanford University, ‘Measuring trends in Artificial Intelligence’, (2023) AI Index Report, https://aiindex.stanford.edu/report/ (accessed 15 January 2024).

2 Fortune Business Insights, ‘Artificial Intelligence Market’ (2023) Market Research Report, www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-market-100114 (accessed 15 January 2024).

3 McKinsey, ‘The state of AI in 2022 – and a half decade in review’ (2022) Survey, www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review (accessed 15 January 2024).

4 Bernard Cohen, ‘Commentary: The Fear and Distrust of Science in Historical Perspective’ (1981) 6 Science, Technology, & Human Values 20; Marita Sturken, Douglas Thomas and Sandra Ball-Rokeach, Technological Visions: Hopes and Fears That Shape New Technologies (Temple University Press 2004).

5 This may be due to several factors, such as biased training data, data collection methods, feature selection, and feedback loops; Frederik Zuiderveen Borgesius, ‘Discrimination, Artificial Intelligence, and Algorithmic Decision-Making’ (2018) Council of Europe Study.

6 OECD, ‘Artificial Intelligence and Employment’ (2021) OECD Policy Brief; see also Accenture, ‘A New Era of Generative AI for Everyone’ (2023) Accenture Report. According to this report, 40% of all working hours could be affected by Large Language Models (LLMs) such as GPT-4.

7 European Union Agency for Cybersecurity, ‘Artificial Intelligence Cybersecurity Challenges’ (2020) ENISA Report.

8 Nick Srnicek, ‘Platform monopolies and the political economy of AI’ in John McDonnell (ed) Economics for the many (Verso 2018); Pieter Verdegem, ‘Dismantling AI capitalism: the commons as an alternative to the power concentration of Big Tech’ (2022) AI & Society https://doi.org/10.1007/s00146-022-01437-8 (accessed 15 January 2024).

9 Jenna Burrell, ‘How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 1; Davide Castelvecchi, ‘Can We Open the Black Box of AI?’ (2016) 538 Nature 21; Warren von Eschenbach, ‘Transparency and the Black Box Problem: Why We Do Not Trust AI’ (2021) 34 Philosophy & Technology 1607.

10 Roger Brownsword and Alon Harel, ‘Law, Liberty and Technology: Criminal Justice in the Context of Smart Machines’ (2019) 15 International Journal of Law in Context 107; Abdul Malek, ‘Criminal courts’ Artificial Intelligence: The Way it Reinforces Bias and Discrimination’ (2022) 2 AI and Ethics 233; Michael Bücker and others, ‘Transparency, Auditability, and Explainability of Machine Learning Models in Credit Scoring’ (2022) 73 Journal of the Operational Research Society 70; Georgios Pavlidis, ‘Deploying Artificial Intelligence for Anti-Money Laundering and Asset Recovery: The Dawn of a New Era’ (2023) 26 Journal of Money Laundering Control 155.

11 Executive Office of the U.S. President, ‘Big data: a report on algorithmic systems, opportunity, and civil rights’ (2016) Executive Office of the President Report.

12 Regulation (EU) 2023/1114 of the European Parliament and of the Council of 31 May 2023 on markets in crypto-assets [2023] OJ L150/40; Georgios Pavlidis, ‘Europe in the Digital Age: Regulating Digital Finance without Suffocating Innovation’ (2021) 13 Law, Innovation and Technology 464.

13 Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector [2022] OJ L333/1.

14 European Commission, ‘Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)’ (Communication) COM(2021) 206 final. The final text of the AI Act was approved by the co-legislators on 8 December 2023, but it had not yet been published in the Official Journal as of January 2024.

15 Gonçalo Carriço, ‘The EU and Artificial Intelligence: A Human-Centred Perspective’ (2018) 17 European View 29; see also Paul Lukowicz, ‘The Challenge of Human Centric AI’ (2019) 3 Digitale Welt 9.

16 European External Action Service, ‘Shared Vision, Common Action: A Stronger Europe – A Global Strategy for the European Union’s Foreign And Security Policy’ (2016) www.eeas.europa.eu/sites/default/files/eugs_review_web_0.pdf (accessed 15 January 2024).

17 European Commission, ‘Fostering a European approach to Artificial Intelligence’ (Communication) COM(2021) 205 final.

18 See also European Commission, ‘Building Trust in Human-Centric Artificial Intelligence’ (Communication) COM(2019) 168; European Commission, ‘Fostering a European approach to Artificial Intelligence’ (Communication) COM(2021) 205; European Commission, ‘White Paper on Artificial Intelligence’ (Communication) COM(2020) 65 final.

19 European Commission, ‘Ethics Guidelines for Trustworthy AI, High-Level Expert Group on AI’ (2019) https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed 15 January 2024).

20 Lena Enqvist, ‘“Human oversight” in the EU artificial intelligence act: what, when and by whom?’ (2023) 15 Law, Innovation and Technology 508, https://doi.org/10.1080/17579961.2023.2245683 (accessed 15 January 2024).

21 OECD, ‘Recommendation of the Council on Artificial Intelligence’ (2019) OECD/LEGAL/0449, https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 (accessed 15 January 2024).

22 Ryan Budish, ‘AI’s Risky Business: Embracing Ambiguity in Managing the Risks of AI’ (2021) 16 Journal of Business & Technology Law 259.

23 Nathalie Smuha, ‘From a ‘race to AI’ to a ‘race to AI regulation’: Regulatory Competition for Artificial Intelligence’ (2021) 13 Law, Innovation and Technology 57; see the legislative initiatives on AI in Brazil (Projeto de Lei n° 2338, de 2023), in China (2021 regulation on recommendation algorithms; 2022 rules for deep synthesis; 2023 draft rules on generative AI), and in Canada (Draft law C-27, Digital Charter Implementation Act 2022, Part 3: Artificial Intelligence and Data Act). The United Kingdom, in contrast to the EU’s AI Act, does not plan to introduce sweeping new laws to govern AI, but rather to strengthen the roles of existing regulatory bodies like the Information Commissioner’s Office, the Financial Conduct Authority, and the Competition and Markets Authority. These bodies will be empowered to provide guidance and oversee the use of AI within their respective areas of responsibility; UK Secretary of State for Science, Innovation and Technology, ‘A pro-innovation approach to AI regulation’ (2023) Policy Paper presented to the Parliament, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper (accessed 15 January 2024).

24 Among numerous studies, see Christine Chinkin, ‘The Challenge of Soft Law: Development and Change in International Law’ (1989) 38 International & Comparative Law Quarterly 850; Kenneth Abbott and Duncan Snidal, ‘Hard and Soft Law in International Governance’ (2000) 54 International Organization 421; Bryan Druzin, ‘Why does Soft Law Have any Power Anyway?’ (2017) 7 Asian Journal of International Law 361.

25 Gregory Shaffer and Mark Pollack, ‘Hard Versus Soft Law in International Security’ (2011) 52 Boston College Law Review 1147.

26 Emer O'Hagan, ‘Too soft to handle? A reflection on soft law in Europe and accession states’ (2004) 26 Journal of European Integration 379; Jan Klabbers, ‘The Undesirability of Soft Law’ (1998) 67 Nordic Journal of International Law 381.

27 On these negotiations, see European Parliament Legislative Observatory, ‘Artificial Intelligence Act, 2021/0106(COD)’ (2023) https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2021/0106(COD)&l=en (accessed 15 January 2024).

28 According to the study, ‘it should be made clear that controllers have best-effort obligations to provide data subjects with individualised explanations when their data are used for automated decision-making: these explanations should specify what factors have determined unfavourable assessments or decisions […] This obligation has to be balanced with the need to use the most effective technologies. Explanations may be high-level, but they should still enable users to contest detrimental outcomes’; European Parliament, ‘The impact of the GDPR on artificial intelligence’ (2020) Scientific Foresight Unit (STOA) Options Brief, PE 641.530.

29 This is an illustration of the concept known as the ‘Brussels effect’, which pertains to the EU’s independent ability to control global markets through regulations; see Anu Bradford, The Brussels Effect – How the European Union rules the world (Oxford University Press 2020).

30 Article 3(1) of the proposal defined AI system as ‘software that is developed with [specific] techniques and approaches [listed in Annex 1] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with’.

31 Council of the European Union, ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – General approach’ Doc. 14954/22, 25 November 2022; according to the Council’s definition, AI systems are ‘systems developed through machine learning approaches and logic – and knowledge-based approaches’.

32 European Parliament, ‘Amendments adopted on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ COM(2021)0206 – C9-0146/2021–2021/0106(COD); according to the European Parliament’s proposal, ‘“artificial intelligence system” (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments’.

33 The final text of the AI Act had not yet been published in the Official Journal (as of 15 January 2024).

34 Trade and Technology Council, ‘EU-U.S. Terminology and Taxonomy for Artificial Intelligence’ (2023) Annex A https://digital-strategy.ec.europa.eu/en/library/eu-us-terminology-and-taxonomy-artificial-intelligence (accessed 15 January 2024).

35 Jonas Schuett, ‘Defining the Scope of AI Regulations’ (2023) 15 Law, Innovation and Technology 60.

36 Michael Veale and Frederik Zuiderveen Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) 4 Computer Law Review International 97.

37 Risk-based regulation is no panacea; serious issues arise regarding justification and legitimation, along with risk-scoring, enforcement, and compliance; Robert Baldwin, Martin Cave and Martin Lodge, ‘Risk-based Regulation’ in Robert Baldwin and others (eds), Understanding Regulation: Theory, Strategy, and Practice (Oxford University Press 2011).

38 The key international standard-setter in this field, the Financial Action Task Force, summarized the advantages of this approach: ‘A risk-based approach involves tailoring the supervisory response to fit the assessed risks. This approach allows supervisors to allocate finite resources to effectively mitigate the […] risks they have identified and that are aligned with national priorities […] A robust risk-based approach includes appropriate strategies to address the full spectrum of risks, from higher to lower risk sectors and entities. Implemented properly, a risk-based approach is more responsive, less burdensome, and delegates more decisions to the people best placed to make them’; Financial Action Task Force, Risk-Based Supervision (FATF 2021).

39 Christian Meske and others, ‘Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities’ (2022) 39 Information Systems Management 53; these authors propose the following way to distinguish between the two concepts: when humans can comprehend the system’s logic and behaviour directly, without supplementary clarifications, the right term is ‘interpretable AI’, which may be regarded as an inherent trait of the system; if, however, humans need explanations as an intermediary to understand the system’s workings, the field is termed research on ‘explainable AI’.

40 OECD, ‘Recommendation of the Council on Artificial Intelligence’ (2019) OECD/LEGAL/0449, https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 (accessed 15 January 2024) (Principle 1.3 – Rationale).

41 On the XAI initiatives of the U.S. Department of Defense (Defense Advanced Research Projects Agency, DARPA) see David Gunning and others, ‘DARPA's explainable AI (XAI) program: A retrospective’ (2021) 2 Applied AI Letters, https://doi.org/10.1002/ail2.61 (accessed 15 January 2024).

42 For example, IBM has proposed a precision regulation framework, which also refers to the need for explainable AI: ‘Any AI system on the market that is making determinations or recommendations with potentially significant implications for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion. To achieve that, it is necessary for organizations to maintain audit trails surrounding their input and training data. Owners and operators of these systems should also make available – as appropriate and in a context that the relevant end-user can understand – documentation that detail essential information for consumers to be aware of, such as confidence measures, levels of procedural regularity, and error analysis’; see www.ibm.com/policy/ai-precision-regulation/ (accessed 15 January 2024).

43 See e.g. the Global Summit ‘AI for Good’, organized by the International Telecommunications Union, in partnership with 40 UN Agencies in 2023; https://aiforgood.itu.int/about-ai-for-good/ (accessed 15 January 2024).

44 In this context, the problem of inequitable access to XAI emerges; due to the costs of XAI, SMEs may get left behind in this process and be priced out of XAI; see Jonathan Dodge, ‘Position: Who Gets to Harness (X)AI? For Billion-Dollar Organizations Only’ (2021) Joint Proceedings of the ACM IUI 2021 Workshops, https://ceur-ws.org/Vol-2903/IUI21WS-TExSS-5.pdf (accessed 15 January 2024).

45 Riccardo Guidotti and others, ‘A Survey of Methods for Explaining Black Box Models’ (2018) 51 ACM Computing Surveys 1.

46 Alex Freitas, ‘Comprehensible classification models: A position paper’ (2013) 15 ACM SIGKDD Explorations Newsletter 1.

47 An-phi Nguyen and María Rodríguez Martínez, ‘On Quantitative Aspects of Model Interpretability’ (2020) ArXiv, https://arxiv.org/abs/2007.07584 (accessed 15 January 2024); Been Kim and others, ‘Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)’ (2018) 80 Proceedings of the 35th International Conference on Machine Learning 2668.

48 E.g. permutation importance, SHAP (SHapley Additive exPlanations), and LIME (Local Interpretable Model-agnostic Explanations).
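Of the techniques listed in note 48, permutation importance is perhaps the simplest to state: shuffle one feature’s values and measure how much the model’s error grows; features whose shuffling degrades performance most are deemed most important. The following is a minimal, self-contained Python sketch of that idea under a toy model (the function names, data, and “black box” model are illustrative, not drawn from SHAP, LIME, or any cited library):

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Estimate feature importance as the average increase in mean squared
    error after randomly shuffling one feature's column, holding the rest."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-target association
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            deltas.append(mse(X_perm) - baseline)
        importances.append(sum(deltas) / n_repeats)
    return importances

# Toy data: the target depends only on the first feature.
X = [[float(i), float(i % 3)] for i in range(20)]
y = [3.0 * row[0] for row in X]
model = lambda row: 3.0 * row[0]  # a "black box" that ignores feature 1

imp = permutation_importance(model, X, y)
```

Because the toy model ignores the second feature, shuffling it leaves the error unchanged (importance near zero), while shuffling the first feature inflates the error; the same contrastive logic underlies the model-agnostic tools the note cites.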

49 Sejong Oh, ‘Predictive case-based feature importance and interaction’ (2022) 593 Information Sciences 155.

50 Riccardo Guidotti, ‘Counterfactual Explanations and How to Find Them: Literature Review and Benchmarking’ (2022) Data Mining and Knowledge Discovery, https://doi.org/10.1007/s10618-022-00831-6 (accessed 15 January 2024).

51 Riccardo Guidotti and others, ‘A Survey of Methods for Explaining Black Box Models’ (2018) 51 ACM Computing Surveys 1.

52 Thomas Rojat and others, ‘Explainable Artificial Intelligence (XAI) on Time Series Data: A Survey’ (2021) ArXiv, https://arxiv.org/abs/2104.00950 (accessed 15 January 2024).

53 OECD, ‘Recommendation of the Council on Artificial Intelligence’ (2019) OECD/LEGAL/0449, https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 (accessed 15 January 2024) (Principle 1.3 – Rationale).

54 European Commission, ‘Explanatory Memorandum, Proposal for a Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act)’ (Communication) COM(2021) 206 final, section 3.5.

55 Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure [2016] OJ L157/1.

56 Andrew Silva and others, ‘Explainable Artificial Intelligence: Evaluating the Objective and Subjective Impacts of XAI on Human-Agent Interaction’ (2023) 39 International Journal of Human–Computer Interaction 1390; Michael Chromik and Andreas Butz, ‘Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces’ in C Ardito and others, Human-Computer Interaction – INTERACT 2021 (Springer 2021).

57 Elena Benderskaya, ‘Nonlinear Trends in Modern Artificial Intelligence: A New Perspective’ in Jozef Kelemen, Jan Romportl and Eva Zackova (eds), Beyond Artificial Intelligence. Topics in Intelligent Engineering and Informatics (Springer 2013); Marina Vidovic and others, ‘Feature Importance Measure for Non-linear Learning Algorithms’ (2016) ArXiv, https://arxiv.org/abs/1611.07567 (accessed 15 January 2024).

58 Antoine Hudon and others, ‘Explainable Artificial Intelligence (XAI): How the Visualization of AI Predictions Affects User Cognitive Load and Confidence’ in Fred Davis and others (eds), Information Systems and Neuroscience (Springer 2021).

59 Maximilian Förster and others, ‘User-centric explainable AI: design and evaluation of an approach to generate coherent counterfactual explanations for structured data’ (2022) Journal of Decision Systems (Latest Articles), https://doi.org/10.1080/12460125.2022.2119707 (accessed 15 January 2024).

60 Rui Zhang and others, ‘An Ideal Human: Expectations of AI Teammates in Human-AI Teaming’ (2021) 4 Proceedings of the ACM on Human-Computer Interaction 1.

61 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L119/1.

62 The new legislative framework, adopted in 2008, deals with conditions for the placement of products in the EU internal market. The framework clarifies rules for accreditation and CE marking and provides a toolbox for consistent sector-specific legislation; according to the 2022 evaluation, the NLF has achieved most of these objectives; see European Commission, ‘Executive Summary of the Evaluation of the New Legislative Framework’ (Commission Staff Working Document) SWD(2022) 365 final and SWD(2022) 364 final.

63 Recital (61) EU AI Act; see also Martin Ebers, ‘Standardizing AI - The Case of the European Commission’s Proposal for an Artificial Intelligence Act’ in Larry DiMatteo, Michel Cannarsa and Cristina Poncibò (eds), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (Cambridge University Press 2021).

64 The European Committee for Electrotechnical Standardization (CENELEC) is an association that brings together the National Electrotechnical Committees of 34 European countries, which work together to prepare voluntary standards in the electrotechnical field.

65 Michael Veale and Frederik Zuiderveen Borgesius (n 36), citing the Case C-171/11 Fra.bo SpA v Deutsche Vereinigung des Gas- und Wasserfaches eV [2012] ECLI:EU:C:2012:453, as well as the opinion of Advocate General Trstenjak in the same case.

66 Araz Taeihagh, ‘Governance of Artificial Intelligence’ (2021) 40 Policy and Society 137.

67 Linda Senden, ‘Soft Law, Self-Regulation and Co-Regulation in European Law: Where Do They Meet?’ (2005) 9 Electronic Journal of Comparative Law, https://ssrn.com/abstract=943063 (accessed 15 January 2024). In this context, technological management must also be part of the regulatory framework; see Roger Brownsword, Law, Technology and Society: Reimagining the Regulatory Environment (Routledge 2019).

68 Claudio Novelli, Mariarosaria Taddeo and Luciano Floridi, ‘Accountability in Artificial Intelligence: what it is and how it works’ (2023) AI & Society, https://doi.org/10.1007/s00146-023-01635-y (accessed 15 January 2024).

69 In this context, the Commission’s proposal moves in the right direction; for some types of serious infringements, it provides for administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher (Article 71 par. 3).

70 Michael Veale and Frederik Zuiderveen Borgesius (n 36).

71 Nevertheless, the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) highlight that certain aspects of the AI regulation proposal are unclear, such as the roles, powers, and (most importantly) independence of market surveillance authorities; see European Data Protection Board and European Data Protection Supervisor, ‘Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence’ (2021) EDPS-EDPB Joint Opinion.

72 Georgios Pavlidis, ‘The birth of the new anti-money laundering authority: harnessing the power of EU-wide supervision’ (2023) Journal of Financial Crime (Latest Articles), https://doi.org/10.1108/JFC-03-2023-0059 (accessed 15 January 2024); Georgios Pavlidis, ‘Learning from failure: cross-border confiscation in the EU’ (2019) 26 Journal of Financial Crime 683.

73 Moreover, it will be difficult to identify a single competent authority for an AI operator that is active on several national markets; Martina Anzini, ‘The Artificial Intelligence Act Proposal and its implications for Member States’ (2021) EIPA Briefing 2021/5.

74 Peter Cihon, ‘Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development’ (2019) Technical Report, Future of Humanity Institute, University of Oxford; see also Oxford Analytica, ‘EU’s leadership on AI governance faces tough tests’ (2021) Expert Briefings, https://doi.org/10.1108/OXAN-DB261321 (accessed 15 January 2024).

75 Stefan Larsson, ‘AI in the EU: Ethical Guidelines as a Governance Tool’ in Antonina Bakardjieva Engelbrekt and others (eds), The European Union and the Technology Shift (Palgrave Macmillan 2021); Anna Marchenko and Mark Entin, ‘Artificial Intelligence and Human Rights: What is the EU’s approach?’ (2022) 3 Digital Law Journal 43.

Additional information

Notes on contributors

Georgios Pavlidis

Georgios Pavlidis is UNESCO Chair & Jean Monnet Chair and Associate Professor of International and EU Law at Neapolis University Pafos, Cyprus.
