Contemporary Social Science
Journal of the Academy of Social Sciences
Volume 18, 2023 - Issue 5

Policy making and artificial intelligence in Scotland

Hartwig Pautz
Pages 618-636 | Received 06 Nov 2023, Accepted 04 Dec 2023, Published online: 18 Dec 2023

ABSTRACT

The article presents an exploratory qualitative single case study about whether and how artificial intelligence (AI) is used by the Scottish Government, about the key concerns relating to its usage, and about obstacles to, and drivers of, AI usage. Besides the academic literature and published reports, the analysis rests on 12 semi-structured interviews. Interviewees include Scottish Government employees, experts from academia and representatives of commercial and non-commercial AI and Big Data organisations. The article finds that the Scottish Government has, so far, made little use of AI. Currently, AI is used in very limited ways in process automation and for gaining ‘cognitive insights’ with the human in control. There are no ‘strategic’ AI applications where advanced reasoning and ‘decision-making by algorithm’ play a role. Data-driven e-policy making is not currently on the cards. The reasons are the Scottish Government’s wariness of AI, a lack of ‘digital maturity’ (concerning Big Data and digital infrastructure, but also expertise) in the public sector, and ethical concerns around the use of AI. Governments need to conduct a debate about the extent of AI usage to avoid ‘AI creep’ in their institutions and to ensure that AI does not have negative consequences for democracy.

1. Introduction

Artificial intelligence (AI) can be defined as a constellation of technologies – from machine learning to natural language processing – that allows machines to sense, comprehend, act and learn intelligently with some degree of autonomy (see Zuiderwijk et al., Citation2021 for a summary of definitions). AI, in other words, is a system with the ability to think and learn (Russell & Norvig, Citation2016). Such systems do so on the basis of ‘Big Data’ – large, diverse and quickly accrued data from different sources (Höchtl et al., Citation2016) – and use (self-improving) algorithms to establish relationships within data that humans might not have expected, might have been incapable of envisaging, or could not have identified at this speed. AI has been discussed in technological and social scientific terms for many decades, sometimes hyperbolically, both in academic circles and by the wider public. Expert surveys have indicated that some believe that AI will outperform humans in all tasks by the middle of the twenty-first century (e.g. Grace et al., Citation2018), and some have worried that AI poses a severe threat to our civilisation (e.g. Bostrom, Citation2014; Russell, Citation2019).

AI has been a significant economic growth factor (PwC, Citation2017) and has become key to everyday transactions such as banking and online recommendation systems as technological developments around AI have gathered pace (UKRI, Citation2019). Since 2022, easy-to-use and publicly available chatbots such as OpenAI’s ChatGPT (Generative Pre-trained Transformer) and Google’s Bard (e.g. Hern, Citation2023; Lock, Citation2022), able to generate creative and detailed answers to difficult prompts and to engage in back-and-forth conversations, have put AI into the headlines. These developments have led politicians, AI experts, and even the leaders of AI technology companies to express concern about the potentially severe negative impact of AI on the whole of society (see e.g. Future of Life, Citation2023; Naughton, Citation2023) and have led to first steps towards a global effort to regulate AI (e.g. Milmo & Stacey, Citation2023; UK Government, Citation2023).

Some time ago, national governments and the European Union, through the European Parliament, started to design regulation for the application of AI (e.g. Digital Scotland, Citation2021; European Parliament, Citation2023; UK Government, Citation2021). A 2019 review for the OECD found that at least 50 countries had developed, or were in the process of developing, national AI strategies (Berryhill et al., Citation2019). Such high-level political commitment to regulation shows that the impact of AI on society is taken seriously. Discussions and regulatory efforts also address the question of how governments may themselves use AI for policy design and decision making (Thierer et al., Citation2017). This concern should not be surprising, given that the public sector’s ‘digital data troves’ are bigger and are growing at a faster rate than those of the private sector (Peled, Citation2014, p. 562), so that there is ample ‘material’ for AI to work with.

On the back of such developments, this article presents an exploratory qualitative single case study about whether and how AI is used by the Scottish Government for policy making, about the key concerns relating to its usage for this purpose, and about obstacles to, and drivers of, AI usage. In combination, these points address a significant gap relating to empirical research on how governments in liberal democracies use AI.

The remainder of the article develops as follows. After a brief methodological section, a short section presents ‘where Scotland is’ with regard to AI in terms of government regulation and the emergent AI institutional landscape. This is followed by a literature review. In the main part, interview and document data are used to build the case study. A conclusion closes the article.

2. Methodology

The Scottish Government is the focus of this case study. Case studies are a form of research ‘defined by interest in an individual case, not by the methods of inquiry used’ and where ‘the object of study is a specific, unique, bounded system’ (Stake, Citation2008, p. 443), observed at ‘a single point in time or over some delimited period of time’ (Gerring, Citation2004, p. 342). The article provides a descriptive exploration of the issue whilst also drawing out some explanations of the shape of the Scottish Government’s approach to AI usage. Qualitative research is subject to criticism because its findings are usually not deemed generalisable. However, those who define ‘a case as an instance of a class of events’ (George & Bennett, Citation2005, p. 17) object to this view. A ‘strategic’ case selection (Ruffa, Citation2020) then allows for some limited and cautious generalisation of findings.

The case study presented here rests on the available academic literature and on publications by the Scottish Government and other relevant organisations from the private, public and third sectors. Furthermore, 12 semi-structured interviews with experts were conducted. This method, in particular, is suited to generating the basis for an ‘intensive and detailed examination of the case’ (Bryman, Citation2001, p. 48). The interviewees include employees of the Scottish Government; experts from organisations that have been involved in the design of AI for use by the Scottish Government; AI and Big Data experts from academia; and representatives of AI and Big Data businesses (see Table 1).

Table 1. Interviewees.

These interviews, lasting between 45 minutes and 2 hours, were carried out in spring and summer 2023 via video-conferencing software. To facilitate the recruitment of research participants, and in line with the ethical approval gained from the University of the West of Scotland’s ethics committee, potential interviewees were assured anonymity; all descriptions of their functions or backgrounds are therefore worded so that neither they nor their organisations are identifiable. Some interviewees preferred not to be recorded, so that only notes were taken. Where quotations are given, these were taken from the notes or from the recordings. Snowball sampling helped to recruit participants from the small but highly interconnected AI community in Scotland.

The interview transcripts and notes were analysed following Braun and Clarke’s (Citation2006) six-stage thematic analysis approach. To this end, interview transcripts and notes were read and re-read so that meaningful patterns could be identified across the interview data. The following themes arose: wariness about AI; lack of digital maturity; and ethical obstacles. The main part of the article is structured according to these themes.

3. Scotland, AI, Big Data and the digital transformation

The Scottish Government is a good strategic case study choice for a number of reasons. It adopted an AI Strategy comparatively early (in March 2021), and Scotland itself has an advanced AI industry (Byrne, Citation2022; Walch, Citation2021). The United Kingdom (UK) as a whole ranks third globally in many indexes of AI investment and development (e.g. Businesswire, Citation2021; Ingham, Citation2020).

The Scottish AI Strategy expresses the country’s ambition to be a ‘leader in the development and use of trustworthy, ethical and inclusive AI’ (Digital Scotland, Citation2021, p. 19). It focuses on how AI should enable economic growth; in that, it resembles similar documents from other countries. Little is said, in this document, on how the public sector should use AI to benefit from it – the only mention of this topic states that the public sector should ‘lead by example and use AI responsibly to serve the public’ (Digital Scotland, Citation2021, p. 23). However, there is strong emphasis on the need for regulatory systems which address data protection, algorithmic transparency, AI standardisation, AI ethics and trust, and AI risk mitigation (Digital Scotland, Citation2021).

Beyond the AI Strategy, several government-driven initiatives have sought to promote AI, Big Data and, more generally, the digital transformation of the country. For example, in 2014 the Data Lab was set up as ‘Scotland’s National Innovation Centre for data science’ to facilitate collaborative innovation projects between industry and academia and to promote AI and Big Data postgraduate study programmes (Data Lab, Citation2023). Since 2016, the Scottish Government has run the CivTech programme to bring ‘people from the public, private and third sectors together to build things that make the world a better place’. So-called ‘challenges’ can be put out to the technology community by public sector bodies – including the Scottish Government itself – in search of technology-based answers, possibly using AI, to existing problems. Those ideas deemed most suitable can be supported to a stage at which they could be ‘commercialised’ (CivTech, Citation2023). Furthermore, in 2018 the Scottish Government – through the Data Science Accelerator Programme, jointly founded with NHS National Services Scotland, the National Records of Scotland, Registers of Scotland and The Data Lab – made support available to explore AI in the public sector and to skill public sector employees in data science (Digital Scotland, Citation2021, p. 16). In an effort to bring all sectors to one table, the government instigated the establishment of the AI Alliance for Scotland in 2021 in order to ‘shape our AI future’ by creating a ‘vehicle for everyone to have their say and be heard’ (Digital Scotland, Citation2021, p. 30). The Alliance is also tasked with delivering the AI Strategy and is in charge of the AI Register (Scottish AI Register, Citation2023). This register was launched in March 2023 to ‘make transparent the development and use of AI in the public sector and offer the public a simple and effective way to have a say in how AI is used to make decisions and deliver public services’ (AI Alliance, Citation2023); in late 2023, it showed only three entries. Related to the register, the Alliance also set up the Scottish AI Playbook to offer an ‘open and practical guide to how we do AI in Scotland’ in the form of a wiki, curated by the Scottish AI community (Scottish AI Playbook, Citation2023). Last in this list is the Data-Driven Innovation (DDI, Citation2023) Programme – driven and financed by local government, the Scottish Government and third parties – within the Edinburgh and South East Scotland City Region Deal, with its Bayes Centre set up at the University of Edinburgh in 2018.

With regard to specific policy fields, it is health that stands out as the one in which the Scottish Government drives forward an explicit AI agenda. For example, in 2021 the government promised an ‘AI Hub for Life Science, NHS and social care’ (Scottish Government, Citation2021, p. 81).

Further initiatives are concerned with Big Data. Within the Scottish Government’s Digital Directorate, the Data Division is tasked with creating the foundations for ‘open government’, of which public sector data maturity, easier accessibility and clearer presentation of data, and the AI Register are described as key components (Meikleham, Citation2023). A similar initiative is the government-funded Research Data Scotland, which offers access to public sector datasets for research, innovation and investment (Research Data Scotland, Citation2023). The COVID-19 pandemic of 2020/21 acted, in some ways, as a catalyst for the exploration and usage of Big Data by the Scottish Government. Through the Data and Intelligence Network, set up in May 2020, data experts came together to ‘develop real-time data and intelligence solutions to inform strategic government policy’ over COVID-19 (King, Citation2021). In 2021, the government published its Digital Strategy, which envisages Scotland as a ‘digital nation’ (Digital Strategy, Citation2021). While AI is rarely mentioned, the strategy is concerned with the digital transformation of the economy, education and the public sector, and with the management of Big Data.

In summary, the Scottish Government has driven a range of initiatives around AI, Big Data and the digital transformation to develop regulation and fora for communication, cooperation and innovation. These demonstrate an interest in the development of Scotland as a technology leader but say little about what the Scottish Government itself does or wishes to do around AI, specifically where it concerns policy making.

4. Literature review

The potential and actual use of AI by government has spawned a partly descriptive, partly exploratory and partly ‘advisory’ academic literature. The latter is mostly optimistic in its promotion of AI as a means to make government ‘smarter’, ‘more effective’ and ‘more efficient’, and is imbued with a strong sense that technology can deal with a significant number of issues that frustrate current policy making processes, while standards, regulation and codes can deal with ethical problems. This is not to say that there is no note of caution in the literature, as the following review will show.

4.1. Policy making and AI – transforming government?

While it is true that ‘automated politicians’ (Lago, Citation2021) and ‘AI democracy’ with personalised algorithms voting on our behalf (Susskind, Citation2020) are a long way off, and that AI may not ‘yet have pervaded the public sector’ (Carlizzi & Quattrone, Citation2022, p. 67), it is certainly the case that policy making with the aid of AI has at least been discussed for some time now. Here, the literature proposes the concept of e-policy making. It goes beyond the idea of ‘incorporating technology into the policy making process as a mere vehicle that increases productivity thanks to improved information-processing capabilities’ (Höchtl et al., Citation2016, p. 148); rather, it suggests a deep structural change to how policy is made.

Such change, due to the ‘arrival’ of AI in government and requiring a Big Data infrastructure, is discussed as a deep-seated ‘digital government transformation’ (Eom & Lee, Citation2022). Much of the academic debate on this transformation is characterised by optimism over the application of AI to policy making and decision making. Vogl et al. (Citation2020, p. 947), for example, speak of ‘algorithmic bureaucracy’ as a new approach combining ‘people, computational algorithms, and machine-readable electronic files and forms to deal with complexity and overcome some of the limitations of traditional bureaucracy, while preserving core public sector values’. Others speak of ‘AI-augmented bureaucracy’ that can deliver ‘smart government’ (Michael & Chen, Citation2020), while Carlizzi and Quattrone argue that a paradigm shift will see us move to ‘precision government’, ‘a governance model that determines public decisions on the basis of the evidence emerging from the collection and processing of large quantities of appropriate data, processed by AI algorithms’ (Citation2022, p. 81). Similarly, Sætra promotes a ‘technocracy of AI’ in which AI decides on the best courses of action to attain human-set policy goals to improve human societies (Sætra, Citation2020). What such optimistic endorsements of AI’s future role in policy making and decision making have in common is the expectation that AI and Big Data will lead to better public policy. Dear (Citation2019), for example, argues that AI may allow earlier interventions, provide correct challenges to human expert advice, and compel decision makers to be more explicit about their decisions and therefore enforce more rigour in decision making. Kuziemski and Misuraca (Citation2020) are also positive about AI and how it can ‘provide enormous benefits in terms of improved efficiency and effectiveness of policy making and service delivery’, thus enhancing citizens’ satisfaction and trust in the quality of governance and public service. Cost savings through the use of AI – especially in times of public sector frugality, and not only in low-income countries (Masanja & Mkumbo, Citation2020) – are another, related driver, as staff can be reduced or deployed elsewhere.

In whatever way AI is used, the emergence of Big Data is seen by many as a key driver of AI use (Boyd & Wilson, Citation2017; Thierer et al., Citation2017). Much of the literature on AI and policy making meets the emergence of Big Data with a lot of enthusiasm when it talks of ‘data-driven policy making’. Data-driven policy making should be understood similarly to the ‘romantic stories’ about ‘evidence-based policy making’ (Cairney, Citation2018, p. 200), where evidence forms the supposedly ‘objective body of knowledge’ that, in positivist vein, allows making ‘the right’ policy decisions untarnished by interests, bias, values or even human subjectivity. With Big Data, and AI to make sense of it, ‘data-driven policy making’ appears to supplant evidence-based policy making. This is because ‘true data’ supposedly bestows unquestionable legitimacy on policy solutions (Starke & Lünich, Citation2020), a legitimacy strengthened by the absence or marginal position of ‘the human’ and human values in the data selection and curation process. While some highlight that data quality is one of many challenges to AI use in government settings (e.g. Wirtz et al., Citation2019), and others caution that the assumption that ‘more data beats better data’ can be countered with the argument of ‘garbage in – garbage out’ (Höchtl et al., Citation2016, p. 152), there is so far little in the literature that clearly points out the fallacies of the notion of data-driven policy making. The view prevails that ‘artificial intelligence has proven to be superior to human decision-making in certain areas’, in particular where ‘advanced strategic reasoning and analysis of vast amounts of data’ are necessary to deal with complex problems (Sætra, Citation2020, p. 3). Amongst the exceptions are Newman and Mintrom, who point out that the increased use of algorithms in conjunction with Big Data for decision making and policy making is shifting what ‘evidence-based’ policy making actually means. After all, it was always presumed to involve humans, not just ‘in the loop’ but ‘in the lead’. In their view, an ‘update’ of what evidence-based policy and what evidence mean is needed (Newman & Mintrom, Citation2023, p. 1854).

The literature proposes different classifications of how governments may make use of AI. Michael and Chen (Citation2020) identify nine uses of AI in government settings – smart allocation of public service resources, digital assistance through chatbots, pattern identification and predictive analytics models, automation and regu-tech, smarter public utilities, smart energy, the Internet of Things and robotic sensors, autonomous driving, and sensor-based detection and prevention. Most classificatory attempts seek to differentiate use along a continuum of ‘complex’ or ‘simple’ AI application. For example, Bader et al. (Citation1988) proposed that AI can support policy makers by assisting in critical situations, by giving a second opinion, by being an expert consultant or tutor, or by providing automation. In a 2018 review of the literature, Davenport and Ronanki (Citation2018) found three types of usage – policy makers can use AI for process automation, cognitive insights or cognitive engagement. Duan et al. (Citation2019) examined AI functions on three organisational decision-making levels – strategic, tactical and operational – and found that, at present, there are severe limitations to using AI on the strategic level. A further classification distinguishes three aims of AI usage in the public sector: improving the internal efficiency of public administration, improving public administration decision making and improving citizen-government interaction (Samoili et al., Citation2020). Campbell-Verduyn et al. (Citation2017) also place the distinction between complex and simple AI application in policy making on a continuum of different levels of AI–human interaction. The simplest level they call decision making through algorithms, which comprises simple tasks with specific, instrumental application of Big Data and a large span of control for human decision makers. Decision making by algorithms is the most complex kind, as it endows the AI with a more comprehensive and independent role in a task.

4.2. AI usage by governments

The academic literature discusses some, but not many, examples of AI usage by government. It seems that there are only very few instances where AI is used in the sense of e-policy making as outlined earlier. De Sousa et al. (Citation2019) provide a thorough account of the literature discussing actual applications of AI in the public sector. Employing the notion of ‘government functions’, they found that AI was used in most areas of government, with ‘general public service’, economic affairs and environmental protection being the most-discussed areas. Mostly, AI application by government revolved around energy consumption forecasts, agricultural management, waste-related management systems and health care. Actual AI usage that goes beyond ‘mere’ automation of relatively simple processes was found to be limited. Other research has highlighted examples of decision making on immigration permits in Canada and of the profiling of benefit recipients into different support categories by Polish job centres (Kuziemski & Misuraca, Citation2020). With regard to the latter case, Kuziemski and Misuraca note that job centre staff rarely questioned the decisions taken by algorithms. This finding lends credence to the worry that officials become ‘overly passive or deferential human counterparts to AI’ (Barth & Arnold, Citation1999, p. 349). The Dutch ‘SyRI scandal’, breaking in 2021, in which thousands of people had been wrongly suspected of, and punished for, child benefit fraud, is one example of the undesirable outcomes of the ‘digital welfare state’ and AI-led ‘automated surveillance’ (Amnesty International, Citation2021; Zajko, Citation2023) on the basis of poor data and little-understood algorithms and without humans ‘in the loop’. The Australian ‘Robodebt’ algorithm, supposed to recover presumed overpayments to social welfare recipients, was another failure that demonstrates how increasing automation and decreasing human oversight can harm citizens and democratic governance (Braithwaite, Citation2020).

In the UK, local government started using AI in decision making on benefit claims and other welfare issues a while ago, in particular through autonomous agents (chatbots) and predictive analytics decision assistance tools (Vogl et al., Citation2020). Varakantham et al. (Citation2017) argue that AI has been used to make cities ‘smarter’ and has improved the quality of life of their citizens. Their prime example is Singapore; Toronto’s smart city project, by contrast, they assess as having failed, partly because citizens rejected the role that Google – i.e. ‘Big Tech’ – was to play in the city’s efforts. In Dubai, AI is used to make the city ‘smarter’, but not only to manage traffic flows or public services. It is also employed to ‘perpetuate the segregation of urban space along ethno-racial lines’ (Ziadah, Citation2021, p. 9). Perhaps this example demonstrates the difference between AI application in open and in closed societies (Creemers, Citation2018).

While there are still very few examples, at least in the academic literature, of how AI is used in complex areas of actual policy making, AI is being considered for complex resource allocation problems. For example, Valle-Cruz et al. have put forward optimistic ideas on how to use AI to change public budgeting processes with a view to ‘increase GDP, decrease inflation and reduce the GINI index’ (Citation2022, p. 1). Others suggest that an ‘AI Economist’ could optimise tax policies, improving productivity while simultaneously reducing income inequality (Zheng et al., Citation2020, Citation2022). But such AI-driven economic simulations for policy making are still rare and are not, it seems, in use by any government, despite widespread excitement in the literature about the potential of AI usage to ‘enable a new approach to economic design’ and to ‘address the modern economic divide’ (Zheng et al., Citation2022, p. 11). To conclude this section, it remains to be seen whether the COVID-19 crisis has given a boost to the use of AI in decision making and contributed to its legitimacy in the public view (Eom & Lee, Citation2022).

4.3. Challenges and obstacles

The usage of AI by governments is beset by a range of problems, ethical and otherwise. Algorithmic bias is among the most frequently discussed issues, as AI systems are bound to reflect some of the implicit ideas and values of their creators and sometimes even their explicit interests. Sætra (Citation2020) seeks to address this criticism by arguing that human decision makers are also never free of bias, values and ideas. A further issue revolves around the demand that AI become explainable and thus transparent. This is to be achieved by open AI registers, for example. Some criticise this demand. Comparing AI decision making to how bureaucrats make decisions, and claiming that few citizens understand why those decisions are made, Sætra asks whether citizens really need to understand how AI makes decisions as long as shared policy goals are attained. He also argues that the lack of understandability is the price for better decisions (Sætra, Citation2020). In Coyle’s (Citation2020) words, there is ‘often a trade-off between performance and explainability’. In addition, Janssen and Kuk (Citation2018) suggest that a full understanding of how complex algorithms operate is most likely restricted to the ‘happy few’ anyway. Conversely, some argue that AI-driven policy design can ‘democratise policy making, through easily accessible open-source code releases that enable a broad multi-disciplinary audience to inspect, debate, and build future policy making frameworks’ (Zheng et al., Citation2022, p. 11).

Accountability is a further key issue. If a computer makes a decision, who is held accountable for this decision – the person who provided the data, the person who built the AI, or the organisation that procured it from a private provider (Mikhaylov et al., Citation2018, p. 12)? This accountability gap is a problem that becomes increasingly prominent as technologies become more capable and make decisions that could not have been foreseen by the maker of the technology (Wachter et al., Citation2017).

Tied to the challenge of accountability is that of political legitimacy. Those who warn of the wholesale application of AI to politics suggest that automated decision making, undertaken by algorithms, can make little claim to political legitimacy among citizens in democratic societies (Starke & Lünich, Citation2020). As Ingrams et al. (Citation2021, p. 391) say, ‘the ultimate challenge then for AI applications in the public sector is how to deliver instrumental benefits while avoiding a range of problems from the perspective of citizens that amount to a negative impact on government trustworthiness’. To deal with such issues, some advise that AI should be limited to ‘collaborative problem-solving efforts, where a human defines the problems, machines help to find the solutions, and the human verifies the acceptability of those solutions’ (Eggers et al., Citation2017, p. 12). Decision making would then be left to humans, whereas routine, repetitive work processes can be transferred to the machine (Wirtz & Müller, Citation2019).

Such thinking relates to the question of ‘where to stop’ (Ahn & Chen, Citation2022), and is reflected in discussions about a global AI moratorium in 2023 (Future of Life, Citation2023). In order to address these and further issues, some have emphasised the importance of AI standards, of ‘public codes’ of AI ethics and of ethics ombudsmen (Wirtz & Müller, Citation2019).

Some report that governments suffer from a lack of AI expertise and talent (Chen & Lee, Citation2018; De Souza et al., Citation2020; Medaglia et al., Citation2023). This hampers AI use and requires a significant involvement of the private sector when it comes to developing AI solutions and training public sector staff. However, cooperation between the sectors is not always easy, as clashes can occur over the goals of collaboration and differing ways of working. For example, private sector companies are less constrained by bureaucratic procedures and the political accountability that comes with them. Logics may conflict between public, private and third sector organisations, so that a switch from ‘us and them’ to ‘we’ is necessary, but hard to make. In short, there are ‘vast managerial complexities’ (Mikhaylov et al., Citation2018, p. 18).

Kuziemski and Misuraca (Citation2020, p. 4) argue that the public sector is sometimes reluctant to adopt AI into its toolkit because of the ‘sunk costs of the legacy IT systems’ and because of negative experiences around developing and procuring new technologies in the past. Zuiderwijk et al. (Citation2021, p. 1) stress that governments may also hesitate because failures in the use of AI may have ‘strong negative implications for government and society’, for example a severe loss of trust as a consequence of failed attempts to improve governance, administration and policy. With regard to the success or failure of AI usage by the public sector, some have found that government employees’ attitude and willingness to engage with AI matter in creating a digital transformation (Ahn & Chen, Citation2022).

5. Main part: AI and the Scottish Government

This section uses the data from interviews and publicly available documents to generate an analysis of how in Scotland – a democratic country with a comparatively well-developed AI industry and with a government leading a number of AI and Big Data initiatives – the government is or is not employing AI in e-policy making. Three themes emerged from the interview notes and transcriptions: wariness about AI; lack of digital maturity; and ethical obstacles.

5.1. Wariness about AI

While the Scottish Government instigated the development of an AI Strategy for Scotland even before the UK Government adopted its own, while there is government support for the Scottish AI industry, for example through the DDI programme, and while the Scottish Government has facilitated organisations such as the Scottish AI Alliance, the Data Lab and CivTech to promote the development and use of AI ‘for the public good’, the Scottish Government itself still appears to make little use of AI: ‘There is little high-end use of AI’, as an AI expert described the absence of AI usage on the cognitive engagement level (Interview 3). In an FOI response, the Scottish Government said that ‘currently the AI use in the Scottish Government is mainly focused on the delivery of internal or external functions and the analysis of data, rather than policy development’ (FOI response, Citation2023). This is so despite the government’s relatively frequent sponsoring of CivTech challenges: it ‘has not taken many emerging ideas forward to a stage of piloting concrete applications’, as an employee of a public sector organisation said (Interview 1). Without exception, interviewees perceived the Scottish Government as ‘generally risk-averse with regards to AI’ (Interview 1). Nonetheless, ‘the civil servants are excited about AI, you get the sense that they are keen on the technology when talking to them. They have an open spirit, but they and the politicians don’t have the courage to start from scratch’, as an AI expert said, referring to the disruptive nature of a ‘full’ digital transformation (Interview 2).

If such a digital transformation – which would amount to a ‘culture change’, as the then Head of Data at the Data Lab described the challenge (Hills, Citation2017) – requires a fresh start, ‘ingrained and hard-to-break-up structures’ in the government (Interview 2) would make this very difficult. One interviewee, a civil servant, criticised what he saw as the still-predominant way of policy making:

The culture of policy making is characterised by an approach which starts with an ‘intuition’ about a problem and possible solutions for which then evidence is sought. However, a change towards AI-supported whole-system modelling is taking place and therefore a more data-based policy making approach. This will disrupt policy making as we know it. (Interview 5)

The reluctance to use AI in such ways is also underpinned by the view that there is little ‘room for failure’ when it comes to the introduction of new technology given that the Scottish public have, if surveys are to be believed (Behr, Citation2021), little faith in AI. This is connected to what a civil servant describes as a ‘history of IT failures’ (Interview 3). These failures are often associated with large consultancy firms: ‘They come in, highly paid, claiming an expertise that they don’t really have, and they push technology on the public sector that it doesn’t need and that they cannot get to work’, as a former consultant said (Interview 8). This results in a belief, within the public sector, that ‘technology does not work, and that it is always something outsourced to others, something we cannot control’ (Interview 8).

Lastly, there is also relatively little demand, it seems, for AI to play a key role in policy design or evaluation. One interviewee was adamant that e-policy making was not a realistic prospect and instead thought that ‘AI will be an evidence facilitator, but not produce policy results’ (Interview 3). Some also questioned the usefulness of AI in policy making: ‘Why should the Scottish Government use it? They have people to do the thinking!’ (Interview 4). In other words, most interviewees did not see AI driving a step change in policy making towards data-driven e-policy making.

5.2. The lack of digital maturity

Beyond a general wariness about what AI can or should contribute to policy making, interviewees suggested that the Scottish Government and the wider public sector are not digitally mature enough to use AI. Four key aspects of digital maturity were named by interviewees. First, the public sector has not yet made the technical step into the digital transformation and towards Big Data, both preconditions for AI usage: ‘Integrated data-driven decision making is future music for the public sector’, as an AI expert said, because data are not currently always curated in ways that turn them into ‘good data’ (Interview 2).

Second, as the Big Data sets needed for AI are not usually owned by one single organisation, they require linkage. However, technical obstacles combine with ethical hurdles: there is at least the perception among Scottish Government officials that there is public resistance to linking datasets and making them accessible to private companies tasked by government with developing AI solutions, as a consultant said (Interview 4).

Third, the absence of official guidance as to the use of AI within government is a problem for systematically introducing AI into the public sector. In an FOI response from 2022, the Scottish Government said that ‘there is no set advice or recommendation made by the Scottish Government to other public services on the evaluation of AI’. Only the relatively high-level Scottish AI Strategy and the UK’s Information Commissioner’s Office (Information Officer, Citation2023) are referred to as sources of guidance (Scottish Government, Citation2022). In the main, the Scottish Government expects civil servants to be cautious about the quality of text outputs and never to input sensitive data into these AI applications, while encouraging curiosity about new technology (FOI response, Citation2023; also Riley-Smith, Citation2023). Such absence of strong central guidance may leave room for experiments and innovation, for example within arms-length bodies. However, it may also result in ‘little communication and exchange’ between these bodies and government, as one interviewee assessed the situation (Interview 1). Furthermore, the absence of guidance could also have effects on the procurement of AI applications: ‘There is too little instruction and regulation with regards to the nature and ethics of AI systems’, as a public sector AI expert said (Interview 10).

Fourth, many interviewees said that expertise and talent are a problem within government ranks. Those in charge of making decisions over the usage of AI and Big Data often lack expertise. Referring in the main to Big Data, the then Head of Data at the Data Lab said: ‘Many industry and public sector leaders do not understand how data could be leveraged to add value to their organisation’ (Hills, Citation2017). This means that there is a risk that government buys in ‘a system that it doesn’t really understand’ (Interview 2), pushed towards it by Big Tech or consultants who do not advise ‘what benefits the public sector but who want to make money’ (Interview 8) whilst providing ‘answers only to problems for which they have the solutions on the shelf’, as a public sector AI expert said (Interview 9). Training of the public sector workforce was therefore identified by many interviewees as crucial, so that civil servants become ‘technology practitioners, rather than technology users, able to challenge the consultants and Big Tech’ (Interview 8).

5.3. Ethical obstacles

Amongst the key ethical obstacles to a wider usage of AI by the Scottish Government appears to be the perception that citizens worry about the involvement of the private sector, in particular Big Tech, in such usage. This reluctance is rooted in concerns about how public data – for example, that held by the health service – would be made available to private companies entrusted with developing AI solutions and thereby ‘monetised’ (Interview 3). However, as the Scottish Government has no in-house AI development expertise, the private sector will be needed to supply AI solutions, and the Scottish Government will ‘wait for the private sector to demonstrate how AI can help the public sector’, as one interviewee suggested (Interview 1). Linked to this is the issue of procurement. Interviewees said that Big Tech – and, indeed, the big consultancies – may not be the first port of call for the Scottish Government. The Scottish Government was said, by some interviewees, to look for AI solutions beyond Big Tech because of reputational problems following scandals such as that around Cambridge Analytica and its use of data from Facebook (e.g. Hinds et al., Citation2020). The expectation is that smaller and ‘more local’ companies are easier to deal with, are more likely to be open about the technical detail of their product and are more willing to deliver tailored solutions, whilst they may also be more acceptable to the public. However, in order to develop those tailored solutions, these often young companies require ‘clear value propositions which are not currently forthcoming from government’, as a member of an innovation agency said (Interview 6). At the same time, interviewees cast doubt on the ability of the Scottish technology sector to produce what may be needed for e-policy making (Interview 12).

One of the tasks of organisations such as the AI Alliance is to ‘soften up’ the public to the usage of AI and Big Data, while the AI Register is to address concerns around algorithmic transparency. Some interviewees, however, described their perceptions of how transparency and openness were not always realised even within the Scottish Government. For example, at certain moments during the COVID-19 pandemic, some in the Scottish Government ‘spoke about cutting all the ethics crap’ to allow for speedier responses to the crisis, as a public servant said (Interview 7). Even where projects involving data are not conducted under conditions of crisis, civil servants too often look at ethics as a ‘tickbox exercise rather than as something that can make their project better’ (Interview 11).

6. Summary and conclusion

The data – interviews, FOI responses and also the Scottish AI Register itself – suggest that AI is used by the Scottish Government, but only in very limited ways. It is not used on the level of cognitive engagement, to use Davenport and Ronanki’s typology (Citation2018). For the moment, simple AI applications, designed to generate cognitive insights and to automate processes with the human in control, will remain the mainstay of AI in Scotland. ‘Strategic’ applications involving advanced reasoning, the analysis of vast amounts of data and ‘decision making by algorithms’, without a human in the loop, are absent. In other words, ‘algorithmic bureaucracy’, ‘AI-augmented bureaucracy’ or data-driven e-policy making are still a long way off in Scotland, where government is still mostly occupied with AI as a regulator, rather than as a user. It seems that a more fundamental digital government transformation, including the development of ‘better Big Data’, would have to occur evenly across the public sector before AI could be used in ways that some of the more optimistic literature envisages. The literature review shows that governments in comparable polities are similarly reluctant users of AI. Failures such as those in the Netherlands and in Australia may have contributed to Scotland’s reluctance to translate the much-vaunted strength of its AI and Big Data sectors into public sector usage. But this is a question that requires further empirical research. Whether a transformation towards data-driven e-policy making is desirable from the perspective of democratic theory is another question. Here, the idea of ‘public value’ is informative. It calls for opening up policy processes and for rebalancing power relationships so that citizens have a greater role in shaping policy (Moore, Citation1995), and some have discussed whether a paradigmatic shift towards ‘public value management’ in the context of British governance has occurred (Conolly & van der Zwet, Citation2021). This question should also be raised with a view to how AI and Big Data are or should be employed in policy making.

The literature discussed is more optimistic about the benefits of AI, and about the likelihood of AI actually being used at scale in policy making, than the interviewees in this case study are. The current reluctance in Scotland to make more use of AI may reflect the despair that is palpable in some academic literature which criticises governments for ‘still provid[ing] services in an old-fashioned way’ (de Sousa et al., Citation2019, p. 2). However, as some of the interviewees suggested, slowness might be a virtue if it avoids the risk of failures and disasters that are not simply technological in nature. Also, given that the development of algorithms and the curation and integration of Big Data are, to a greater or lesser degree, in the hands of private companies, the reluctance towards the uptake of data-driven e-policy making could indicate that the Scottish Government is wary of a ‘hollowing out’ (e.g. Bevir & Rhodes, Citation2003) of the state, i.e. the loss of capability and capacity. Again, this requires more research.

Surprisingly little was said in the interviews about what some in the literature discuss as the de-politicising of policy making, when not only ‘the evidence’ but also ‘Big Data’ are used to invoke claims that ‘neutral’, ‘objective’ or ‘unbiased’ policy making is a (desirable) possibility. However, it is important to bear in mind that e-policy making may foster a practice which removes politics from policy making and re-allocates power from politically accountable actors to those who compile Big Data and create the algorithms that propose policies and make decisions.

Despite the absence of AI-based e-policy making in Scotland’s here and now, there is an urgent need for debate about the use of AI. Governments should welcome such debate and, indeed, promote it – both with the public and internally. Without such debate, what Sætra warns of as ‘AI creep’, where AI is slowly and almost unnoticeably applied to ever new areas (Sætra, Citation2020, p. 2), is likely to occur. This would especially be the case if AI were to advance towards ‘machine superintelligence – general artificial intelligence greatly outstripping the cognitive capacities of humans’ (Bostrom et al., Citation2020, p. 293).

AI is a fast-changing technology, and it can be expected that the situation around AI and e-policy making will develop quickly. Therefore, further research in this area would help address the question of how ‘humanity can best navigate the transition to advanced AI systems’ (Perry & Uuk, Citation2019, p. 3), including the ethical aspects of this transition and the serious concerns about whether democracy will ‘survive big data and artificial intelligence’ (Helbing et al., Citation2017). The legitimacy of democracy hinges to a good extent on the transparency and openness of government and public sector organisations more widely, and on how they make policy and decisions. Where AI and Big Data marginalise the human and where public sector organisations develop ‘learning cultures’ and ‘learning mechanisms’ (Popper & Lipshitz, Citation1998) which significantly depend on AI and Big Data, democracy is indeed at risk.

Acknowledgments

The author would like to thank the interviewees for their time and also the journal’s reviewers for their constructive comments.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Hartwig Pautz

Hartwig Pautz is a Senior Lecturer at the University of the West of Scotland. He is interested in policy advisory systems and has published widely on think tanks, but also on German politics. He is a co-lead of the UWS-Oxfam Partnership.

References