
Automating public administration: citizens’ attitudes towards automated decision-making across Estonia, Sweden, and Germany

Pages 314-332 | Received 17 Aug 2022, Accepted 23 Mar 2023, Published online: 28 Apr 2023

ABSTRACT

Although algorithms are increasingly used to automate tasks in the public administration of welfare states, citizens’ knowledge of, experiences with, and attitudes towards automated decision-making (ADM) in public administration remain largely unexplored. This article strives to reveal the perspectives of citizens who are increasingly exposed to ADM systems, relying on a comparative analysis of a representative survey conducted in Estonia, Germany, and Sweden. The findings show that there are important differences between the three countries in awareness, trust, and perceived suitability of ADM in public administration, differences that map onto historical differences in welfare provision or so-called welfare regimes.

Algorithms are increasingly enabling the automation of tasks in the public administration of the welfare state, including applications for social benefits and the protection of vulnerable groups (Schiff et al., Citation2022). As the welfare sector is confronted with challenges such as shrinking resources, increasing needs of the population, and recruitment issues, public agencies – like the employment and social services – are implementing digital solutions to meet these challenges. At the same time, problems with automating administrative tasks have been identified and addressed, including discrimination and bias as well as issues with trust, transparency, and accountability. While a critical discussion of automated decision-making is taking shape, the perspective of citizens who are affected by the implementation of algorithmic systems is still largely underexplored. This article reports findings from a comprehensive survey that examines the increasing implementation of automated decision-making (ADM) in the welfare sector in Estonia, Germany, and Sweden and its implications for civic agency from a comparative perspective. The article is based on material collected within a larger cross-national comparative project that explores how shifts in the administration of welfare provision that emerge with algorithmic automation – more specifically, automated decision-making – are experienced by the citizens who are exposed to them.

Broadly defined, automated decision-making refers to the process of implementing and delegating tasks and decisions to – both rule- and knowledge-based – algorithmic systems (Sumpter, Citation2018). It encompasses automated decision-making and decision support systems that are used to automatically execute decisions to perform an action (Spielkamp, Citation2018). Accordingly, ADM is a term that includes the automation of simple tasks with the help of static decision trees to automate workflows, for example with the help of robotic process automation, as well as more complex tasks that, for example, include predictive data analytics. Some of the most prominent examples in the three countries explored here concern decisions on social benefit applications: in Sweden, the municipality of Trelleborg has automated such decisions since 2017 (Dencik & Kaun, Citation2020; Kaun & Velkova, Citation2018); in Germany, the Bundesamt für Migration und Flüchtlinge (BAMF) has been working with text and speech recognition software to identify the origin of refugees (Kayser-Bril, Citation2020); and in Estonia, ADM has been used for assigning refugees to places (Masso & Kasapoglu, Citation2020) as well as for assuring public security through police risk scoring (Kasapoglu & Masso, Citation2021).

In terms of assessing the societal implications of ADM, there are both enthusiastic voices heralding digital automation as the fourth industrial revolution (see e.g., Schwab, Citation2016; for critical engagement see Schiølin, Citation2020) and fundamental critiques warning that extended automation will lead to dystopian versions of the digital welfare state, encompassing issues of bias, discrimination, and injustice (Larasati et al., Citation2022). While the identified potential of these technologies still often trumps the actual extent of implementation (Reutter, Citation2022), scholars have considered a new form of algorithmic governance to be emerging that is part of a larger reform of the public sector (Eubanks, Citation2017; Kennedy, Citation2016; Mosco, Citation2017; O'Neill, Citation2016).

In any case, the increased importance and public relevance of algorithmic decision-making in society has been acknowledged both at the policy level and in research (Gillespie, Citation2016; Lomborg, Kaun, & Scott Hansen, Citation2023). Along those lines, the General Data Protection Regulation (GDPR) contains a paragraph on automated decision-making and promotes ‘the right to explanation’ (European Parliament and Council of the European Union, Citation2016) – that is, the right to be provided with an explanation of the output of an algorithm (Goodman & Flaxman, Citation2016) – as well as the right to verification and rectification of an algorithmic decision by a human, the so-called human-in-the-loop provision (Dreyer & Schulz, Citation2019). In research, there is a growing interest in automated decision-making from a social science perspective more generally and a focus on the related changes and challenges in welfare provision more specifically (Andreassen, Kaun, & Nikunen, Citation2021; Dencik, Citation2022; Dencik & Kaun, Citation2020).

To highlight the consequences of the implementation of digital technologies in public administration, several advocacy initiatives as well as research projects are currently exploring and mapping the use of ADM in this domain (European Council, Citation2018; Spielkamp, Citation2018). Furthermore, a growing number of studies engage with the practices and experiences of civil servants working at the interface between ADM technologies and citizens (e.g., Ranerup & Henriksen, Citation2019). While these are important contributions to the field of critical algorithm and data studies, the citizen perspective is still rarely addressed.

In this article, we foreground the perspective of citizens who are increasingly exposed to ADM systems. To do so, we explore findings from a representative survey conducted in Estonia, Germany, and Sweden asking participants about their knowledge of, experiences with, and attitudes towards ADM in public administration. The article engages with the question of how experiences with and attitudes towards ADM differ depending on the welfare regime respondents live in, rather than focusing on individual factors such as age, gender, or socio-economic background.

Background

The large-scale introduction of automated decision-making is based on a number of important developments. First, the digitalization and datafication of large parts of society have allowed for the delegation of tasks to algorithms, artificial intelligence, and autonomous systems in the first place (Hintz et al., Citation2019; Kitchin, Citation2014; Mayer-Schönberger & Cukier, Citation2013; van Dijck, Citation2014). Second, the welfare sector and public administration are seen as being deeply in crisis: limited resources and shifting demographics pose challenges to how much welfare can be provided and how public administration is organized (Castles, Citation2004). This is combined with, third, changes in civic engagement and trust in societal institutions (Buckingham, Citation2000; Conway, Citation2000; Milner, Citation2002; Putnam, Citation2001; Skocpol, Citation2003) that have been extensively addressed as a crisis of democracy since the early 2000s (Rosanvallon, Citation2006, Citation2008; Skocpol, Citation2003) and that are linked to ideological shifts in how welfare provision is motivated. In this nexus of datafication on the one hand and welfare and democratic crisis on the other, automated decision-making has emerged as a suitable and, first and foremost, efficient solution to larger problems.

In connection with the implementation of automated decision-making, it has been argued that civil servants are freed from repetitive and monotonous tasks with the help of algorithms (Engin & Treleaven, Citation2019). However, more critical research has addressed problems related to explainability of decisions, ethics, and accountability as well as shifts in public values that emerge with extended automation in the welfare sector (Ananny, Citation2016; Eubanks, Citation2017; Sandvig, Hamilton, Karahalios, & Langbort, Citation2016; Schiff et al., Citation2022; Reutter, Citation2022; Zarsky, Citation2015). Furthermore, research has engaged with the question of human agency in relation to complex socio-technical systems (Kitchin, Citation2016; Velkova & Kaun, Citation2019).

Beyond practical changes in the daily work of case workers and their discretion, the implications of automated decision-making are much broader, as we are experiencing a shift toward a new regime of welfare provision that is intricately linked with digital infrastructures and results in new forms of control and support. This new welfare regime has been described in terms of new public analytics (Yeung, Citation2018) and algorithmic public services (Dencik & Kaun, Citation2020) and is characterized by forms of privatization as well as new state-citizen relations (Veale & Brass, Citation2019). Since the 1990s, research on e-governance initiatives has increasingly focused on questions of digital access and open data (e.g., Andersen & Dawes, Citation1991; Brown, Citation2007; Edmiston, Citation2003; Ho, Citation2002; Layne & Lee, Citation2001). However, little research has been conducted on the structural linkage between automation and the transformation of the welfare state and, by extension, the implications for Western democracies. Critical algorithm and automation studies are currently beginning to consider the welfare sector but are often focused on the US context (Eubanks, Citation2017; Reisman, Schultz, Crawford, & Whittaker, Citation2018) and rarely provide a comparative perspective on technology use and implementation in different countries that goes beyond a case-study approach and offers a better understanding of the social norms underlying these developments across social contexts. For example, in the US context, O’Neill (Citation2016) demonstrates the role that mathematical models play in different sectors of society, including school administration, teachers’ evaluation, and university admissions. Further, Virginia Eubanks (Citation2017) traces the role of different forms of automation in entrenching long-established inequalities, including housing matching services in Los Angeles and predictive analytics in child protection.
Recently, the Data Justice Lab at Cardiff University mapped predictive analytics and citizen scoring in public services in the UK (Dencik et al., Citation2018). Broomfield and Reutter (Citation2021, Citation2022) explore the role of datafication for the public sector in Norway, focusing mainly on the implementation site within public agencies.

In this previous engagement with automated decision-making in the welfare sector, the perspective of citizens is still broadly absent. If, however, important tasks of sorting, classifying, and risk management are delegated to algorithmic infrastructures, the relationship between the state and the citizen will potentially undergo important changes (DuGay, Citation2005; Lawton, Citation2005; Van der Wal et al., Citation2008; Hintz et al., Citation2019). In certain welfare-state regimes, particularly in the social-democratic model, civil servants have been considered an important pillar of democracy situated between the state and the citizen. If civil servants are increasingly replaced or constrained by algorithms, this mediating role will change, and so will the state-citizen relationship (Tammpuu et al., Citation2022). Klinger and Svensson (Citation2018) have, for example, discussed the emergence of a new media logic in connection with algorithmic culture. This makes it necessary to explicitly include a citizens’ perspective, thereby linking the automation of welfare decision-making to larger questions of democracy and civic participation. In this article, we examine the attitudes of citizens affected by automated decision-making in the welfare sector in the Baltic Sea region. The assumption is that automated decision-making in public institutions has implications for accountability and trust in societal institutions, which form one of the backbones of democratic societies. Hence, it is crucial to consider not only the perspective of employees within the welfare sector but also that of the citizens who are exposed to automated decision-making.

Currently, there are only a few comparative studies (Kasapoglu & Masso, Citation2021) that analyze the use of algorithms in decision-making across countries, and there is a lack of cross-country comparative studies quantitatively examining citizens’ readiness for, and positions on, the implementation of automated systems. This lack of comparative studies reinforces the idea of digital universalism – namely, that digital technologies are neutral and implemented and used in the same ways across different cultures. However, this is a misleading conception (Kaun & Uldam, Citation2018; Masso et al., Citation2022): while this has been acknowledged by historians of technology (Nye, Citation2007), design studies (Oxman, Citation2017), and science and technology studies (Bowker & Leigh Star, Citation1999), digital universalism remains dominant in e-governance and public administration research. Instead, it is important to acknowledge and radically contextualize automated decision-making across different countries and welfare state models to highlight the cultural, economic, social, and political specificities of automated decision-making. This article follows scholars of science and technology studies (STS) in arguing that practices and attitudes related to technological infrastructures – for example, delegating decision-making to algorithmic systems – vary depending on the political, economic, and cultural context, including the different models for organizing welfare provision that are relevant for how automated decision-making is developed and implemented (MacKenzie & Wajcman, Citation1999). The article therefore provides a country comparison of automated decision-making from the citizens’ point of view.

Welfare ADM in Estonia, Germany, and Sweden

Comparative welfare research has a long tradition. While earlier studies have followed Esping-Andersen’s distinction between welfare regimes, developed on the basis of differences in the basic principles on which welfare state policies are implemented – including ideas of solidarity and equality as well as the relationship between welfare provision and the market (Pfau-Effinger, Citation2005) – later studies have increasingly included citizens’ attitudes towards welfare policies (e.g., Svallfors, Citation1997; Van Hootegem et al., Citation2021) and considered cultural and ideological factors in the emergence of welfare regimes (e.g., Pfau-Effinger, Citation2005). More recent research furthermore considers the connection between attitudes towards technological change and attitudes towards the welfare state. Lim (Citation2020), for example, shows that citizens who embrace technological change often also support welfare measures such as unemployment insurance. Hence, there is an argument to be made for the social embedding of technological innovation, as the welfare state might foster positive attitudes towards new technologies. Closer to the topic of ADM, Helberger et al. (Citation2020) have explored attitudes towards fairness and ADM among citizens in the Netherlands. They found that most participants perceived ADM as fairer than human decision-making, although there are important differences between age groups and educational levels that influence the perception of fairness in AI. Furthermore, respondents also considered the role of programmers when judging fairness.

In the context of media studies, Lindell et al. (Citation2022) have explored citizen attitudes towards what Mjös et al. (Citation2014) call the media welfare state, namely a specific set of policies that support public service broadcasting and associated values. They found a discernible welfare state of mind that is closely related to citizens’ political attitudes: left-leaning individuals are more likely to support the media welfare state than right-wing individuals. They also argue, however, that individual factors have only limited explanatory power and that structural variables, including media and political systems as well as culture, need to be taken into consideration.

Mainly focusing on comparing different welfare regimes, in this article we follow Esping-Andersen’s distinction of welfare regimes while also considering the degree of digitalization and technological change in public administration. The countries in focus are Estonia, Germany, and Sweden. Two factors have guided the choice of countries: the way welfare provision is organized and the degree of automated decision-making in the welfare sector (Esping-Andersen, Citation1990; Ferrera, Citation1996; Oorschot et al., Citation2008).

The first factor for the choice of countries studied is based on central welfare-state regimes (Esping-Andersen, Citation1990; Ferrera, Citation1996; Oorschot et al., Citation2008) as well as the representation of post-socialist and ‘old’ democracies. Following and extending Esping-Andersen’s (Citation1990) typology of welfare state regimes, the chosen countries represent (a) a social-democratic welfare state model, namely Sweden; (b) a corporatist-statist welfare state model, namely Germany; and (c) a post-socialist welfare state model, namely Estonia. Following this distinction, social-democratic welfare states share a focus on universal support, emphasize equality, and have a strong component of decommodification of public services. The corporatist-statist welfare states, in contrast, are still characterized by a strong stratification between different social classes, as the regime derived from the old guild system. While there have been adjustments to the post-industrial class society, this welfare system is still mainly concerned with upholding status differences. Hence, the redistributive aim that is strong in the social-democratic tradition is not very visible. This welfare regime is strongly shaped by religious values that have impacted the distribution of care work within society. Accordingly, family services are often underdeveloped. The post-socialist welfare model has been less explored but often combines liberal characteristics, such as a strong focus on competition and the free market, with conservative aspects of limited distributive policies. Recent studies, however, indicate increasing disappointment with the liberal, free-market approach among Estonians (Masso et al., Citation2020). Social policies in post-socialist countries, especially Latvia and Estonia, have been described as a mix of basic social security and corporatist features (Aidukaite, Citation2006).
Such ideal types of welfare regimes have, of course, been criticized extensively, and several additions and nuances have been added to the three regimes originally identified by Esping-Andersen. Svallfors (Citation1997), for example, distinguishes instead between social-democratic, conservative, liberal, and radical welfare regimes. Despite this earlier critique, we use the welfare regimes initially proposed by Esping-Andersen, adding the post-socialist type, to explore similarities and differences in attitudes towards ADM in public administration.

The second factor for the choice of countries is the degree of automation: the three countries included in the study differ in terms of the degree of automated decision-making. While Estonia and Sweden are expected to have a high level of automation in the welfare sector and of digitalization in general (Charles, Citation2009; Lember, Kattel, & Tonurist, Citation2019), Germany is less advanced in the process but has expressed high ambitions in terms of digitalization for the years to come. At the same time, Germany has a different relationship to questions of privacy and data protection that might impact the degree of automated decision-making in welfare institutions, with a longer tradition of public discussion about the social norms and values involved in using population data in public administration (Schmidt & Weichert, Citation2012). Based on the comparative approach, the project both updates knowledge on the Baltic Sea region from a comparative perspective and further develops earlier comparative welfare research. The comparative perspective allows us to develop new knowledge not only on automated decision-making in the public administration of the respective countries but also on data-based automation more generally, as comparative studies highlight contextual specificities while taking stock of shared developments.

Mirroring these starting points discussed above, we assume that there are different attitudes towards welfare ADM emerging in the three countries – differences that can be related to historical peculiarities when it comes to attitudes towards welfare and social trust. We for example expect different levels of trust in ADM used in the public sector that are related to different levels of social and institutional trust, but also to the level of automation in the public sector. We further assume that the ways in which welfare distribution is organized and implemented as well as the degree of algorithmic automation influences attitudes towards automation of public services.

Data and method

Sample structure

The analysis is based on surveys conducted in Estonia, Germany, and Sweden between 18 October and 9 November 2021 by the market research company Kantar Sifo, which has branches in the studied countries. The questionnaire was sent, via web panels, to a representative sample of the population aged 18–75 in our case countries: 10,118 persons in Estonia, 12,506 persons in Germany, and 6,083 persons in Sweden. The response rate was 14.8 per cent in Estonia (N = 1500), 16 per cent in Germany (N = 2001), and 16.4 per cent in Sweden (N = 1000). The Estonian sample was extended to 1500 by adding 500 further participants to the original 1000 respondents; this expansion was undertaken in order to secure representativeness across the two main Estonian population groups, Estonian and Russian speakers. The reported response percentages are in line with overall response rates for online panels (Pedersen & Nielsen, Citation2016), both in comparison with face-to-face surveys (Szolnoki & Hoffmann, Citation2013) and with studies based on paid crowdsourcing respondents (Eklund et al., Citation2019). The average time to complete the web survey was gauged at 15 minutes.

Inspired by the theoretical perspectives discussed in the previous section, the questionnaire included items on awareness of, experience with, and attitudes towards ADM, as well as trust scales for different societal institutions and socio-demographic variables. Thus, similarly to Lindell et al. (Citation2022), we operationalize an ADM welfare state of mind by measuring, first, awareness; second, risk perception; and third, enthusiasm for future investments in ADM. We further link an ADM welfare state of mind to general trust in societal institutions, as this has proven to be an important factor (Lindell et al., Citation2022) (Table 1).

Table 1. Sample structure of the survey (%).

Survey items

For awareness, we provided three statements regarding the rights of citizens providing data. Using a three-point scale (1 = Not aware, 2 = Somewhat aware, 3 = Very aware), we asked respondents to gauge the degree to which they were aware of their rights as mentioned in the statements. The statements were:

  • As the person providing data, I have the right to get decisions made by humans

  • As the person providing data, I have to be informed whether a decision was automated or taken by a human

  • As the person providing data, I have the right to attain meaningful information on how a decision was reached

These scales also featured a ‘Cannot say’ alternative, which was filtered out as a missing value for further analysis. Cronbach’s Alpha for the remaining three items was calculated at .835, which is considered satisfactory (Hair, Citation2010). The resulting sum was divided by the number of included measures to create a new Awareness index variable on the 1–3 scale described above.
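As a rough sketch, the index construction described above – Cronbach’s alpha over the three awareness items, then a mean-based index per respondent – can be reproduced in a few lines of standard-library Python. This is an illustrative re-implementation, not the authors’ actual analysis script; the variable names and toy responses are invented.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item columns (one list of scores per item)."""
    k = len(items)
    n = len(items[0])
    item_vars = [pvariance(col) for col in items]              # variance of each item
    totals = [sum(col[i] for col in items) for i in range(n)]  # each respondent's sum score
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

def index_score(items):
    """Per-respondent index: sum the item scores and divide by the number of items."""
    k = len(items)
    return [sum(col[i] for col in items) / k for i in range(len(items[0]))]

# Toy data: three awareness items on the 1-3 scale for five respondents
# ("Cannot say" answers would already have been dropped as missing).
a1 = [1, 2, 3, 2, 1]
a2 = [1, 2, 3, 3, 1]
a3 = [1, 3, 3, 2, 2]
alpha = cronbach_alpha([a1, a2, a3])
awareness_index = index_score([a1, a2, a3])
```

The same two steps (alpha check, then sum divided by number of items) apply to the trust-in-societal-actors index and the suitability index described below, only with different item sets and a 1–5 scale.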

We provide two different measures of trust that we developed based on previous studies (Kalmus et al., Citation2020; Vihalemm et al., Citation2017). The first gauges respondent trust in a series of societal actors, while the second details the level of trust expressed by respondents in relation to automated decision-making. The survey included an item phrased as follows: ‘Generally speaking, how much do you trust automated decision making – that is, decisions made by computer systems in public administration?’ Respondents were asked to grade their level of trust on a five-point Likert scale, where an answer of ‘1’ indicated a lack of trust and an answer of ‘5’ suggested complete trust. Figure 3 details the means and standard deviations for this item per country.

To further contextualize the trust in ADM, a series of items asked the respondents to estimate their level of trust in a number of societal institutions. A five-point Likert scale was used, where an answer of ‘1’ indicated a lack of trust and an answer of ‘5’ suggested complete trust. The questions and scales on trust used in this study were developed and tested in previous studies (Kalmus et al., Citation2020; Vihalemm et al., Citation2017). We constructed an index based on the items testing for trust in the following actors, based on the statistical correlations between the initial variables and relying on the results of prior studies (Kalmus et al., Citation2020; Vihalemm et al., Citation2017): the state, the government, the parliament, the police, the court system, health care services, municipal governments, corporations, daily press, public service broadcasters and commercial broadcasters. Cronbach’s Alpha for these variables was measured at .911, a very satisfactory result following the guidelines discussed previously. The final trust in societal actors index was devised by summing the aforementioned items and dividing the result by the number of items included.

Related to trust is the issue of perceived risk in relation to automated decision making. We included the following item to gauge respondent risk perception in this regard:

Various institutions often use the data that you produce, for example, through your daily activities online and your daily movements, i.e., through the use of social media and your smartphone. What do you think to what extent have the ethics and risks associated with the use of your data by institutions increased in the last five years?

The respondents were asked to grade their level of risk perception on a five-point Likert scale, where an answer of ‘1’ indicated a perceived major decrease of risk and an answer of ‘5’ suggested a perceived major increase of risk.

Finally, our survey included six items designed to gauge the degree to which respondents felt that automated decision making was a suitable administrative technique to employ in different settings. Using a five-point Likert scale (‘1’ = Totally disagree, ‘5’ = Totally agree), we asked respondents to grade their level of agreement with the use of automated decision making in state administration, school authorities, municipal administration, health care, predictive police work, and citizen scoring or social credit systems (i.e., the ranking of citizens based on a set of variables; Pan, Citation2020). A Cronbach’s Alpha result of .856 for these six items suggested that they are suitable to be treated as a scale (Hair, Citation2010). The items were then summed and divided by six, creating what we refer to in the following as the suitability index.

Given our cross-country comparative approach, averages of the indices are compared across Estonia, Germany, and Sweden using a series of ANOVAs. This process is described further in the subsequent section.
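The comparison just described can be illustrated with a minimal one-way ANOVA computed from per-country score lists. This is a hedged, stdlib-only sketch with invented scores, not the authors’ analysis code; in practice the post-hoc Tukey HSD tests reported below would come from a statistics package (e.g., statsmodels) rather than being hand-rolled.

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic and degrees of freedom for a one-way ANOVA over k groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares: each country mean against the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: each respondent against their country mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df1, df2 = k - 1, n - k
    return (ss_between / df1) / (ss_within / df2), df1, df2

# Invented index scores standing in for the three country samples
estonia = [1.6, 1.7, 1.5]
germany = [2.1, 2.2, 2.0]
sweden = [1.5, 1.6, 1.7]
f_stat, df1, df2 = one_way_anova_f([estonia, germany, sweden])
```

A large F relative to its degrees of freedom indicates that the between-country variance dominates the within-country variance, which is the pattern reported for the awareness index below.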

Results

Focusing on issues of awareness, trust, risk, and enthusiasm in relation to ADM, this section details the results of these analyses. We focus here on differences between the three countries; the study reveals interesting discrepancies between them regarding attitudes towards ADM.

Awareness

When it comes to awareness of ADM systems used in public administration, the highest degree of awareness was reported by the German respondents, with a mean of 2.12 (SD = 0.65). Estonian and Swedish respondents express rather similar levels of awareness as measured by the index: results from Estonia revealed a mean of 1.6 (SD = 0.6), while the average for the Swedish respondents was 1.58 (SD = 0.58). An ANOVA (F(2, 3334) = 298.4, p < .001) suggested statistically significant differences between the reported means. Figure 1 presents the means and standard deviations for the Awareness index variable per country. Post-hoc testing using Tukey HSD revealed that while the mean for Germany was significantly different from the means for Estonia and Sweden (sig. < .001 for both countries in comparison with Germany), no significant difference could be discerned between the means for Estonia and Sweden (sig. = .876). Thus, Germans report a higher degree of awareness of ADM than both Estonians and Swedes, while no difference could be discerned in this regard between the two latter countries.

Figure 1. Means and standard deviations of awareness index per country. N of respondents: Estonia = 1137 (75.8% of all Estonian respondents), Germany = 1501 (75% of all German respondents), Sweden = 699 (69.9% of all Swedish respondents). Missing values removed before analysis.


Trust in societal actors

An ANOVA (F(2, 4498) = 12.9, p < .001) suggested significant mean differences between the three studied countries in terms of trust in societal actors and institutions. Post-hoc testing using Tukey HSD revealed that while the means for Sweden (M = 3.144 [rounded to 3.14 in Figure 2], SD = 0.66) and Estonia (M = 3.136 [rounded to 3.14 in Figure 2], SD = 0.78) were not significantly different from each other (p = .962), both of these measurements emerged as significantly higher than the mean reported for Germany (M = 3.02, SD = 0.82; p < .001 for both comparisons). As such, we can conclude that while we could not clearly distinguish between respondents from Estonia and Sweden in this regard, both countries report higher levels of trust in societal actors than found among the German respondents.

Figure 2. Means and standard deviations of trust in societal actors index per country. N of respondents: Estonia = 1500 (100% of all Estonian respondents), Germany = 2001 (100% of all German respondents), Sweden = 1000 (100% of all Swedish respondents).


Figure 2 details the means and standard deviations for this index per country.

Trust in automated decision-making

In comparison to trust in societal actors, the levels of trust in automated decision-making, shown in Figure 3, emerged as quite similar across the three studied countries: Germany came out on top (M = 2.64, SD = 1.17), with Estonia second (M = 2.57, SD = 1.11) and the Swedish respondents expressing the least trust (M = 2.49, SD = 0.99) in this regard. These similarities were reflected in the non-significant result of the ANOVA performed (F(2, 3952) = 5.01, p = .006), leading us to conclude that when it comes to trust in automated decision-making, no significant differences could be discerned between the three countries. While these latter tests did not reach significance, we can nevertheless point to an interesting discrepancy between respondents' trust in societal actors (as shown in Figure 2 above) and their trust in automated decision-making (as shown in Figure 3). A series of paired-samples t-tests comparing the two variables within each of the three countries yielded significance levels below .001 across all performed tests, indicating significant differences between trust in societal actors and trust in ADM for every country. Indeed, while the German respondents expressed the least trust in societal actors, they also expressed the highest – albeit not significantly higher – level of trust in automated decision-making. It should also be noted, when comparing the findings presented in Figures 2 and 3, that the mean levels of trust in societal actors tend to be slightly higher – just over '3' on a five-point Likert scale – than the mean levels of trust in automated decision-making, which was gauged at under '3' on a similar scale.
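The within-respondent comparison described above can be sketched with a paired-samples t-test, which pairs the two trust measurements for each respondent rather than comparing independent groups. Again, this is not the authors' code; the synthetic data merely reproduce the reported pattern of trust in societal actors sitting roughly half a scale point above trust in ADM.

```python
# Illustrative sketch: paired-samples t-test comparing each respondent's
# trust in societal actors with their trust in ADM (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 1000
trust_actors = rng.normal(3.1, 0.8, n)                   # just above the midpoint
trust_adm = trust_actors - 0.5 + rng.normal(0, 0.6, n)   # systematically lower

# ttest_rel tests whether the mean within-respondent difference is zero.
t_stat, p_value = stats.ttest_rel(trust_actors, trust_adm)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

Because the test operates on the difference score within each respondent, even a modest average gap of half a scale point yields a clearly significant result at these sample sizes.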

Figure 3. Means and standard deviations for trust in automated decision making per country. N of respondents: Estonia = 1322 (88.1% of all Estonian respondents), Germany = 1732 (86.6% of all German respondents), Sweden = 901 (90.1% of all Swedish respondents). Missing values removed before analysis.

Risk perception

Concerning the risk perception connected with ADM in public administration, an ANOVA (F(2, 3949) = 97.6, p < .001) suggested significant mean differences between the three studied countries. Post-hoc testing using Tukey HSD revealed that all reported means of perceived risk differed significantly from each other: Swedish respondents (M = 4.33, SD = 0.73), followed by Estonian respondents (M = 4.19, SD = 0.78), were the most likely to perceive risk in this regard, while respondents from Germany (M = 3.88, SD = 0.91) appear to view automated decision-making as less of a risk. Figure 4 details the means and standard deviations of perceived risk of automated decision-making per country.

Figure 4. Means and standard deviations for perceived risk of automated decision making per country. N of respondents: Estonia = 1381 (92.1% of all Estonian respondents), Germany = 1703 (85.1% of all German respondents), Sweden = 849 (84.9% of all Swedish respondents). Missing values removed before analysis.

Suitability of ADM

Concerning the suitability of ADM in public administration, an ANOVA (F(2, 3333) = 68.56, p < .001) suggested significant mean differences between the three studied countries with regard to our enthusiasm index. Post-hoc testing using Tukey HSD showed that while the means for Germany (M = 2.22, SD = 0.73) and Sweden (M = 2.20, SD = 0.61) were not significantly different from each other (p = .995), the mean for Estonia (M = 2.51, SD = 0.67) differed significantly from both (p < .001 in both cases). As such, our Estonian respondents come out on top with regard to their enthusiasm towards ADM, leaving their German and Swedish counterparts in what could be considered a split runner-up position. Figure 5 presents the means and standard deviations of this index per country.

Figure 5. Means and standard deviations of suitability index per country. N of respondents: Estonia = 1143 (76.2% of all Estonian respondents), Germany = 1531 (76.5% of all German respondents), Sweden = 662 (66.2% of all Swedish respondents). Missing values removed before analysis.

Discussion and conclusion

The ANOVAs presented above show that our German respondents express the lowest trust in societal institutions. At the same time, they express the highest trust in automated decision-making while also perceiving the lowest level of risk in relation to ADM. In contrast, the Swedish respondents express the highest trust in societal institutions, the lowest trust in ADM, and the highest perceived risk of ADM applications. At the same time, our Swedish respondents are less enthusiastic about the future potential of ADM – a trait shared with their German counterparts. Estonians, then, take a middle position between the Germans and the Swedes in terms of trust in societal institutions, trust in ADM, and perceived risk. They are at the same time the most enthusiastic about ADM compared to the German and Swedish respondents (Table 2).

Table 2. Summary of findings of the ANOVA.

Earlier studies in comparative welfare research have emphasized the need to combine structural variables such as welfare regimes with individual factors to explain differences in attitudes; hence we focused primarily on country comparisons rather than differences between individuals (Pfau-Effinger, Citation2005; Svallfors, Citation1997, Citation2004). In our material, welfare regimes showed the stronger explanatory power. Our study confirms the ambivalent role of technological innovation in public administration. Swedes are less likely to consider ADM a suitable solution even though they have strong trust in social institutions. One explanation for this attitude might be that they value the role of civil servants within public administration as a source of trust. In Sweden, which represents a social-democratic welfare regime with historically strong and expansive welfare institutions, including strong professional identities among civil servants, automation might be perceived as a reduction of the public sector rather than as its improvement or further extension. Germans are similarly less enthusiastic about ADM but show a lower degree of social trust. Here we might assume that the lower enthusiasm is linked not to the role of and connection with civil servants but to a stronger criticism of digital technologies more generally, as others have found (e.g., Pentzold & Fölsche, Citation2020; Simon & Rieder, Citation2021), owing to Germany's comparatively long tradition of public debate about the social norms of implementing digital solutions in the public sector (Schmidt & Weichert, Citation2012). This might lead to less enthusiasm among the German respondents. Furthermore, the corporatist-statist welfare regime is less focused on distributive logics than the social-democratic regime. ADM, we assume, is hence perceived by citizens as being used primarily for controlling and surveilling welfare distribution rather than for providing care itself.
This potential focus on surveillance and control through ADM systems is reflected in the more critical attitudes towards ADM among the German respondents. Estonians are clearly the most enthusiastic about ADM but show lower degrees of social trust. The lower trust in societal institutions in Estonia has also been revealed in prior studies (Masso et al., Citation2020; Männiste & Masso, Citation2018) as a result of rapid social transformations, such as the dissolution of the way of life of a Western liberal welfare society and the loss of confidence in the capacity of democratic institutions in the Western world. The relatively supportive general attitudes towards ADM in Estonia could be explained by the fact that the use of algorithmic systems in public administration (Männiste & Masso, Citation2018, Citation2020), much like the development of other digital technologies (Tammpuu & Masso, Citation2018), plays a significant role in identity building and nation branding (Masso et al., Citation2020). The results of this study therefore suggest that in Estonia, ADM technology might become a source of renewed trust in public administration, linked to values of fairness and justice perceived as embedded in and fostered by the technology.

Beyond the results presented here, the survey implemented for our study also included a series of sociodemographic variables: age, educational level, gender, and size of the city where the respondent lived. In earlier iterations of our analysis, we employed a series of multiple regression analyses to test for the influence of these variables on the index scores reported by the respondents across our three studied countries. The resulting models – which are not included in the study at hand – performed poorly, with adjusted R2 values varying between .012 and .019. Relatedly, of the included sociodemographic variables only age and education emerged as significant predictors – albeit weak ones, with standardized betas close to 0 in all relevant cases. This poor performance led us to refocus the analysis as presented here. Moreover, it has led us to refocus our analytical ideas with regard to possible future analyses. Specifically, given these experiences, it could be suitable for further research into citizen views on ADM to uncover other types of independent variables that might perform better in statistical models. Perhaps qualitative research methods such as interviews can be of service in identifying possible areas of concern among individuals – areas that can then be operationalized as variables in a survey much like the one presented here. Furthermore, future research should develop comparisons between welfare regimes by adding cases beyond the three regimes studied here. This would allow for the comparison of variances within and between welfare regimes. Given the differences we have uncovered on the macro level – between the countries under scrutiny – one suitable way forward could be to construct further variables on the same analytical level, teasing out further country-based differences. We hope that the results presented here can be helpful in designing such research endeavors.

Our findings indicate that despite attempts at common guidelines and regulations on the EU level (e.g., the GDPR) that formulate shared norms and grounds for data collection and ADM, a translation process from the supra-national to the local level is taking shape. This translation process is shaped by local public understandings of data and ADM in the welfare context. It could hence be suggested that regulations need to be translated into local contexts as well.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Östersjöstiftelsen [grant number S2-20-0007].

Notes on contributors

Anne Kaun

Anne Kaun is Professor at the Department of Media and Communication Studies, Södertörn University, Sweden, and a Wallenberg Academy Fellow studying the democratic implications of automated decisions making, artificial intelligence and digitalization more generally in the welfare sector.

Anders Olof Larsson

Anders Olof Larsson is Professor at the Department of Communication, Kristiania University College, Norway. He is studying the use of online interactivity and social media by societal institutions and their audiences, journalism studies, political communication and methodology, especially quantitative and computational methods.

Anu Masso

Anu Masso is Associate Professor at the Ragnar Nurkse Department of Innovation and Governance, Tallinn University of Technology, Estonia. Her research interests include big data, social datafication, spatial mobility, social diversity, algorithmic governance, data justice, and research methods.

References

  • Aidukaite, J. (2006). The formation of social insurance institutions of the Baltic States in the post-socialist era. Journal of European Social Policy, 16(3), 259–270. https://doi.org/10.1177/0958928706065597
  • Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology and Human Values, 41(1), 93–117.
  • Andersen, D. F., & Dawes, S. (1991). Government information management: A primer and casebook. Prentice Hall.
  • Andreassen, R., Kaun, A., & Nikunen, K. (2021). Fostering the data welfare state: A Nordic perspective on datafication. Nordicom Review, 42(2), 207–223.
  • Bowker, G., & Leigh Star, S. (1999). Sorting things out: Classification and its consequences. MIT Press.
  • Broomfield, H., & Reutter, L. (2022). In search of the citizen in the datafication of public administration. Big Data & Society, 9(1), 20539517221089302. https://doi.org/10.1177/20539517221089302
  • Broomfield, H., & Reutter, L. M. (2021). Towards a data-driven public administration: An empirical analysis of nascent phase implementation. Scandinavian Journal of Public Administration, 25(2), 73–97. https://doi.org/10.58235/sjpa.v25i2.7117
  • Brown, M. M. (2007). Understanding e-government benefits: An examination of leading-edge local governments. The American Review of Public Administration, 37(2).
  • Buckingham, D. (2000). The making of citizens. Young people, news, and politics. Routledge.
  • Castles, F. G. (2004). The future of the welfare state: Crisis myths and crisis realities. OUP.
  • Charles, A. (2009). The electronic state: Estonia's new media revolution. Journal of Contemporary European Research, 5(1), 97–113. https://doi.org/10.30950/jcer.v5i1.122
  • Conway, M. (2000). Political participation in the United States. Congressional Quarterly Inc.
  • Dencik, L. (2022). The datafied welfare state: A perspective from the UK. In A. Hepp, J. Jarke, & L. Kramp (Eds.), New perspectives in critical data studies: The ambivalences of data power (pp. 145–165). Springer International Publishing.
  • Dencik, L., Hintz, A., Redden, J., & Warne, H. (2018). Data scores as governance: Investigating uses of citizen scoring in public services. Retrieved from Cardiff.
  • Dencik, L., & Kaun, A. (2020). Datafication and the welfare state. Global Perspectives, 1(1), 12912. https://doi.org/10.1525/gp.2020.12912
  • Dreyer, S., & Schulz, W. (2019). The general data protection regulation and automated decision-making: Will it deliver? Retrieved from Gütersloh.
  • DuGay, P. (2005). The values of bureaucracy. Oxford University Press.
  • Edmiston, K. D. (2003). State and local e-government: Prospects and challenges. The American Review of Public Administration, 33(1), 20–45.
  • Eklund, L., Stamm, I., & Liebermann, W. K. (2019). The crowd in crowdsourcing: Crowdsourcing as a pragmatic research method. First Monday, 24, 10. https://doi.org/10.5210/fm.v24i10.9206
  • Engin, Z., & Treleaven, P. (2019). Algorithmic government: Automating public services and supporting civil servants in using data science technologies. The Computer Journal, 62(3).
  • Esping-Andersen, G. (1990). The three worlds of welfare capitalism. Polity Press.
  • Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. St Martin's Press.
  • European Council. (2018). Declaration of Cooperation on Artificial Intelligence. Brussels. Retrieved from https://ec.europa.eu/jrc/communities/en/node/1286/document/eu-declaration-cooperation-artificial-intelligence (Accessed 14 October 2019).
  • European Parliament and Council of the European Union. (2016). Regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (Data Protection Directive), L119, 4 May 2016, implementation date 25 May 2018.
  • Ferrera, M. (1996). The 'Southern' model of welfare in social Europe. Journal of European Social Policy, 6(1), 17–37.
  • Gillespie, T. (2016). Algorithm. In B. Peters (Ed.), Digital keywords: A vocabulary of information, society and culture (pp. 18–30). Princeton University Press.
  • Goodman, B., & Flaxman, S. (2016). EU regulations on algorithmic decision-making and a "right to explanation". Paper presented at the ICML Workshop on Human Interpretability in Machine Learning, New York.
  • Hair, J. F. (2010). Multivariate data analysis: A global perspective. Pearson Education.
  • Helberger, N., Araujo, T., & de Vreese, C. H. (2020). Who is the fairest of them all? Public attitudes and expectations regarding automated decision-making. Computer Law & Security Review, 39, 105456. https://doi.org/10.1016/j.clsr.2020.105456
  • Hintz, A., Dencik, L., & Wahl-Jorgensen, K. (2019). Digital citizenship in a datafied society. Polity Press.
  • Ho, A. T. (2002). Reinventing local governments and the e-government initiative. Public Administration Review, 62(4), 434–444.
  • Kalmus, V., Lauristin, M., Opermann, S., & Vihalemm, T. (2020). Researching Estonian transformation: Morphogenetic reflections. Tartu University Press.
  • Kasapoglu, T., & Masso, A. (2021). Attaining security through algorithms: Perspectives of refugees and data experts. In J. B. Wiest (Ed.), Theorizing criminality and policing in the digital media Age (pp. 47–65). Emerald Publishing. (Studies in Media and Communications).
  • Kaun, A., & Uldam, J. (2018). Digital activism: After the hype. New Media and Society, 20, 2099–2106.
  • Kaun, A., & Velkova, J. (2018). Sweden. Retrieved from Berlin.
  • Kayser-Bril, N. (2020). Austria's employment agency rolls out discriminatory algorithm, sees no problem. AlgorithmWatch. Retrieved from: https://algorithmwatch.org/en/story/austrias-employment-agency-ams-rolls-out-discriminatoryalgorithm/ (accessed 11 October 2019).
  • Kennedy, H. (2016). Post, mine, repeat: Social media data mining becomes ordinary. Palgrave Macmillan.
  • Kitchin, R. (2014). Big data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 2053951714528481. https://doi.org/10.1177/2053951714528481
  • Kitchin, R. (2016). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. http://doi.org/10.1080/1369118X.2016.1154087
  • Klinger, U., & Svensson, J. (2018). The end of media logics? On algorithms and agency. New Media & Society, 20(12), 4653–4670. https://doi.org/10.1177/1461444818779750
  • Larasati, Z. W., Yuda, T. K., & Syafa'at, A. R. (2022). Digital welfare state and problem arising: An exploration and future research agenda. International Journal of Sociology and Social Policy, ahead-of-print. https://doi.org/10.1108/IJSSP-05-2022-0122
  • Lawton, A. (2005). Public service ethics in a changing world. Futures, 37(2-3), 231–243.
  • Layne, K., & Lee, J. (2001). Developing fully functional e-government: A four stage model. Government Information Quarterly, 18(2), 122–136.
  • Lember, V., Kattel, R., & Tonurist, P. (2019). Technological capacity in the public sector: The case of Estonia. International Review of Administrative Sciences, 84(2), 214–230.
  • Lim, S. (2020). Embedding technological transformation: The welfare state and citizen attitudes toward technology. European Political Science Review, 12(1), 67–89. https://doi.org/10.1017/S1755773919000341
  • Lindell, J., Jakobsson, P., & Stiernstedt, F. (2022). The media welfare state: A citizen perspective. European Journal of Communication, 37(3), 330–349.
  • Lomborg, S., Kaun, A., & Scott Hansen, S. (2023). Automated decision-making: Toward a people-centred approach. Sociology Compass. https://doi.org/10.1111/soc4.13097.
  • Mackenzie, D., & Wajcman, J. (1999). Introductory essay: The social shaping of technology. In D. MacKenzie & J. Wajcman (Eds.), The social shaping of technology (2nd ed.). Open University Press.
  • Männiste, M., & Masso, A. (2018). The role of institutional trust in Estonians’ privacy concerns. Studies of Transition States and Societies, 10(2), 22–39.
  • Männiste, M., & Masso, A. (2020). ‘Three drops of blood for the devil’: Data pioneers as intermediaries of algorithmic governance ideals. Mediální Studia | Media Studies, 14(1), 55–74.
  • Masso, A., Chukwu, M., & Calzati, S. (2022). (Non)negotiable spaces of algorithmic governance: Perceptions on the ubenwa health app as a ‘relocated’ solution. New Media & Society, 24(4), 845–865. https://doi.org/10.1177/14614448221079027
  • Masso, A., & Kasapoglu, T. (2020). Understanding power positions in a new digital landscape: Perceptions of Syrian refugees and data experts on relocation algorithm. Information, Communication & Society, 23(8), 1203–1219. https://doi.org/10.1080/1369118X.2020.1739731
  • Masso, A., Lauristin, M., Opermann, S., & Kalmus, V. (2020). Applying the morphogenetic perspective for the analysis of Estonian social transformations. In V. Kalmus, M. Lauristin, S. Opermann, & T. Vihalemm (Eds.), Researching Estonian transformation: Morphogenetic reflections. Tartu University Press.
  • Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. Houghton Mifflin Harcourt.
  • Milner, H. (2002). Civic literacy: How informed citizens make democracy work. University Press of New Engl.
  • Mjös, O. J., Syvertsen, T., Moe, H., & Enli, G. S. (2014). The media welfare state: Nordic media in the digital era. University of Michigan Press.
  • Mosco, V. (2017). Becoming digital: Toward a post-internet society. Emerald Publishing.
  • Nye, D. E. (2007). Technology matters. MIT Press.
  • O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing.
  • Oorschot, W. v., Opielka, M., & Pfau-Effinger, B. (2008). Culture and welfare state: Values and social policy in comparative perspective. Edward Elgar.
  • Oxman, R. (2017). Thinking difference: Theories and models of parametric design thinking. Design Studies, 52(2017), 4–39. https://doi.org/10.1016/j.destud.2017.06.001
  • Pan, J. (2020). Welfare for autocrats: How social assistance in China cares for its rulers: How social assistance in China cares for its rulers (1st ed.). Oxford University Press.
  • Pedersen, M. J., & Nielsen, C. V. (2016). Improving survey response rates in online panels: Effects of Low-cost incentives and cost-free text appeal interventions. Social Science Computer Review, 34(2), 229–243. https://doi.org/10.1177/0894439314563916
  • Pentzold, C., & Fölsche, L. (2020). Data-driven campaigns in public sensemaking: Discursive positions, contextualization, and maneuvers in American, British, and German debates around computational politics. Communications, 45(s1), 535–559. https://doi.org/10.1515/commun-2019-0125
  • Pfau-Effinger, B. (2005). Culture and welfare state policies: Reflections on a complex interrelation. Journal of Social Policy, 34(1), 3–20. https://doi.org/10.1017/S0047279404008232
  • Putnam, R. D. (2001). Bowling alone: The collapse and revival of American community. Simon and Schuster.
  • Ranerup, A., & Henriksen, H. Z. (2019). Value positions viewed through the lens of automated decision-making: The case of social services. Government Information Quarterly, 36(4), 101377. https://doi.org/10.1016/j.giq.2019.05.004
  • Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. Retrieved from New York: https://ainowinstitute.org/aiareport2018.pdf (accessed 14 October 2019).
  • Reutter, L. (2022). Constraining context: Situating datafication in public administration. New Media & Society, 24(4), 903–921. https://doi.org/10.1177/14614448221079029
  • Rosanvallon, P. (2006). Democracy past and future. Columbia University Press.
  • Rosanvallon, P. (2008). Counter-democracy: Politics in the age of distrust. Cambridge University Press.
  • Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2016). When the algorithm itself is a racist: Diagnosing ethical harm in the basic components of software. International Journal of Communication, 10, 4972–4990.
  • Schiff, D., Jackson Schiff, K., & Pierson, P. (2022). Assessing public value failure in government adoption of artificial intelligence. Public Administration, 100(3), 653–673.
  • Schiølin, K. (2020). Revolutionary dreams: Future essentialism and the sociotechnical imaginary of the fourth industrial revolution in Denmark. Social Studies of Science, 50(4), 542–566.
  • Schmidt, J.-H., & Weichert, T. (2012). Datenschutz: Grundlagen, Entwicklungen und Kontroversen: Vol. Band 1190. bpb, Bundeszentrale für Politische Bildung.
  • Schwab, K. (2016). The fourth industrial revolution. Random House.
  • Simon, J., & Rieder, G. (2021). Trusting the Corona-Warn-App? Contemplations on trust and trustworthiness at the intersection of technology, politics and public debate. European Journal of Communication, 36(4), 334–348. https://doi.org/10.1177/02673231211028377
  • Skocpol, T. (2003). Diminished democracy. From membership to management in American civic life. University of Oklahoma Press.
  • Spielkamp, M. (2018). Automating society: Taking stock of automated decision-making in the EU. Retrieved from Berlin.
  • Sumpter, D. (2018). Outnumbered: From Facebook and Google to fake news and filter-bubbles - the algorithms that control our lives. Bloomsbury.
  • Svallfors, S. (1997). Worlds of welfare and attitudes to redistribution: A comparison of eight western nations. European Sociological Review, 13(3), 283–304. https://doi.org/10.1093/oxfordjournals.esr.a018219
  • Svallfors, S. (2004). Class, attitudes and the welfare state: Sweden in comparative perspective. Social Policy & Administration, 38(2), 119–138. https://doi.org/10.1111/j.1467-9515.2004.00381.x
  • Szolnoki, G., & Hoffmann, D. (2013). Online, face-to-face and telephone surveys—comparing different sampling methods in wine consumer research. Wine Economics and Policy, 2(2), 57–66. https://doi.org/10.1016/j.wep.2013.10.001
  • Tammpuu, P., & Masso, A. (2018). ‘Welcome to the virtual state’: Estonian e-residency and the digitalised state as a commodity. European Journal of Cultural Studies, 21(5), 543–560. https://doi.org/10.1177/1367549417751148
  • Tammpuu, P., Masso, A., Ibrahimi, M., & Abaku, T. (2022). Estonian E-residency and conceptions of platform-based state-individual relationship. Trames: Journal of the Humanities and Social Sciences, 26(1), 3. https://doi.org/10.3176/tr.2022.1.01
  • Van der Wal, Z., De Graaf, G., & Lasthuizen, K. (2008). What's valued most? Similarities and differences between the organisational values of the public and the private sector. Public Administration, 86(2), 465–482. https://doi.org/10.1111/j.1467-9299.2008.00719.x
  • Van Dijck, J. (2014). Datafication, dataism, and dataveillance: Big data between scientific paradigm and ideology. Surveillance & Society, 12, 197–208.
  • Van Hootegem, A., Abts, K., & Meuleman, B. (2021). The welfare state criticism of the losers of modernization: How social experiences of resentment shape populist welfare critique. Acta Sociologica, 64(2), 125–143. https://doi.org/10.1177/0001699321994191
  • Veale, M., & Brass, I. (2019). Administration by algorithm? Public management meets public sector machine learning. In K. Yeung & M. Lodge (Eds.), Algorithmic regulation. Oxford University Press.
  • Velkova, J., & Kaun, A. (2019). Algorithmic resistance: Media practices and the politics of repair. Information, Communication & Society, OnlineFirst. https://doi.org/10.1080/1369118X.2019.1657162.
  • Vihalemm, P., Lauristin, M., Kalmus, V., & Vihalemm, T. (eds.). (2017). Eesti ühiskond kiirenevas ajas. Uuringu ‘Mina. Maailm. Meedia’ 2002-2014 tulemused. [Estonian society and the acceleration of time. Results of the survey ‘Me, the Media, and the World’ 2002-2014]. Tartu University Press.
  • Yeung, K. (2018). Algorithmic government: Towards a New Public Analytics? Paper presented at the Ethical and Social Challenges posed by Artificial Intelligence, Cumberland Lodge, Windsor.
  • Zarsky, T. (2015). The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology and Human Values, 41(1), 118–132. https://doi.org/10.1177/0162243915605575