Research Article

Trust in autonomous vehicles: insights from a Swedish suburb

Article: 2318825 | Received 21 Sep 2022, Accepted 09 Feb 2024, Published online: 01 Mar 2024

ABSTRACT

This paper investigates elements of trust in autonomous vehicles (AVs). We contextualise autonomous vehicles as part of people's everyday settings to extend previous understandings of trust and explore trust in autonomous vehicles in concrete social contexts. We conducted online co-creation workshops with 22 participants, using design probes to explore trust and AVs in relation to people's everyday lives. Using a socio-technical perspective, we show how trust and acceptance depend not only on the underlying AV technology but also – if not more so – on human-to-human relationships and real-life social circumstances. We argue that when investigating issues of trust and automation, the scope of analysis needs to be broadened to include a more complex socio-technical set of (human and non-human) agents, to extend from momentary human-computer interactions to a wider timescale, and be situated in concrete spaces, social networks, and situations.

Introduction

Trust has been recognised as one of the key factors influencing the acceptance of innovative technologies and services (Lüders et al. Citation2017). Responsible innovation (RI) is a field concerned with collaboratively exploring and anticipating the impact and alignment of innovation with societal values (Stilgoe, Owen, and Macnaghten Citation2013; Taebi et al. Citation2014) and fulfilling people's and society's trust (Roy Citation2021). To fulfil organisations', users', or consumers' trust in innovative technologies like autonomous vehicles (AVs), a key concern has become understanding how trust is perceived and experienced.

There is a long tradition of studying people's trust in AVs. Trust is regarded as crucial for the success of automated driving (Henschke Citation2020; Wintersberger and Riener Citation2016). Trust research has been dominated by experimental approaches based on participants being confronted with automation scenarios in laboratory environments (e.g. Large et al. Citation2019; Tenhundfeld et al. Citation2019; Verberne, Ham, and Midden Citation2012) and being asked to self-assess trust through questionnaires (e.g. Gold et al. Citation2015; Mackay et al. Citation2020; Sato et al. Citation2019). This is consistent with a focus on understanding how system performance factors (e.g. usefulness, reliability, and predictability) and design features (such as appearance, ease of use, and transparency) influence trust development. However, little to no research has explored people's trust in AVs in real-world contexts (Raats, Fors, and Pink Citation2020). In turn, this limited understanding of trust makes it challenging to support AV adoption and impact from an RI perspective.

Since automated driving will be experienced in real-life social and cultural contexts, which are inextricably connected to the social construction of trust, it is essential to understand how these contexts influence people's trust in AVs. More specifically, as people experience technology in relation to different situations, people, objects, and their relationships (Hassenzahl Citation2010), the adoption and use of AVs do not occur in isolation. Instead, AVs are part of a socio-technical system consisting, among other elements, of different drivers and vehicles, bicyclists, pedestrians, institutions, and rules and regulations (Henschke Citation2020). Technology is mixed with social relationships in these socio-technical systems into complex arrangements (Kling Citation2007; Leonardi Citation2012). Trust can also be argued to form the foundation upon which socio-technical systems are built and sustained. From a socio-technical system perspective, trust involves the relationship between individuals and technology and the ethical and societal implications of technology. According to an RI approach, users must trust that innovations will benefit, rather than harm, society as a whole if they are to adopt them (Roy Citation2021). Such insights are essential for the development of AVs to enable RI to leverage value for citizens, industries, and society as a whole and deliver on their trust.

In this paper, inspired notably by contemporary and related research such as Borenstein, Herkert, and Miller (Citation2017) and Henschke (Citation2020), we add to the field of RI by using a socio-technical perspective to explore the design and adoption of future technologies and services, and AVs in particular (Bechtold, Fuchs, and Gudowsky Citation2017; Fisher et al. Citation2015; Hess et al. Citation2021). While many AV pilot studies are founded upon the assumption that the usefulness of automation is self-evident and that exposing the public to the technology will inevitably foster trust and adoption (e.g. Perkins, Dupuis, and Rainwater Citation2018), we propose to examine trust in future uses of AVs through the openness of a socio-technical approach in concrete, contingent, complex social settings. We argue that a pertinent way to develop a deeper understanding of trust in AVs is to investigate how people might engage with these emerging technologies and services in a concrete everyday context. This calls for approaches designed to capture the social, situational, and cultural aspects of trust from real-world environments (Pink et al. Citation2020; Raats, Fors, and Pink Citation2020). Therefore, we investigated trust in AVs based on real-life situations, spaces, relationships, and routines guided by the research question: ‘How do people establish trust in AVs in situated, everyday settings?’. We conducted ethnographically grounded co-creation workshops (Ind and Coates Citation2013). We used probing (Mattelmäki Citation2008) to envision, design, and discuss future AV narratives with local community members in a Swedish suburb. By targeting groups with already established relationships and shared mobility patterns, we could situate reflections on AVs in existing social networks and practices. This allows us to contribute to the field of RI by presenting empirically founded socio-technical elements related to trust in AVs and discussing these from an RI and AV development perspective.

In the following, we first situate our socio-technical perspective in the context of RI and previous studies into trust in AVs before outlining our Design Ethnographic (DE) methodology based on locally situated co-creation workshops. Based on this approach, we show how trust is inextricable from complex and contingent social relationships and value systems, how technological innovation can mediate interpersonal trust, and how concrete future social opportunities outweigh technical capabilities in supporting trust. We discuss how our key concepts of agents, safety mechanisms and opportunities illustrate the need for a situated approach to automation to align with RI.

Background

RI calls for stakeholders to critically question the potential impact of their solutions on society, to reflect on the role of their assumptions, attitudes and actions in developing these solutions, to engage the public and other stakeholders in the development process, and to respond to changing societal values and needs (Stilgoe, Owen, and Macnaghten Citation2013). This has encouraged scholars to investigate how RI can be activated in technology development. For instance, Raats, Fors, and Ebbesson (Citation2023) contributed to RI by demonstrating how the different RI dimensions can be facilitated through speculative co-creation in the context of future autonomous mobility development. Similarly, Craigon et al. (Citation2023) offered a practical example of activating experts' engagement and reflections on the ethical complexities of digitising food supply chains through design fiction and co-design. An essential aspect of making responsibility explicit in technology innovation is understanding the context these technologies will be part of. This speaks to Webb et al. (Citation2019), who emphasised the importance of an in-depth understanding of context to inform the development of safe interactions between social robots and humans. To develop such an understanding, they engaged with stakeholders involved in the governance, development and use of social robots to learn about their everyday practices. A similar point was made by Leonard and Tochia (Citation2022), who used ethnography to explore the context of human cleaners and AI-driven cleaning robots sharing responsibility. They aimed to develop insights to inform the design for people's trust in such robots.

A growing body of RI research emphasises trust in relation to ethical and societal dimensions of technological development and the adoption of innovations. For example, Roy (Citation2021) developed an RI framework that integrates trust as one of the core values in technology innovation. She emphasised the benefit of balancing trust and delight to support better adoption and more socially sustainable and ethical innovation. Häußermann et al. (Citation2023) investigated the impact of public participation in municipal innovation projects, e.g. Germany moving towards green hydrogen energy, on trust development. They argued that RI could support the understanding and promotion of social acceptance of new technology. Similarly, Valkenburg et al. (Citation2020) studied innovation governance in rural India. They proposed RI to include local farmers' traditional practical knowledge in the technology development process.

Trust is also one of the most prominent contemporary subjects in AV development. AVs pose new questions and challenges, like how to ensure safety in driverless vehicles, what will happen when an AV breaks down, and whether AVs discriminate between people. AVs have been argued to provide numerous societal and individual benefits (see Litman Citation2021). To enable the adoption of AVs and automated driving, public trust is a crucial concept that needs to be understood (Parasuraman and Miller Citation2004; Wintersberger and Riener Citation2016). Trust concerning AVs has mainly been researched through de-contextualised laboratory-like experiments (Large et al. Citation2019; Tenhundfeld et al. Citation2019; Verberne, Ham, and Midden Citation2012). Participants are brought into an unfamiliar environment and asked to self-assess trust before, during, and/or after experiencing technology in situations created by the researchers (e.g. Gold et al. Citation2015; Mackay et al. Citation2020; Sato et al. Citation2019); responses to technology are often determined through a set of physiological measurements (Hergeth et al. Citation2016; Waytz, Heafner, and Epley Citation2014; Wintersberger et al. Citation2017). Hence, many studies tend to quantify trust to isolate the impact of distinct characteristics of automated systems on users' perceived trust in the system in confined interaction situations (e.g. Jian, Bisantz, and Drury Citation2000; Muir and Moray Citation1996). Research focussing on experimental settings tends to take an individual approach to trust, relying on the assumption that trust develops in momentary interactions between humans and machines (Pink et al. Citation2020; Raats, Fors, and Pink Citation2020).

However, when applied in real-life settings, vehicle use and trust in AVs are inevitably situated in and informed by social relations, material environments and complex situations. As with other vehicles, use is not limited to a single user. The impact of those relationships is particularly evident in situations of sharing. Academic and industry perspectives alike increasingly argue that sharing AVs will increase the adoption of car-sharing services, which in turn can lead to the anticipated beneficial effects of automation for mobility, like a decrease in the number of vehicles on the roads, minimised total travel distances of vehicles, and reduced carbon emissions (Hanna et al. Citation2016; Lokhandwala and Cai Citation2018). Research spans from investigating the impact of users' willingness to share on shared autonomous mobility (Dolins et al. Citation2021) to more concrete factors, such as how different amounts of information about fellow passengers influence acceptance of shared AVs (König, Wirth, and Grippenkoven Citation2021). Other examples concern how aspects like vehicle speed and the direction of the passenger's face impact comfort and trust while sharing AVs with strangers (Paddeu, Parkhurst, and Shergold Citation2020).

A growing body of literature also suggests that studies of AVs' design and future use need to incorporate a socio-technical systems perspective. Borenstein, Herkert, and Miller argue that studies of AVs need to utilise a ‘system level of analysis, including the interactions and effects that these cars will have on one another and on the socio-technical systems in which they are embedded’ (Citation2017, p. 384). Fraedrich and Lenz (Citation2016) argue that AVs need to be studied from a system perspective when exploring the adoption aspects of AVs. The main reason is that existing transport systems are infused with values and norms that will affect individuals and society when deciding whether to accept and trust AVs. Saariluoma, Karvonen, and Rousi (Citation2019) argue for the need to view trust from a system perspective, not just as the interaction between humans and technology. According to the authors, a socio-technical system approach can help explore and predict trust by studying how technology can support people in their everyday lives and become trustworthy. Finally, Henschke (Citation2020) argues that trust will make or break automated driving as it will affect how these systems are adopted and what norms and laws will regulate them. A cohesive socio-technical systems perspective on trust is, hence, crucial if we are to align investigations into AVs with RI principles. To date, however, little to no examples can be found of empirical studies bringing in understandings of people's trust in AVs from real-world contexts.

Research design

To explore how people establish trust in AVs in situated, everyday settings, we designed a study in the context of shared AVs. We situated the study in a real-life space and participants' existing relationships and probed them with the concrete context of shared autonomous pods (SAPs). We conducted co-creation workshops based on participants' current practices and experiences to identify essential elements of trust for relevant future applications as projected by participants, rather than focussing on reactions to existing or probable technologies.

The empirical study was carried out within the Drive Sweden-funded project AHA II, which addresses key urban planning challenges around the research and design of future mobility services that target the transport of people and goods. The project aims to deliver a genuine and transferable human-centred approach to exploring and designing realistic future scenarios, prototypes, and mobility as a service (MaaS) concepts and to actively demonstrate how these can be integrated collaboratively into city planning and the development of future mobility solutions.

As part of a larger DE research approach (Pink et al. Citation2022), in this study, we utilised co-creation workshops and probing to explore how trust unfolds through various interpersonal relationships and circumstances that the AVs would be part of. Co-creation is a way to engage stakeholders, e.g. researchers, developers and citizens, to collaboratively generate value and meaning to develop more relevant and useful products and services (Ind and Coates Citation2013; Jansma, Dijkstra, and de Jong Citation2022). Probes are instruments used to investigate new opportunities rather than solve existing problems (Hutchinson et al. Citation2003; Mattelmäki Citation2008). The main strength of probes is that they support interpretative design work as they develop over time (Graham et al. Citation2007). Probes allowed us to elicit open-ended reflections on the role SAPs would or would not play in people's lives (Hutchinson et al. Citation2003; Schulte, Marshall, and Cox Citation2016). Furthermore, the approach allowed for in-depth exploration (Graham et al. Citation2007; Mattelmäki Citation2008) of people's everyday routines and practices, how they organise shared transportation and what concerns and expectations they had for shared AV technologies and services, in order to uncover elements that would determine and mediate trust in AVs.

We conducted seven online co-creation workshops with a total of 22 participants from a peri-urban, semi-rural area of Gothenburg, Sweden. Following a snowball principle, we invited participants from an earlier study of interviews and ethnographic drive-alongs to recruit people from their areas with whom they shared neighbourhoods or activities, or had other connections that made everyday mobilities a relevant common subject. Insights from the preceding ethnographic phase of our research were used to facilitate the workshops and extract themes around shared mobilities relevant to each group. We aimed to explore questions of trust in SAPs, taking participants' existing and potential everyday relations and routines as a point of departure. The goal was to connect participants' current real-life situations with future scenarios of using SAPs.

In the workshops, participants aged 15-75 discussed how they would organise a shared mobility service with a group of 2-5 of their friends, neighbours, class- or teammates. Through probing, we engaged the participants in fictional situations (Sena, Cecchinato, and Vines Citation2021) within existing relationships and shared experiences, and grounded the imaginary scenarios in real-life social contexts. The workshops created an online environment where we could discuss trust concerning future AVs. We provided participants with the context of sharing AVs, while issues of trust were observed as they were brought up, mainly by the participants themselves. This allowed us to explore elements of trust in relation to AVs and the construction and negotiation of future shared mobility practices.

Data collection

We used Zoom, Microsoft Teams and (in one case) Google Meet as the platforms to facilitate and record each workshop, according to participants' preferences. We presented a digital map (see Figure 1) of the participants' area in an online collaboration platform, Mural, as a way for participants to visualise and co-create the use of SAPs. Two to three researchers facilitated each workshop, introducing new prompts, moderating discussions, and taking notes.

Figure 1. Map drawing from one of the co-creation workshops.


The workshops were organised into four steps, the relative timing of which was adjusted to fit the respective group composition and the emergent themes of the discussion:

(1) An initial discussion where participants explained their relationships and discussed current mobility and sharing practices and common needs and problems.

(2) SAP prompts that introduced the idea of having access to a shareable, possibly autonomous vehicle.

(3) A map activity where participants could draw together on a map of their area.

(4) Iteration of ideas using emerging themes visualised through a collaboration platform.

After the initial discussions, we presented a scenario of SAPs as an additional mobility choice and asked the participants to think about their everyday life practices and experiences in this new circumstance. Images of AVs and cars were used to prompt reflections about specific qualities that (shared) vehicles of the future would need to have, whether an autonomous service would be possible, and what would be needed in order to trust or appreciate the automation. Participants were asked to discuss ways of using, sharing and integrating the SAP into their existing and future mobility practices. This was followed by a map activity where participants first marked places they inhabited and practised regularly, areas that were difficult from a mobility perspective, e.g. where the roads were too narrow or there was no crossing, and places they had in common. They then identified good places to be picked up and dropped off by SAPs, places that would be difficult for them to navigate, and places where this type of service could bring value, e.g. access to remote locations in the wilderness. In the last step of the workshops, ideas that had been collected and clustered with the help of digital post-its (see Figure 2) were then submitted to a final round of collective discussion. This helped participants to introduce new ideas and take a reflexive stance on the preceding discussion. It made points of agreement or variations in the participants' views of trust in AVs more explicit.

Figure 2. Ideas collected and clustered during a co-creation workshop.


Data analysis

All workshop recordings were fully transcribed and systematically analysed using a qualitative content analysis method (Schreier Citation2012) and ATLAS.ti software. Systematic coding concentrated on socio-technical elements that appeared in participants' envisioning and anticipating of AVs as part of their lives. Inspired by an abductive approach, the coding used elements of inductive and deductive reasoning in two iterations. The first iteration was based on an open coding approach, where empirical excerpts were labelled with codes such as ‘safety’, ‘booked time’ and ‘autonomy’ each time they were mentioned in a transcript. These codes were then sorted into categories that helped to structure and, later, sort the data into emerging themes (see Table 1 for an example of how the codes are organised and related to the categories and themes). A second iteration of coding was then conducted using the trust theories of Hoff and Bashir (Citation2015) and Mayer, Davis, and Schoorman (Citation1995) as a lens. In this second iteration, the researchers paid particular attention to how trust (or lack thereof) seemed to shape the participants' narratives and perceptions of AVs. As such, the theoretical lens helped uncover nuances in the data related to trust. The data analysis resulted in 10 categories that revealed three main themes of socio-technical elements that mediate trust in AVs, which structure the presentation of our results below. The first theme, trusting ‘others’, stresses the importance of interpersonal trust in the development of people's trust in and perception of AVs. The second, technologies for sharing automation, elaborates on the technologies that compensate for and support the social setting of sharing driverless vehicles. The third, future opportunities in everyday life, describes the balance between the anticipated delight in and trust in AVs when envisioned as part of concrete everyday situations. The themes are further reported in the findings section through their underlying categories.

Table 1. Example of how codes relate to categories and themes.

Limitations

Like any research approach, using probes and qualitative content analysis in exploratory studies has some limitations. Using probes may result in sample bias, as participants who are comfortable expressing themselves may take up more room in group discussions than individuals who are less verbally articulate. To mitigate the risk of sample bias, the researchers actively facilitated conversations that encouraged all participants to share their viewpoints and reflections. Probes and qualitative content analysis may also be prone to problems related to subjectivity and interpretation. As the probes relied on open-ended questions, the participants needed to interpret and react based on their own reference points and earlier experiences. To address these limitations, the researchers asked follow-up questions and were active in the discussions to mitigate the risk of misinterpretation. Furthermore, all authors were involved in the qualitative content analysis and addressed the implications of subjectivity through cross-comparison of the analysed data. Finally, in an exploratory qualitative study, findings are context-specific, which poses certain limits to their generalisation. However, rather than aiming for, e.g. statistical generalisation, rich qualitative data in such studies can be used to generalise to concepts, to theory, to specific implications, or to insights (Walsham Citation1995).

Findings

By considering AVs in situated, everyday settings, our approach revealed how trust in – and relevance of – AVs is conditioned and mediated by trust relationships with ‘others’, additional technologies that support sharing automation, as well as the anticipated future opportunities of automation in everyday life. Below, these findings are presented according to conventional ethnographic practice, based on the analysis and demonstrated through examples from the workshop materials (Merriam and Tisdell Citation2015).

Trusting ‘others’

Anticipated trust in AVs was most often reasoned about in terms of interpersonal trust. Participants placed collective judgement and experience above their perceptions of the inherent technical qualities of the machine. Several participants pointed out that if they saw other people using the vehicle and service in question, they might be encouraged to use it. This suggests that collectively accumulated experience was prioritised over abstract or objectified indicators of technology performance, but also that the presence of others normalised the experiences, shifting the centre of trust away from individual human-machine interaction.

Silvia:

‘Yeah, if a lot of people are sitting and driving there [using a SAP], I will think it is not so dangerous, and then I will just get over it [not trusting it], I think….’

Alma:

‘I think I would trust it more if I knew someone who used the car as my friend told me it was safe or something.’

Perceptions of AVs appeared inseparable from established understandings of interpersonal trust and representations of social relations. Participants almost systematically thought about the machine by thinking about humans, social relations and existing mobility systems. For example, from an AV performance perspective, participants reflected on their own abilities, such as handling certain road conditions and situations better or worse than AVs. The bus was a frequently employed point of reference for trusting other transport modes. More particularly, the bus driver, as a human agent, was compared to the potential AV technology.

While, for some, the habit of trusting an unknown human driver served as a rationalisation for trusting AVs, the comparison with the bus was employed to gauge the potential of a (collective) AV service. Many participants thought, for example, that as their kids were used to taking a bus, they should also be allowed to use a SAP alone. Some suggested that AVs would have to be measured against their capacity to be more reliable and predictable than human drivers. Several participants mentioned how taxis seemed unsafe due to unknown drivers, despite generally not reporting any negative experiences with taxis when prompted.

Helle:

‘I think it's the same thing with the bus driver, actually. He could fall asleep or be drunk or whatever when he's driving the bus with our kids. So probably it's safer with a car without a driver.’

For several participants, buses were considered safer and more reassuring due to a human driver whose presence would be familiar and who would be able to deal with contingencies and mediate interpersonal uncertainties.

Silvia:

‘Yeah, and I think it's just safer when there's someone driving the bus because if you sit in the bus with a stranger and there's no one driving the bus, it feels like more….’

Gracy:

‘Creepy.’

All:

‘Yeah.’

Other participants directly addressed human drivers and other passengers as sources of safety and as additional help to their mobilities. For instance, participants discussed the question of the implicit reliance on other passengers to intervene in cases of conflict or aggression or on the bus driver to contribute to supervising children's behaviour (such as noticing if they did not leave the bus at the school).

Johanna:

‘In a bus where you have a bus driver, and you have at least a few more people around you and stuff like that. Whereas, if you sit in a car, and maybe it's like six people, depending on how many people can fit into the car, it is a little bit different because, on a bus, you could get up from your seat and move away. But if you're sitting in a car next to somebody, there's nowhere to go.’

A group of teenage participants stressed how they tended to sit close to the bus driver or people their age as a precaution. The physical space inside vehicles was an essential element in this respect, the bus configuration allowing passengers to choose the safest place to sit, whereas, in a smaller ‘pod’, one could be stuck sitting next to an unpleasant passenger.

Silvia:

‘And you feel safer if you sit near the bus driver, it is like their job, and I think it will be safer because of that.’

Pernilla:

‘He or she [the driver] hears everything the person [a stranger] tells you or talks to you.’

Bella:

‘Yeah, he or she can throw the person off the bus.’

Alma:

‘Yeah, or someone you feel like can help you on the bus, like someone maybe your age, you can get help from.’

Gracy:

‘Yeah.’

Hence, trust in other people was often relied upon when reflecting on possible trust experiences with AVs, to the extent that it appeared challenging to communicate about trust in AVs outside of social relationships either as a condition or a comparison. When placed into the context of sharing automated vehicles, trust in other human agents became the central issue and the pivotal condition for anticipating using the SAP. Trusting others became a prevalent issue when talking about sharing with strangers. Some participants were worried about harassment, drunk passengers or someone damaging the interior of SAPs.

Kerstin:

‘Well, when I think of risk scenarios, I don't think anything about the driving or the traffic because I already see traffic as dangerous. I see being stuck in this small pod with someone who's trying to touch me or is just being weird, or a stench of alcohol ….’

This threat was especially relevant in discussions around children using SAP services independently. Many of the parents thought it acceptable for children to share SAPs with people they knew or with strangers from the local area. However, they felt suspicious about sharing with strangers from outside their area.

Transporting children also served as a key example of how trust in others conditioned the potential usefulness of AVs. Participants expressed hesitations about trusting their children to adopt appropriate behaviours concerning the vehicle, other passengers, and their travels. Several groups elaborated on the need to be sure children did exit the vehicle at their expected destination. Others worried about their children acting respectfully towards other passengers. One group contrasted the way their older daughters and their younger sons tended to use public transportation. They described how their daughters independently and efficiently navigated different transport options depending on varying safety, comfort and efficiency considerations, e.g. avoiding certain groups of passengers from other schools. In contrast, they claimed their sons were too preoccupied with their phones to be trusted to even cross busy roads safely and often missed their stops, which led parents to drive them to their activities, as they could not yet be trusted to go by themselves. Sharing AVs thus also functioned as a canvas for reaffirming categorisations of mobility competence.

In summary, interpersonal trust played an important role in evaluating and developing trust in AVs and SAP systems. The technological performance of AVs received much less attention in discussions of trusting or not trusting these systems, especially when they were placed into real-life use contexts, and even more so in the context of sharing.

Technologies for sharing automation

As the focus shifted from technical features that would make the AV technology more trustworthy to questions of social settings and interpersonal trust, participants proposed a set of technological and organisational solutions to influence the social interactions and relations between agents around the shared autonomous pods. In addition to technologies compensating for the automation, they foregrounded a series of technologies to support sharing.

Some of these features compensated explicitly for the automation itself or for the absence of a human driver. Participants suggested a communication interface allowing passengers to reach, via a screen, a (human) operator responsible for the system if necessary, and emergency stop buttons that would safely halt the vehicle should unforeseen or dangerous situations arise. Technologies supporting sharing were seen as having to mesh with, rather than replace, the human agents who ordinarily regulate shared modes of transportation. For instance, some participants were concerned with what would happen if the vehicle broke down completely.

Tove:

‘I think you have to trust it. You have to trust that it works. And if it doesn't work, you must trust that someone comes to help you or that there's an alarm system or something.’

The worries about the dangers of sharing with strangers prompted discussions of ways to mitigate these risks through a series of safety measures. For example, participants suggested geofencing the vehicles, allowing SAPs to travel only between two specific points, enabling exclusive booking for a dedicated time, creating sharing communities, identifying passengers upon entry, collecting data for backtracking, and installing security cameras and emergency buttons that would stop the vehicle whenever the passengers wanted.

Laura:

‘I think it would be more for me. It would be more of a safety issue. I would feel safer for my kids if there's a way to identify you when you go in a car, and you know that these people have logged into the system, they're using it, and we know who they are and yeah….’

Experience with other public modes of transportation served as a point of comparison that normalised both the shared use of vehicle space (including children's independent use) and the presence of surveillance technologies. Participants drew on surveillance technologies they regularly encountered to inform their ideas about regulating the shared use of the vehicle space.

Kerstin:

‘Yeah, exactly. I let them ride the bus. I let them ride the tram. We lived in Biskopsgården, and I guess the self-driving cars would have CCTV just like a tram. So, if something happens, I hope it can stop it. I think I would expect it. But I would expect the materials to be continuously destroyed unless something happens. Like you need the police, you need orders to open the files up if something happens.’

Comparison with public transport also raised the recurrent theme of how the public qualities of vehicle space influenced the feeling of safety. For buses and trams, the presence of other passengers and a human driver was pointed out as an inbuilt safety feature that participants used in adaptive ways, such as when underage girls reported sitting close to the driver or to passengers their own age to feel safe.

Participants not only suggested surveillance and control methods but also proposed predefined sharing groups to delineate the scope of sharing, thus reducing the reliance on interpersonal trust in adopting the SAP. Specifically for their children's activities, custom-defined ranges, routes, and sets of possible passengers were imagined as additional safety features. Some of these were intended not only to protect children from external threats but also to influence their behaviour remotely.

Kerstin:

‘Maybe it could be for the activities that this car is connected to them. So, this could just pick up kids going to innebandy [floorball] or to scouts. This time, this date, it's not going to pick up any adults.’

In general, when situated in concrete social contexts, safety considerations about AVs were directed not solely at the underlying technology powering AVs but at least as much, if not more, at the people the SAPs would be shared with. This indicates the importance for AV design and development of integrating technologies, besides the vehicles themselves, that cater for the social context of sharing driverless vehicles.

Future opportunities in everyday life

A turning point in many workshops, regarding perceptions of the relevance and trustworthiness of SAPs, came when participants began to envision concrete changes SAPs could bring to their everyday situations. This shifted the focus from safety aspects to the possibilities SAPs could open up. While qualities of the vehicle and system were necessary conditions for sharing, the pivotal moment for many participants was identifying the opportunities a SAP would provide as an additional mode of mobility in their everyday lives. Rather than the functional usefulness of AVs being self-evident, participants collaboratively imagined how appropriately developed systems could expand social roles and relationships: children gaining the autonomy to attend activities they otherwise could not, or to access destinations outside the local area more easily; people with restricted mobility re-connecting with local communities; parents freeing up time otherwise spent driving their children around; and local tourism being fostered by taking tourists to places that are remote or not easily discoverable.

Freja:

‘Yes, she [a participant speaking in Swedish] says that they [she and her partner] would probably work longer hours, and it would be less stressful if she did not have to finish work earlier because she had to pick up her children or drive them somewhere. Also, in bad winter weather, with a lot of snow on the road, it's not always so fun to walk. As Kerstin said, there are also many elderly people around here who would probably use that [SAP service], I guess.’

AV opportunities were anticipated not only in terms of being transported from one place to another or relieved of everyday responsibilities but also in terms of the vehicles as spaces of sociability. Participants imagined, for example, extending social events like pub visits or church gatherings by already meeting their friends in the SAP. While a group of teenagers suggested the more commonly promoted advantage of freeing up time for other simultaneous activities, they especially stressed the value of seamlessly continuing social gatherings, such as a game played by their football team, through shared transportation adapted to their activities.

Alma:

‘You can do a lot of things while the car is still driving.’

Poppy:

‘Yes.’

Bella:

‘Because you are not allowed to talk on the phone while you're driving. But if you have self-driving cars, then you can do that.’

Pernilla:

‘Yeah, you can play games and watch movies and others.’

Poppy:

‘Do work.’

All:

‘Yeah.’

Alma:

‘Do your homework on the way to school.’

Silvia:

‘You can have a party in the car.’

Pernilla:

‘You can eat while you're going.’

Alma:

‘Maybe if you wake up too late and you have to go to school, then you can eat your breakfast in the car.’

The opportunities offered were, however, seen as dependent on certain characteristics of the service. For the service to offer additional autonomy and accessibility for children and elderly travellers, regular rounds that did not require a booking, or non-digital booking solutions, were proposed. The vehicles themselves were described as having to accommodate easy access, the transport of equipment, animals, bikes, and other goods, and social interaction. In order to integrate the service with existing communities, booking recurring trips was deemed essential, and pricing was discussed as having to be accessible and appropriate to different types of use. Having reliable occasional access to utility vehicles or larger cars was considered an incentive to downsize individually owned vehicles.

It must be stressed that while participants identified numerous important opportunities offered by an additional mobility service improved according to their ideas, the automation itself appeared secondary in many of these considerations. Nevertheless, anticipating concrete opportunities appeared to compensate for some of the prohibitive effects of automation.

Overall, anticipating concrete opportunities with AVs moved the discussions beyond worries about safety towards the likelihood of AVs being both used and useful in real-life situations. While considerations about interpersonal trust and overall safety and efficiency still conditioned these opportunities, thinking about the possibilities offered by the prospective service appeared to make participants more inclined to set aside certain hesitations regarding the vehicle. The envisioned benefits are informative about people's values and anticipations of AVs, which are important to understand when designing such systems.

Discussion

This study demonstrates the importance of viewing trust in situated, everyday settings. In the case of AVs, the identified socio-technical elements of trust will impact how we understand and design for ethically and societally important innovations. Hence, a socio-technical perspective on trust may guide RI in emphasising ethical, social, and environmental considerations. As such, trust is an essential aspect of RI to enable the adoption of products and services that benefit society while minimising negative impacts and risks (Roy Citation2021; Stilgoe, Owen, and Macnaghten Citation2013).

Although theories of trust in AVs have acknowledged that previous and current experiences with automation influence trust (Hoff and Bashir Citation2015), most empirical studies have focussed on how trust evolves between humans and the underlying AV technology they interact with. These studies have neglected the broader social contexts that AVs would be part of. However, as Leonard and Tochia (Citation2022) and Webb et al. (Citation2019) emphasised, making responsibility explicit in the development of innovative technologies like AVs requires a deep understanding of context. We therefore extended these understandings of trust in AVs by acknowledging AVs as part of situated, everyday settings.

In our SAP example, people discuss AVs as part of their real-life space, situations, and relationships. Rather than individual and reactive, participants' perspectives on trust were collectively constructed. In doing so, we broadened the spectrum of analysis from the interaction-based determinants to a situated system perspective and the timescale from the immediate use context to broader collective trajectories while also situating the discussion in concrete socio-spatial contexts. Thus, contextualising AVs allowed us to explore trust from a system perspective and identify three elements central to anticipating trust in AVs – agents, safety mechanisms, and opportunities. Even if these elements are context-specific to AVs, they act as examples of outcomes and findings from our approach that may be important to consider when designing for a balance between promoting technological advancement and ensuring that an innovation aligns with individual and societal values, ethics, and sustainability goals.

Agents

As mentioned above, most empirical research on human trust in AVs has investigated how trust can be developed through the AV technology's system performance and design features (Raats, Fors, and Pink Citation2020). It has been assumed that AVs can be understood as technological agents and that trust depends on how these agents are designed. In addition, some investigations have set aside the insight from trust theory that the development of trust depends not only on short interactions but also on groups of people and their past experiences (Hoff and Bashir Citation2015). Centring on the technology has neglected the broader context in which these interactions occur and the other agents that can influence how trust in AVs develops. Looking at AVs as part of a socio-technical system, we investigated elements of trust in the context of people's lives. This enabled us to uncover the role of non-technical agents in the development of trust in AVs. For example, investigating trust in AVs in situated, everyday settings allowed us to discuss and discover participants' concerns about the presence and absence of other people in driverless vehicles and how they would want to experience sharing with strangers. Elaborating on how AV technology would perform in their local area, participants' discourse demonstrated the importance of various heterogeneous human and non-human agents and how these agents' perceived agency in the implementation process affected potential trust development. This indicates that when developing AVs, we should not only concern ourselves with AVs as technological agents that influence trust but also view these technologies as part of a broader, complex, and contingent socio-technical context in which a wider range of human and non-human agents play a role in perceived trustworthiness and usefulness.

This implies that to elevate RI in AV development, developers need to partake in activities where contextual insights are uncovered and produced, insights that offer social perspectives to complement the technological aspects of AV development. To achieve this, developers would gain from, for instance, closer inclusion of and collaboration with stakeholders experienced in conducting user research. The benefit of a better understanding of context and of people's experiences with technology was also demonstrated by Leonard and Tochia (Citation2022), who studied human cleaners sharing responsibility with cleaning robots. Through research into the users' context, they uncovered that the trust relationships between the cleaners and the robots involved complex social and relational processes, not solely the underlying technology of the robot vacuum cleaners. Furthermore, this resonates with Webb et al. (Citation2019), who emphasised the importance of in-depth knowledge of the context a technology would partake in to foster societal trust in technology development. In summary, approaches designed to elicit contextual understanding can facilitate RI by informing AV development with contextual knowledge, e.g. about what other solutions besides AV technologies are needed to foster safety and comfort and to align AVs with societal values (Taebi et al. Citation2014).

Safety mechanisms

As seen above, investigating trust from a socio-technical system perspective demonstrated the importance of human agents and their social relations in trust development. Consequently, while previous research has focussed on AV technology as the primary safety mechanism encouraging users to trust AVs, we expanded the understanding of how trust develops to include situated discussions within a broader socio-technical system. As described in our findings, workshop participants talked extensively about how the presence of other people would influence their trust in AVs, showing that future systems must address trust in other human agents. Previous work has mainly concerned the technical safety mechanisms of the underlying AV technology, like feedback modalities, vehicle driving styles, and system reliability (see Raats, Fors, and Pink Citation2020), without placing AVs into real-life contexts. Investigating trust in situated, everyday settings allowed us to identify ideas about safety measures targeted more towards other (human) agents, measures workshop participants believed would make them feel comfortable and safe, like geofencing, creating sharing communities, and enabling exclusive booking. This indicates that when developing future mobility, it is not enough to focus merely on the technical capabilities of AVs to influence people's trust through performance; rather, trust must be understood in relation to a broader social context.

This implies two things. First, to make RI explicit in AV development, the development needs to involve stakeholders trained in, e.g. approaches for eliciting what, besides the underlying AV technology, would make people feel safe and comfortable while using AVs. This could then inform which other stakeholders, like algorithm designers or urban planners, should be included in AV development to create solutions responding to these insights. Furthermore, this can surface aspects of AV development that are otherwise overlooked. This resonates with the work of Craigon et al. (Citation2023), who involved experts in digitising the food industry, specifically in applying artificial intelligence to automate data sharing in the food supply chain. Engaging a group of experts elicited several ethical concerns with automating data sharing, as well as contextual aspects of trust in such systems. Second, developers could benefit from employing practices that help them continually reflect on the assumptions and attitudes they hold and that affect AV development. For instance, the results show that developers currently assume people's trust and feeling of safety are attributable solely to the underlying AV technology.

Opportunities

Trust in itself is harmless and does not put a person at risk; the willingness to act on the feeling of trust can do so. Risk is a perception that involves an assessment of losses and gains (Mayer, Davis, and Schoorman Citation1995). In empirical studies of trust in AVs, risk is commonly attributed to technological capabilities. Most AV development and research works under the premise that AVs will reduce traffic accidents, reduce the need for people to drive their families around, and be more sustainable due to less congestion (Litman Citation2021). These high-level goals underpin previous empirical studies on trust in AVs, which do not contextualise AVs enough to study their real-life benefits. Investigating trust in AVs from the perspective of situated, everyday settings bridged that gap and directed our attention to less explored aspects, like the opportunities AVs can bring to people's everyday social lives. This opened up an exploration of AVs as part of a system in which they might offer long-term value in concrete, real-life situations.

Stilgoe, Owen, and Macnaghten (Citation2013) call for RI to respond to societal values and needs. For AV development, this means a balance between safety and the benefits AVs can bring to people's daily lives. Roy (Citation2021) described a need to balance delight and trust. This study complements that work by zooming in on trust: it argues that AV development needs to balance its focus between risk and opportunity. Focusing solely on the risk of AVs causing harm on the roads will not prepare society for adopting AVs. Nor should developers and designers invent use cases for AVs out of thin air. Instead, there needs to be an effort to understand people's daily routines, practices, and anticipations of AVs, and to devise solutions that cater for those insights. This, in turn, means tighter collaboration with stakeholders trained in producing such insights. Furthermore, for AVs to be accepted and used, these systems must be designed to be relevant to people's everyday lives and studied in such contexts, to create opportunities that leverage value for people and society.

This paper explores how people establish trust in AVs in situated, everyday settings. Using a DE approach, we show how trust unfolds through the various interpersonal relationships and circumstances that AVs would be part of. Based on these insights, we shift the analysis of trust beyond interaction-based determinants towards situated socio-technical systems, and from the immediate use context to broader individual and collective timescales. By doing so, our study exemplifies how socio-technical elements such as diverse agents, safety mechanisms, and opportunities mediate trust in AVs. While these elements are specific to the context of AVs, they are illustrative examples of the types of results and discoveries our DE approach can yield. As such, they demonstrate how to balance technological opportunities with RI principles grounded in ethical, societal, and sustainability values for individuals and communities. Future AV development should therefore account for and cater to the complexity and diversity of the socio-technical context of real-life implementations.

Conclusion

This paper explores the research question, ‘How do people establish trust in AVs in situated, everyday settings?’ The study uncovered three socio-technical elements that articulate trust in our empirical context. As such, they are examples of how investigating people's trust in AVs in situated, everyday settings can create insights valuable for RI. Using a DE approach, we showed that trust in AVs is not a one-dimensional concept but a complex interplay of elements involving different agents and contingencies. These agents may encompass the technology, regulatory bodies, manufacturers, service providers, fellow passengers, and broader societal context. By viewing AVs from a system perspective, we argue that AVs as RI, where people genuinely feel safe and comfortable, necessitate more than technological solutions. While technology plays an essential role, it is equally important to consider the social and human aspects that shape trust. Hence, our example shows the need for a comprehensive RI strategy that supports the development of future technologies and policies, infrastructure, and practices that cater to trust-related concerns expressed by individuals and communities in a dynamic and evolving mobility landscape.

The study findings imply that to truly elevate RI in AV development, the development process needs to engage in more activities that produce insights about the use context and people's anticipations of AVs. Development needs to focus not solely on mitigating the technological risks of AVs but also on how AVs could bring actual value to people's lives. Furthermore, there is a need to include stakeholders trained in eliciting people's needs, to bring user insights into the development process. Finally, AV development needs to engage with stakeholders and other actors who can complement it by developing the solutions that are essential for people to feel safe and comfortable with AVs.

When considering and designing AVs from a socio-technical approach, designers may need to consider what perspectives and competencies still need to be added to design and development teams. Hence, it might be necessary to enable the involvement of heterogeneous sets of stakeholders in the development process. A way forward for supporting RI could be to foster collaborations between citizens, industry, government, and academic stakeholders to address socio-technical perspectives in future mobility solutions.

There is an opportunity for further research to explore what other socio-technical elements mediate trust in automation, where and when these elements of trust are considered in AV development, and how people-centric knowledge can be operationalised for RI, e.g. to stage co-creation. These questions can further expand the understanding of trust from peoples' everyday life context, provide insights into the social considerations of technology innovation, and offer practical experiences of how to facilitate RI.


Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was funded by Sweden's Innovation Agency VINNOVA [grant number 2019-04786].

References

  • Bechtold, Ulrike, Daniela Fuchs, and Niklas Gudowsky. 2017. “Imagining Socio-Technical Futures – Challenges and Opportunities for Technology Assessment.” Journal of Responsible Innovation 4 (2): 85–99. https://doi.org/10.1080/23299460.2017.1364617.
  • Borenstein, Jason, Joseph R. Herkert, and Keith W. Miller. 2017. “Self-Driving Cars and Engineering Ethics: The Need for a System Level Analysis.” Science and Engineering Ethics 25 (2): 383–398. https://doi.org/10.1007/s11948-017-0006-0.
  • Craigon, Peter. J., Justin Sacks, Steven Brewer, Jeremy Frey, Anabel Gutierrez, Naomi Jacobs, Samantha Kanza, et al. 2023. “Ethics by Design: Responsible Research & Innovation for AI in the Food Sector.” Journal of Responsible Technology 13:100051. https://doi.org/10.1016/j.jrt.2022.100051.
  • Dolins, Sigma, Helena Strömberg, Yale Z. Wong, and MariAnne Karlsson. 2021. “Sharing Anxiety Is in the Driver's Seat: Analyzing User Acceptance of Dynamic Ridepooling and Its Implications for Shared Autonomous Mobility.” Sustainability 13 (14): 7828. https://doi.org/10.3390/su13147828.
  • Fisher, Erik, Michael O'Rourke, Robert Evans, Eric B. Kennedy, Michael E. Gorman, and Thomas P. Seager. 2015. “Mapping the Integrative Field: Taking Stock of Socio-Technical Collaborations.” Journal of Responsible Innovation 2 (1): 39–61. https://doi.org/10.1080/23299460.2014.1001671.
  • Fraedrich, Eva, and Barbara Lenz. 2016. “Societal and Individual Acceptance of Autonomous Driving.” In Autonomous Driving: Technical, Legal and Social Aspects, edited by Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner, 621–640. Berlin, Germany: Springer Berlin Heidelberg.
  • Gold, Christian, Moritz Körber, Christoph Hohenberger, David Lechner, and Klaus Bengler. 2015. “Trust in Automation – Before and After the Experience of Take-over Scenarios in a Highly Automated Vehicle.” Procedia Manufacturing 3:3025–3032. https://doi.org/10.1016/j.promfg.2015.07.847.
  • Graham, Connor, Mark Rouncefield, Martin Gibbs, Frank Vetere, and Keith Cheverst. 2007. “How Probes Work.” In Proceedings of the 2007 Conference of the Computer-Human Interaction Special Interest Group (CHISIG) of Australia on Computer-Human Interaction. Adelaide, Australia: Association for Computing Machinery.
  • Häußermann, Johann Jakob, Moritz J. Maier, Thea C. Kirsch, Simone Kaiser, and Martina Schraudner. 2023. “Social Acceptance of Green Hydrogen in Germany: Building Trust Through Responsible Innovation.” Energy, Sustainability and Society 13 (1): 22. https://doi.org/10.1186/s13705-023-00394-4.
  • Hanna, Josiah P., Michael Albert, Donna Chen, and Peter Stone. 2016. “Minimum Cost Matching for Autonomous Carsharing.” IFAC-PapersOnLine 49 (15): 254–259. https://doi.org/10.1016/j.ifacol.2016.07.757.
  • Hassenzahl, Marc. 2010. “Experience Design: Technology for All the Right Reasons.” In Synthesis Lectures on Human-Centered Informatics 8. San Rafael, CA: Morgan & Claypool Publishers.
  • Henschke, Adam. 2020. “Trust and Resilient Autonomous Driving Systems.” Ethics and Information Technology 22 (1): 81–92. https://doi.org/10.1007/s10676-019-09517-y.
  • Hergeth, Sebastian, Lutz Lorenz, Roman Vilimek, and Josef F. Krems. 2016. “Keep Your Scanners Peeled: Gaze Behavior As a Measure of Automation Trust During Highly Automated Driving.” Human Factors: The Journal of the Human Factors and Ergonomics Society 58 (3): 509–519. https://doi.org/10.1177/0018720815625744.
  • Hess, David J., Dasom Lee, Bianca Biebl, Martin Fränzle, Sebastian Lehnhoff, Himanshu Neema, Jürgen Niehaus, Alexander Pretschner, and Janos Sztipanovits. 2021. “A Comparative, Sociotechnical Design Perspective on Responsible Innovation: Multidisciplinary Research and Education on Digitized Energy and Automated Vehicles.” Journal of Responsible Innovation 8 (3): 421–444. https://doi.org/10.1080/23299460.2021.1975377.
  • Hoff, Kevin Anthony, and Masooda Bashir. 2015. “Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust.” Human Factors: The Journal of the Human Factors and Ergonomics Society 57 (3): 407–434. https://doi.org/10.1177/0018720814547570.
  • Hutchinson, Hilary, Wendy Mackay, Bosse Westerlund, Benjamin B. Bederson, Allison Druin, Catherine Plaisant, and Michel Beaudouin-Lafon, et al. 2003. “Technology Probes: Inspiring Design for and with Families.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 17–24. CHI 03: New Horizons. Ft. Lauderdale, Florida, USA.
  • Ind, Nicholas, and Nick Coates. 2013. “The Meanings of Co-creation.” European Business Review 25 (1): 86–95. https://doi.org/10.1108/09555341311287754.
  • Jansma, Sikke R., Anne M. Dijkstra, and Menno D. T. de Jong. 2022. “Co-Creation in Support of Responsible Research and Innovation: An Analysis of Three Stakeholder Workshops on Nanotechnology for Health.” Journal of Responsible Innovation 9 (1): 28–48. https://doi.org/10.1080/23299460.2021.1994195.
  • Jian, Jiun-Yin, Ann M. Bisantz, and Colin G. Drury. 2000. “Foundations for An Empirically Determined Scale of Trust in Automated Systems.” International Journal of Cognitive Ergonomics 4 (1): 53–71. https://doi.org/10.1207/S15327566IJCE0401_04.
  • Kling, Rob. 2007. “What Is Social Informatics and Why Does It Matter.” The Information Society 23 (4): 205–220. https://doi.org/10.1080/01972240701441556.
  • König, Alexandra, Christina Wirth, and Jan Grippenkoven. 2021. “Generation Y's Information Needs Concerning Sharing Rides in Autonomous Mobility on Demand Systems.” Sustainability 13 (14): 8095. https://doi.org/10.3390/su13148095.
  • Large, David R., Kyle Harrington, Gary Burnett, Jacob Luton, Peter Thomas, and Pete Bennett. 2019. “To Please in A Pod: Employing an Anthropomorphic Agent-Interlocutor to Enhance Trust and User Experience in an Autonomous, Self-Driving Vehicle.” In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications – AutomotiveUI '19, 49–59. Utrecht, Netherlands: ACM Press.
  • Leonard, Pauline, and Chira Tochia. 2022. “From Episteme to Techne: Crafting Responsible Innovation in Trustworthy Autonomous Systems Research Practice.” Journal of Responsible Technology 11:100035. https://doi.org/10.1016/j.jrt.2022.100035.
  • Leonardi, Paul M. 2012. “Materiality, Sociomateriality, and Socio-Technical Systems: What Do These Terms Mean? How Are They Different? Do We Need Them?” In Materiality and Organizing, edited by Paul M. Leonardi, Bonnie A. Nardi, and Jannis Kallinikos, 24–48. Oxford, UK: Oxford University Press.
  • Litman, Todd. 2021. “Autonomous Vehicle Implementation Predictions: Implications for Transport Planning.” Victoria, BC, Canada: Victoria Transport Policy Institute.
  • Lokhandwala, Mustafa, and Hua Cai. 2018. “Dynamic Ride Sharing Using Traditional Taxis and Shared Autonomous Taxis: A Case Study of NYC.” Transportation Research Part C: Emerging Technologies 97:45–60. https://doi.org/10.1016/j.trc.2018.10.007.
  • Lüders, Marika, Tor W. Andreassen, Simon Clatworthy, and Tore Hillestad. 2017. Innovating for Trust. Cheltenham, UK: Edward Elgar Publishing.
  • Mackay, Ana, Inês Fortes, Catarina Santos, Dário Machado, Patrícia Barbosa, Vera Vilas Boas, João Pedro Ferreira, et al. 2020. “The Impact of Autonomous Vehicles' Active Feedback on Trust.” In Advances in Safety Management and Human Factors, Vol. 969, edited by Pedro M. Arezes, 342–352. Cham: Springer International Publishing.
  • Mattelmäki, Tuuli. 2008. Design Probes. Helsinki: University of Art and Design.
  • Mayer, Roger C., James H. Davis, and F. David Schoorman. 1995. “An Integrative Model of Organizational Trust.” The Academy of Management Review 20 (3): 709. https://doi.org/10.2307/258792.
  • Merriam, Sharan B., and Elizabeth J. Tisdell. 2015. Qualitative Research: A Guide to Design and Implementation. 4th ed. The Jossey-Bass Higher and Adult Education Series. San Francisco, CA: John Wiley and Sons.
  • Muir, Bonnie M., and Neville Moray. 1996. “Trust in Automation. Part II. Experimental Studies of Trust and Human Intervention in a Process Control Simulation.” Ergonomics 39 (3): 429–460. https://doi.org/10.1080/00140139608964474.
  • Paddeu, Daniela, Graham Parkhurst, and Ian Shergold. 2020. “Passenger Comfort and Trust on First-Time Use of a Shared Autonomous Shuttle Vehicle.” Transportation Research Part C: Emerging Technologies 115:102604. https://doi.org/10.1016/j.trc.2020.02.026.
  • Parasuraman, Raja, and Christopher A. Miller. 2004. “Trust and Etiquette in High-Criticality Automated Systems.” Communications of the ACM 47 (4): 51–55. https://doi.org/10.1145/975817.975844.
  • Perkins, Lucy, Nicole Dupuis, and Brooks Rainwater. 2018. “Autonomous Vehicle Pilots Across America.” National League of Cities. https://www.nlc.org/resource/autonomous-vehicle-pilots-across-america/.
  • Pink, Sarah, Vaike Fors, Débora Lanzeni, Melisa Duque, Yolande Strengers, and Shanti Sumartojo. 2022. Design Ethnography: Research, Responsibility, and Futures. 1st ed. London; New York, NY: Routledge, Taylor & Francis Group.
  • Pink, Sarah, Katalin Osz, Kaspar Raats, Thomas Lindgren, and Vaike Fors. 2020. “Design Anthropology for Emerging Technologies: Trust and Sharing in Autonomous Driving Futures.” Design Studies 69. https://doi.org/10.1016/j.destud.2020.04.002.
  • Raats, Kaspar, Vaike Fors, and Esbjörn Ebbesson. 2023. “Tailoring Co-Creation for Responsible Innovation: A Design Ethnographic Approach.” In 14th Scandinavian Conference on Information Systems, 1–15. Porvoo, Finland: AIS eLibrary.
  • Raats, Kaspar, Vaike Fors, and Sarah Pink. 2020. “Trusting Autonomous Vehicles: An Interdisciplinary Approach.” Transportation Research Interdisciplinary Perspectives 7:100201. https://doi.org/10.1016/j.trip.2020.100201.
  • Roy, Alka. 2021. “The Responsible Innovation Framework: A Framework for Integrating Trust and Delight into Technology Innovation.” In Proceedings of the 54th Hawaii International Conference on System Sciences, 1040–1048. Virtual, USA: University of Hawaii.
  • Saariluoma, Pertti, Hannu Karvonen, and Rebekah Rousi. 2019. “Techno-Trust and Rational Trust in Technology – A Conceptual Investigation.” In Human Work Interaction Design. Designing Engaging Automation, IFIP Advances in Information and Communication Technology, Vol. 544, 283–293. Cham: Springer International Publishing.
  • Sato, Tetsuya, Yusuke Yamani, Molly Liechty, and Eric T. Chancey. 2019. “Automation Trust Increases Under High-Workload Multitasking Scenarios Involving Risk.” Cognition, Technology & Work 22:399–407. https://doi.org/10.1007/s10111-019-00580-5.
  • Schreier, Margrit. 2012. Qualitative Content Analysis in Practice. Los Angeles: SAGE.
  • Schulte, Britta F., Paul Marshall, and Anna L. Cox. 2016. “Homes For Life: A Design Fiction Probe.” In Proceedings of the 9th Nordic Conference on Human-Computer Interaction – NordiCHI '16, 1–10. Gothenburg, Sweden: ACM Press.
  • Çerçi, Sena, Marta E. Cecchinato, and John Vines. 2021. “How Design Researchers Interpret Probes: Understanding the Critical Intentions of a Designerly Approach to Research.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–15. Yokohama, Japan: ACM.
  • Stilgoe, Jack, Richard Owen, and Phil Macnaghten. 2013. “Developing a Framework for Responsible Innovation.” Research Policy 42 (9): 1568–1580. https://doi.org/10.1016/j.respol.2013.05.008.
  • Taebi, B., A. Correljé, E. Cuppen, M. Dignum, and U. Pesch. 2014. “Responsible Innovation as an Endorsement of Public Values: The Need for Interdisciplinary Research.” Journal of Responsible Innovation 1 (1): 118–124. https://doi.org/10.1080/23299460.2014.882072.
  • Tenhundfeld, Nathan L., Ewart J. de Visser, Kerstin S. Haring, Anthony J. Ries, Victor S. Finomore, and Chad C. Tossell. 2019. “Calibrating Trust in Automation Through Familiarity With the Autoparking Feature of a Tesla Model X.” Journal of Cognitive Engineering and Decision Making 13 (4): 279–294. https://doi.org/10.1177/1555343419869083.
  • Valkenburg, Govert, Annapurna Mamidipudi, Poonam Pandey, and Wiebe E. Bijker. 2020. “Responsible Innovation as Empowering Ways of Knowing.” Journal of Responsible Innovation 7 (1): 6–25. https://doi.org/10.1080/23299460.2019.1647087.
  • Verberne, Frank M. F., Jaap Ham, and Cees J. H. Midden. 2012. “Trust in Smart Systems: Sharing Driving Goals and Giving Information to Increase Trustworthiness and Acceptability of Smart Systems in Cars.” Human Factors: The Journal of the Human Factors and Ergonomics Society 54 (5): 799–810. https://doi.org/10.1177/0018720812443825.
  • Walsham, G. 1995. “Interpretive Case Studies in IS Research: Nature and Method.” European Journal of Information Systems 4 (2): 74–81. https://doi.org/10.1057/ejis.1995.9.
  • Waytz, Adam, Joy Heafner, and Nicholas Epley. 2014. “The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle.” Journal of Experimental Social Psychology 52:113–117. https://doi.org/10.1016/j.jesp.2014.01.005.
  • Webb, Helena, Marina Jirotka, Alan F. T. Winfield, and Katie Winkle. 2019. “Human-Robot Relationships and the Development of Responsible Social Robots.” In Proceedings of the Halfway to the Future Symposium 2019, 1–7. Nottingham, UK: ACM.
  • Wintersberger, Philipp, and Andreas Riener. 2016. “Trust in Technology as a Safety Aspect in Highly Automated Driving.” I-Com 15 (3): 297–310. https://doi.org/10.1515/icom-2016-0034.
  • Wintersberger, Philipp, Tamara von Sawitzky, Anna-Katharina Frison, and Andreas Riener. 2017. “Traffic Augmentation as a Means to Increase Trust in Automated Driving Systems.” In Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter – CHItaly '17, 1–7. Cagliari, Italy: ACM Press.