Research Article

Shaping a multidisciplinary understanding of team trust in human-AI teams: a theoretical framework

Anna-Sophie Ulfert, Eleni Georganta, Carolina Centeio Jorge, Siddharth Mehrotra & Myrthe Tielman
Pages 158-171 | Received 31 Aug 2021, Accepted 03 Apr 2023, Published online: 20 Apr 2023
 

ABSTRACT

Intelligent systems are increasingly entering the workplace, gradually shifting from technologies that support work processes to artificially intelligent (AI) agents that act as team members. A deep understanding of effective human-AI collaboration within the team context is therefore required. Both the psychology and computer science literatures emphasize the importance of trust when humans interact either with human team members or with AI agents. However, empirical work and theoretical models that combine these research fields and define team trust in human-AI teams are scarce, and they often fail to integrate central aspects such as the multilevel nature of team trust and the role of AI agents as team members. Building on an integration of current literature on trust in human-AI teaming across different research fields, we propose a multidisciplinary framework of team trust in human-AI teams. The framework highlights the different trust relationships that exist within human-AI teams and acknowledges the multilevel nature of team trust. We discuss the framework’s potential for human-AI teaming research and for the design and implementation of trustworthy AI team members.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Notes

1. In computer science, an environment refers to everything that surrounds the AI agent, but is not part of the agent itself (Russell & Norvig, 2016). Thus, for the AI agent, the environment includes other team members.

2. Trustworthiness is defined by the level of ability, integrity, and benevolence of a team member (Mayer et al., 1995). This definition has previously been applied to the trustworthiness of both humans and agents (Langer et al., 2022).

3. Robotic systems that do not include AI components (e.g., learning algorithms) were excluded based on exclusion criterion (a).

4. Our search was restricted to peer-reviewed publications between 2000 and 2021.

5. With BDI following Bratman’s theory of rational action in humans (Bratman, 1987), AI agents can reason about abstract concepts (e.g., a belief) and also showcase their trustworthiness to other team members.

6. The detailed dynamics and formation of these beliefs have been discussed in the computer science literature (see, e.g., Bosse et al., 2007; Herzig et al., 2010) but are beyond the scope of this paper.

Additional information

Notes on contributors

Anna-Sophie Ulfert

Anna-Sophie Ulfert: Conceptualization, Methodology, Formalization, Visualization, Writing - Original Draft, Writing - Review & Editing

Eleni Georganta

Eleni Georganta: Conceptualization, Writing - Original Draft, Writing - Review & Editing

Carolina Centeio Jorge

Carolina Centeio Jorge: Conceptualization, Methodology, Formalization, Writing - Original Draft, Visualization

Siddharth Mehrotra

Siddharth Mehrotra: Conceptualization, Writing - Original Draft

Myrthe Tielman

Myrthe Tielman: Conceptualization