Research Article

Shaping a multidisciplinary understanding of team trust in human-AI teams: a theoretical framework

Pages 158–171 | Received 31 Aug 2021, Accepted 03 Apr 2023, Published online: 20 Apr 2023

References

  • Azevedo-Sa, H., Yang, X. J., Robert, L. P., & Tilbury, D. (2021). A unified bi-directional model for natural and artificial trust in human–robot collaboration. IEEE Robotics and Automation Letters, 6(3), 5913–5920. https://doi.org/10.1109/LRA.2021.3088082
  • Banavar, G. (2016, November). What it will take for us to trust AI. Harvard Business Review. https://hbr.org/2016/11/what-it-will-take-for-us-to-trust-ai
  • Bayati, M., Braverman, M., Gillam, M., Mack, K. M., Ruiz, G., Smith, M. S., & Horvitz, E. (2014). Data-driven decisions for reducing readmissions for heart failure: General methodology and case study. PLOS ONE, 9(10), e109264. https://doi.org/10.1371/journal.pone.0109264
  • Bosse, T., Jonker, C. M., Treur, J., & Tykhonov, D. (2007). Formal analysis of trust dynamics in human and software agent experiments. In M. Klusch, K. V. Hindriks, M. P. Papazoglou, & L. Sterling (Eds.), Cooperative information agents XI (pp. 343–359). Springer. https://doi.org/10.1007/978-3-540-75119-9_24
  • Bratman, M. (1987). Intention, plans, and practical reason. Harvard University Press.
  • Breuer, C., Hüffmeier, J., Hibben, F., & Hertel, G. (2020). Trust in teams: A taxonomy of perceived trustworthiness factors and risk-taking behaviors in face-to-face and virtual teams. Human Relations, 73(1), 3–34. https://doi.org/10.1177/0018726718818721
  • Calhoun, C. S., Bobko, P., Gallimore, J. J., & Lyons, J. B. (2019). Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment. Journal of Trust Research, 9(1), 28–46. https://doi.org/10.1080/21515581.2019.1579730
  • Carter, N. T., Carter, D. R., & DeChurch, L. A. (2018). Implications of observability for the theory and measurement of emergent team phenomena. Journal of Management, 44(4), 1398–1425. https://doi.org/10.1177/0149206315609402
  • Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for HealthCare: Predicting pneumonia risk and hospital 30-day readmission. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721–1730. https://doi.org/10.1145/2783258.2788613
  • Castelfranchi, C. (1998). Modelling social action for AI agents. Artificial Intelligence, 103(1–2), 157–182. https://doi.org/10.1016/S0004-3702(98)00056-3
  • Castelfranchi, C., & Falcone, R. (2000). Trust is much more than subjective probability: Mental components and sources of trust. Proceedings of the 33rd Annual Hawaii International Conference on System Sciences (Vol. 1, 10 pp.). https://doi.org/10.1109/HICSS.2000.926815
  • Centeio Jorge, C., Mehrotra, S., Tielman, M., & Jonker, C. M. (2021). Trust should correspond to trustworthiness: A formalization of appropriate mutual trust in human-agent teams. 22nd International Trust Workshop Co-Located with AAMAS 2021. CEUR Workshop Proceedings.
  • Centeio Jorge, C., Tielman, M. L., & Jonker, C. M. (2022). Artificial trust as a tool in human-AI teams. Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (pp. 1155–1157). IEEE.
  • Cole, M. S., Bedeian, A. G., Hirschfeld, R. R., & Vogel, B. (2011). Dispersion-composition models in multilevel research: A data-analytic framework. Organizational Research Methods, 14(4), 718–734. https://doi.org/10.1177/1094428110389078
  • Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and trust propensity: A meta-analytic test of their unique relationships with risk taking and job performance. The Journal of Applied Psychology, 92(4), 909–927. https://doi.org/10.1037/0021-9010.92.4.909
  • Cooke, N. J., Gorman, J. C., Myers, C. W., & Duran, J. L. (2013). Interactive team cognition. Cognitive Science, 37(2), 255–285. https://doi.org/10.1111/cogs.12009
  • Correia, F., Mascarenhas, S., Prada, R., Melo, F. S., & Paiva, A. (2018). Group-based emotions in teams of humans and robots. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 261–269. https://doi.org/10.1145/3171221.3171252
  • Costa, A. C. (2003). Work team trust and effectiveness. Personnel Review, 32(5), 605–622. https://doi.org/10.1108/00483480310488360
  • Costa, A. C., Fulmer, C. A., & Anderson, N. R. (2018). Trust in work teams: An integrative review, multilevel model, and future directions. Journal of Organizational Behavior, 39(2), 169–184. https://doi.org/10.1002/job.2213
  • Cummings, P., Mullins, R., Moquete, M., & Schurr, N. (2021). Hello world! I am Charlie, an artificially intelligent conference panelist.
  • De Jong, B. A., Dirks, K. T., & Gillespie, N. (2016). Trust and team performance: A meta-analysis of main effects, moderators, and covariates. The Journal of Applied Psychology, 101(8), 1134–1150. https://doi.org/10.1037/apl0000110
  • de Jong, B., Gillespie, N., Williamson, I., & Gill, C. (2021). Trust consensus within culturally diverse teams: A multistudy investigation. Journal of Management, 47(8), 2135–2168. https://doi.org/10.1177/0149206320943658
  • de Laat, P. B. (2016). Trusting the (Ro)botic other: By assumption? SIGCAS Computers and Society, 45(3), 255–260. https://doi.org/10.1145/2874239.2874275
  • Delice, F., Rousseau, M., & Feitosa, J. (2019). Advancing teams research: What, when, and how to measure team dynamics over time. Frontiers in Psychology, 10, 1324. https://doi.org/10.3389/fpsyg.2019.01324
  • de Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human–robot teams. International Journal of Social Robotics, 12(2), 459–478. https://doi.org/10.1007/s12369-019-00596-x
  • Falcone, R., Pezzulo, G., & Castelfranchi, C. (2003). A fuzzy approach to a belief-based trust computation. In R. Falcone, S. Barber, L. Korba, & M. Singh (Eds.), Trust, reputation, and security: Theories and practice (pp. 73–86). Springer. https://doi.org/10.1007/3-540-36609-1_7
  • Falcone, R., Piunti, M., Venanzi, M., & Castelfranchi, C. (2013). From manifesta to krypta: The relevance of categories for trusting others. ACM Transactions on Intelligent Systems and Technology, 4(2), 1–24. https://doi.org/10.1145/2438653.2438662
  • Fan, X., Liu, L., Zhang, R., Jing, Q., & Bi, J. (2021). Decentralized trust management: Risk analysis and trust aggregation. ACM Computing Surveys, 53(1), 1–33. https://doi.org/10.1145/3362168
  • Feitosa, J., Grossman, R., Kramer, W. S., & Salas, E. (2020). Measuring team trust: A critical and meta‐analytical review. Journal of Organizational Behavior, 41(5), 479–501. https://doi.org/10.1002/job.2436
  • Fulmer, C. A., & Gelfand, M. J. (2012). At what level (and in whom) we trust: Trust across multiple organizational levels. Journal of Management, 38(4), 1167–1230. https://doi.org/10.1177/0149206312439327
  • Fulmer, C. A., & Ostroff, C. (2021). Trust conceptualizations across levels of analysis. In N. Gillespie, A. C. Fulmer, & R. J. Lewicki (Eds.), Understanding trust in organizations (1st ed., pp. 14–41). Routledge.
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. The Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Grossman, R., & Feitosa, J. (2018). Team trust over time: Modeling reciprocal and contextual influences in action teams. Human Resource Management Review, 28(4), 395–410. https://doi.org/10.1016/j.hrmr.2017.03.006
  • Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014
  • Herzig, A., Lorini, E., Hubner, J. F., & Vercouter, L. (2010). A logic of trust and reputation. Logic Journal of the IGPL, 18(1), 214–244. https://doi.org/10.1093/jigpal/jzp077
  • Huang, L., Cooke, N. J., Gutzwiller, R. S., Berman, S., Chiou, E. K., Demir, M., & Zhang, W. (2021). Distributed dynamic team trust in human, artificial intelligence, and robot teaming. In C. S. Nam, E. P. Fitts, & J. B. Lyons (Eds.), Trust in human-robot interaction (pp. 301–319). Elsevier.
  • Hu, P., Lu, Y., & Gong, Y. (2021). Dual humanness and trust in conversational AI: A person-centered approach. Computers in Human Behavior, 119, 106727. https://doi.org/10.1016/j.chb.2021.106727
  • Jarvenpaa, S. L., Knoll, K., & Leidner, D. E. (1998). Is anybody out there? Antecedents of trust in global virtual teams. Journal of Management Information Systems, 14(4), 29–64. https://doi.org/10.1080/07421222.1998.11518185
  • Jessup, S. A., Schneider, T. R., Alarcon, G. M., Ryan, T. J., & Capiola, A. (2019). The measurement of the propensity to trust automation. In J. Y. C. Chen & G. Fragomeni (Eds.), Virtual, augmented and mixed reality. Applications and case studies (pp. 476–489). Springer International Publishing. https://doi.org/10.1007/978-3-030-21565-1_32
  • Jorge, C. C., Tielman, M. L., & Jonker, C. M. (2022). Assessing artificial trust in human-agent teams: A conceptual model. Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents, 1–3. https://doi.org/10.1145/3514197.3549696
  • Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons, 63(1), 37–50. https://doi.org/10.1016/j.bushor.2019.09.003
  • Kaplan, A., Kessler, T. T., Brill, J. C., & Hancock, P. A. (2021). Trust in artificial intelligence: Meta-analytic findings. Human Factors. Advance online publication. https://doi.org/10.1177/00187208211013988
  • Kiffin-Petersen, S. (2004). Trust: A neglected variable in team effectiveness research. Journal of Management & Organization, 10(1), 38–53. https://doi.org/10.1017/S1833367200004600
  • Korsgaard, M. A., Brower, H. H., & Lester, S. W. (2015). It isn’t always mutual: A critical review of dyadic trust. Journal of Management, 41(1), 47–70. https://doi.org/10.1177/0149206314547521
  • Kozlowski, S. W. J., Chao, G. T., Grand, J. A., Braun, M. T., & Kuljanin, G. (2013). Advancing multilevel research design: Capturing the dynamics of emergence. Organizational Research Methods, 16(4), 581–615. https://doi.org/10.1177/1094428113493119
  • Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7(3), 77–124. https://doi.org/10.1111/j.1529-1006.2006.00030.x
  • Krosnick, J. A. (1999). Survey research. Annual Review of Psychology, 50(1), 537–567. https://doi.org/10.1146/annurev.psych.50.1.537
  • Kuchenbrandt, D., Eyssel, F., Bobinger, S., & Neufeld, M. (2013). When a robot’s group membership matters. International Journal of Social Robotics, 5(3), 409–417. https://doi.org/10.1007/s12369-013-0197-8
  • Langer, M., König, C. J., Back, C., & Hemsing, V. (2022). Trust in artificial intelligence: Comparing trust processes between human and automated trustees in light of unfair bias. Journal of Business and Psychology. Advance online publication. https://doi.org/10.1007/s10869-022-09829-9
  • Larson, L., & DeChurch, L. (2020). Leading teams in the digital age: Four perspectives on technology and what they mean for leading teams. The Leadership Quarterly, 31(1), 1–18. https://doi.org/10.1016/j.leaqua.2019.101377
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392
  • Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I., & Kasper, G. (2019). The challenges of algorithm-based HR decision-making for personal integrity. Journal of Business Ethics, 160(2), 377–392. https://doi.org/10.1007/s10551-019-04204-w
  • Liao, P. -H., Hsu, P. -T., Chu, W., & Chu, W. -C. (2015). Applying artificial intelligence technology to support decision-making in nursing: A case study in Taiwan. Health Informatics Journal, 21(2), 137–148. https://doi.org/10.1177/1460458213509806
  • Lim, B., & Klein, K. J. (2006). Team mental models and team performance: A field study of the effects of team mental model similarity and accuracy. Journal of Organizational Behavior, 27(4), 403–418. https://doi.org/10.1002/job.387
  • Lynn, T., van der Werff, L., & Fox, G. (2021). Understanding trust and cloud computing: An integrated framework for assurance and accountability in the cloud. In T. Lynn, J. G. Mooney, L. van der Werff, & G. Fox (Eds.), Data privacy and trust in cloud computing: Building trust in the cloud through assurance and accountability (pp. 1–20). Springer International Publishing. https://doi.org/10.1007/978-3-030-54660-1_1
  • Mathieu, J. E., Maynard, M. T., Rapp, T., & Gilson, L. (2008). Team effectiveness 1997–2007: A review of recent advancements and a glimpse into the future. Journal of Management, 34(3), 410–476. https://doi.org/10.1177/0149206308316061
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
  • McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24–59. https://doi.org/10.2307/256727
  • McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), Article 12. https://doi.org/10.1145/1985347.1985353
  • McNeese, N. J., Demir, M., Chiou, E., Cooke, N., & Yanikian, G. (2019). Understanding the role of trust in human-autonomy teaming. Proceedings of the 52nd Hawaii International Conference on System Sciences. IEEE.
  • Mirowska, A. (2020). AI evaluation in selection: Effects on application and pursuit intentions. Journal of Personnel Psychology, 19(3), 142–149. https://doi.org/10.1027/1866-5888/a000258
  • Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. The Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
  • O’Neill, T., McNeese, N., Barron, A., & Schelble, B. (2022). Human–autonomy teaming: A review and analysis of the empirical literature. Human Factors, 64(5), 904–938. https://doi.org/10.1177/0018720820960865
  • Onnasch, L., & Roesler, E. (2021). A taxonomy to structure and analyze human–robot interaction. International Journal of Social Robotics, 13(4), 833–849. https://doi.org/10.1007/s12369-020-00666-5
  • Page, M. J., Moher, D., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … McKenzie, J. E. (2021). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. The BMJ, 372, n160. https://doi.org/10.1136/bmj.n160
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J. -Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. The Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879
  • Rao, A. S., & Georgeff, M. P. (1995). BDI agents: From theory to practice. Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95) (pp. 312–319). ICMAS.
  • Rich, C., & Sidner, C. L. (1997). COLLAGEN: When agents collaborate with people. Proceedings of the First International Conference on Autonomous Agents - AGENTS ’97, 284–291. https://doi.org/10.1145/267658.267730
  • Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd ed., Global ed.). Pearson.
  • Sabater-Mir, J., & Vercouter, L. (2013). Trust and reputation in multiagent systems. In G. Weiss (Ed.), Multiagent systems (pp. 381–419). MIT Press.
  • Salas, E., Sims, D. E., & Burke, C. S. (2005). Is there a “big five” in teamwork? Small Group Research, 36(5), 555–599. https://doi.org/10.1177/1046496405277134
  • Salmon, P. M., Read, G. J. M., Walker, G. H., Stevens, N. J., Hulme, A., McLean, S., & Stanton, N. A. (2022). Methodological issues in systems human factors and ergonomics: Perspectives on the research–practice gap, reliability and validity, and prediction. Human Factors and Ergonomics in Manufacturing & Service Industries, 32(1), 6–19. https://doi.org/10.1002/hfm.20873
  • Savela, N., Kaakinen, M., Ellonen, N., & Oksanen, A. (2021). Sharing a work team with robots: The negative effect of robot co-workers on in-group identification with the work team. Computers in Human Behavior, 115, 106585. https://doi.org/10.1016/j.chb.2020.106585
  • Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377–400. https://doi.org/10.1177/0018720816634228
  • Schelble, B. G., Lopez, J., Textor, C., Zhang, R., McNeese, N. J., Pak, R., & Freeman, G. (2022). Towards ethical AI: Empirically investigating dimensions of AI ethics, trust repair, and performance in human-AI teaming. Human Factors. Advance online publication. https://doi.org/10.1177/00187208221116952
  • Schlicker, N., Langer, M., Ötting, S. K., Baum, K., König, C. J., & Wallach, D. (2021). What to expect from opening up ‘black boxes’? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior, 122, 106837. https://doi.org/10.1016/j.chb.2021.106837
  • Schoorman, F. D., Mayer, R. C., & Davis, J. H. (2007). An integrative model of organizational trust: Past, present, and future. Academy of Management Review, 32(2), 344–354. https://doi.org/10.5465/amr.2007.24348410
  • Seeber, I., Waizenegger, L., Seidel, S., Morana, S., Benbasat, I., & Lowry, P. B. (2020). Collaborating with technology-based autonomous agents: Issues and research opportunities. Internet Research, 30(1), 1–18. https://doi.org/10.1108/INTR-12-2019-0503
  • Shamir, B., & Lapidot, Y. (2003). Trust in organizational superiors: Systemic and collective considerations. Organization Studies, 24(3), 463–491. https://doi.org/10.1177/0170840603024003912
  • Sheridan, T. B. (2019). Extending three existing models to analysis of trust in automation: Signal detection, statistical parameter estimation, and model-based control. Human Factors, 61(7), 1162–1170. https://doi.org/10.1177/0018720819829951
  • Smith, P. J., & Hoffman, R. R. (Eds.). (2017). Cognitive systems engineering: The future for a changing world (1st ed.). CRC Press. https://doi.org/10.1201/9781315572529
  • Solberg, E., Kaarstad, M., Eitrheim, M. H. R., Bisio, R., Reegård, K., & Bloch, M. (2022). A conceptual model of trust, perceived risk, and reliance on AI decision aids. Group & Organization Management, 47(2), 187–222. https://doi.org/10.1177/10596011221081238
  • Steain, A., Stanton, C. J., & Stevens, C. J. (2019). The black sheep effect: The case of the deviant ingroup robot. PLOS ONE, 14(10), e0222975. https://doi.org/10.1371/journal.pone.0222975
  • Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C. G., & van Moorsel, A. (2020). The relationship between trust in AI and trustworthy machine learning technologies. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 272–283. https://doi.org/10.1145/3351095.3372834
  • Turner, J. C., Hogg, M. A., Oakes, P. J., Reicher, S. D., & Wetherell, M. S. (1987). Rediscovering the social group: A self-categorization theory. Basil Blackwell.
  • Ulfert, A. -S., & Georganta, E. (2020). A model of team trust in human-agent teams. Companion Publication of the 2020 International Conference on Multimodal Interaction, 171–176. https://doi.org/10.1145/3395035.3425959
  • van den Bosch, K., Schoonderwoerd, T., Blankendaal, R., & Neerincx, M. (2019). Six challenges for human-AI Co-learning. In R. Sottilare, & J. Schwarz (Eds.), HCII 2019. Lecture Notes in Computer Science (Vol. 11597, pp. 572–589). Springer. https://doi.org/10.1007/978-3-030-22341-0_45
  • van der Werff, L., Legood, A., Buckley, F., Weibel, A., & de Cremer, D. (2019). Trust motivation: The self-regulatory processes underlying trust decisions. Organizational Psychology Review, 9(2–3), 99–123. https://doi.org/10.1177/2041386619873616
  • van Wissen, A., Gal, Y., Kamphorst, B. A., & Dignum, M. V. (2012). Human-agent teamwork in dynamic environments. Computers in Human Behavior, 28(1), 23–33. https://doi.org/10.1016/j.chb.2011.08.006
  • Webber, S. S. (2002). Leadership and trust facilitating cross‐functional team success. Journal of Management Development, 21(3), 201–214. https://doi.org/10.1108/02621710210420273
  • Webber, S. S. (2008). Development of cognitive and affective trust in teams: A longitudinal study. Small Group Research, 39(6), 746–769. https://doi.org/10.1177/1046496408323569
  • Zhong, Y., Bhargava, B., Lu, Y., & Angin, P. (2015). A computational dynamic trust model for user authorization. IEEE Transactions on Dependable and Secure Computing, 12(1), 1–15. https://doi.org/10.1109/TDSC.2014.2309126