Review Article

Collaborative Intelligence: A Scoping Review of Current Applications

Article: 2327890 | Received 15 Mar 2023, Accepted 29 Feb 2024, Published online: 17 Mar 2024

ABSTRACT

This review provides a novel examination of the emerging field of collaborative intelligence and demonstrates the value that human-AI teams can deliver. Humans and artificial intelligence (AI) systems have complementary strengths. This complementarity creates the potential to achieve a step-change in performance by combining inputs from human and AI on a common task. We introduce the construct of “collaborative intelligence” and develop a set of criteria for evaluating whether an AI system enables collaborative intelligence. Applications utilizing collaborative intelligence had to have (1) complementarity (i.e. the collaboration draws upon complementary human and AI capability to improve outcomes), (2) a shared objective and outcome, and (3) sustained, two-way task-related interaction between human and AI. A systematic review of 1,250 documents describing AI applications published between 2012 and 2021 was carried out to investigate whether real-world examples of “collaborative intelligence” could be identified. The review yielded 16 AI systems which met the criteria, demonstrating that collaboration between humans and AI systems is possible and that these systems offer a wide range of performance benefits including efficiency, quality, creativity, safety, and human enjoyment.

Introduction

Industry 4.0 is underway with organizations adopting automated systems, building their internet of things and using big data, smart systems, and cyber-physical systems to expand their capabilities and improve productivity (Mason, Ayre, and Burns Citation2022; Schuh et al. Citation2020; Szász et al. Citation2021; Veile et al. Citation2020). The next wave of innovation, known as Industry 5.0, focuses on human-centric technology development and adoption (De et al. Citation2021; Maddikunta et al. Citation2022). Rather than simply automating tasks that were previously performed by humans, there is greater focus on using collaboration between humans and smart machines to augment the capability of humans (Sindhwani et al. Citation2022).

The term collaborative intelligence refers to collaborative human-AI systems that leverage the different attributes and strengths of each agent to achieve further improvements in work outcomes (Billman et al. Citation2006; Daugherty and Wilson Citation2018; Jarrahi Citation2018; Seeber et al. Citation2020). Organizations investing in AI-human collaboration are expected to boost revenues by 38% within five years (Shook and Knickrehm Citation2018). While the literature provides a strong rationale for utilizing collaborative intelligence (Dellermann et al. Citation2019; Madni and Madni Citation2018; Traumer, Oeste-Reiß, and Leimeister Citation2017), the construct has not been tested through systematic empirical research. A systematic review of AI applications is needed to determine whether any collaborative intelligence applications have been developed and, if so, how they are being used and what benefits they provide. In this paper, we develop a set of criteria for identifying applications that utilize collaborative intelligence and use them to systematically analyze potential collaborative intelligence applications reported in academic and gray literature. Our goal is to test whether collaborative intelligence exists as more than a theoretical construct and, if so, to describe the types of collaborative intelligence that are currently technologically and economically feasible.

Why Collaborative Intelligence?

Examples of more collaborative human-AI interactions (Epstein Citation2015; Kolbeinsson, Lagerstedt, and Lindblom Citation2019; Poser and Bittner Citation2020; Seeber et al. Citation2020) are described using a range of terms including human-robot interaction (Cesta, Orlandini, and Umbrico Citation2018), human-robot teams (Wolf and Stock-Homburg Citation2023), human/machine in the loop (Ostheimer, Chowdhury, and Iqbal Citation2021), hybrid intelligence (Akata et al. Citation2020; Ostheimer, Chowdhury, and Iqbal Citation2021) and collective intelligence (Dellermann et al. Citation2019). There is currently little consensus across the literature regarding how to differentiate the range of ways in which humans and AI can collaborate (Wolf and Stock-Homburg Citation2023). In this study we use descriptions of collaborative intelligence suggested by other researchers to draw out three defining characteristics of collaborative intelligence. First, the collaboration involves a sequence of shared actions between human and AI agents toward a shared objective (Cienki Citation2015; Kolbeinsson, Lagerstedt, and Lindblom Citation2019; Wang et al. Citation2020). Second, to enable this level of interaction the AI agents must have the ability to share and respond to information about the task and adapt to changes in the state of the human agent and the task (Kolbeinsson, Lagerstedt, and Lindblom Citation2019; Wang et al. Citation2020). Finally, the collaboration between the human and AI agents improves the performance, novelty, productivity, or quality of work above what could be done individually (Dellermann et al. Citation2019; Madni and Madni Citation2018). Together, these descriptions provide three criteria for identifying AI systems that enable collaborative intelligence:

  • Complementarity: The goal of the interaction between human and AI agents is to leverage their unique strengths to achieve improved outcomes. This excludes human-AI interactions that use the human to teach the AI so that in the long run the AI can perform the task independently. It also excludes applications that are designed to probe or test the dynamics of collaboration or teamwork rather than to complete a task (Gao et al. Citation2021).

  • Shared objective: The human and AI agents are focused on the same objective and the activities of the human and AI agents are integrated and indivisible in the final output that is produced (Dellermann et al. Citation2019; Dubey et al. Citation2020; Johnson et al. Citation2014). The workflow must go beyond a simple division of labor or a transactional relationship.

  • Sustained interaction: Interaction between the human and AI agents must extend beyond a static interaction such as a single question/answer dynamic (Traumer, Oeste-Reiß, and Leimeister Citation2017). Reciprocal communication which enables each agent to understand changes in the state of the objective or the other agent and respond adaptively is critical for all collaborations and is a key feature of collaborative intelligence (Madni and Madni Citation2018; McDermott et al. Citation2018).

The motivation for developing collaborative intelligence applications (as opposed to AI applications with collaborative capability) has two sources: the potential for improved task performance and improved work satisfaction. Whilst the capability of AI has been improving rapidly, there are still many tasks that AI cannot perform despite these being simple tasks for a human. The strength of AI lies in its computational power and its ability to process very large amounts of data, recognize patterns and evaluate alternative decision options (Ajay Agrawal, Gans, and Goldfarb Citation2019; Jarrahi Citation2018). However, AI struggles to understand common-sense situations (Jarrahi Citation2018), make intuitive decisions or judgments based on indescribable factors (Ajay Agrawal, Gans, and Goldfarb Citation2019; Goldfarb and Lindsay Citation2020; Jarrahi Citation2018) and respond to novel situations (Jarrahi Citation2018) – tasks that a human can perform well on. These complementary strengths suggest that there are likely to be many fields in which performance can be optimized by using a combination of human intelligence and AI (De Luca Citation2021).

The potential of collaborative intelligence is demonstrated in the evolving use of human-computer chess teams that combine human intuition and computational power (Kasparov Citation2010). When IBM’s Deep Blue program defeated world chess champion Garry Kasparov in 1997, the ability to process hundreds of millions of moves per second outperformed human creativity and imagination (Kasparov Citation2010). Less than a decade later, in 2005, in a chess tournament of human-computer teams, two amateur players won against teams of grandmasters and the most powerful computer programs. The amateurs developed a superior process to leverage the most value from their computers, demonstrating “weak human + machine + better process was superior to a strong computer alone and … superior to a strong human + machine + inferior process” (Kasparov Citation2010; Thompson Citation2013).

The second argument for utilizing collaborative intelligence lies in the potential to improve the quality and scope of work for humans. Some types of work are inherently more motivating than others (Hackman and Oldham Citation1975). By automating functions that are less motivating for humans but allowing the human to add value by performing more rewarding tasks, we can improve the quality of work. Typically, AI systems are implemented and designed to increase the productivity and accuracy of tasks (Birhane et al. Citation2022). However, the sociotechnical nature of collaborative AI allows us to extend this capability and consider improving worker satisfaction (Sartori and Theodorou Citation2022). While it is not within the scope of this paper to discuss the optimal design of collaborative work involving both humans and AI, this important topic is already being explored (Parker and Grote Citation2022). Current research suggests that the experience of using AI can affect a human worker’s experience of predictability, controllability, meaningfulness, and fairness (Battina Citation2018; Langer and Landers Citation2021; Oh et al. Citation2018; Parker and Grote Citation2022). In addition, by allowing human capability to be augmented by AI capability, there is potential to address skills gaps within the existing workforce or increase the pool of workers who can perform a given role.

Research Objectives and Contributions

Our goal through this research is to evaluate the proposition that complementary facets of human and artificial intelligence can be combined to achieve a step-change improvement in performance. To this end, we developed a set of criteria for evaluating whether or not “collaborative intelligence” is embodied in an AI system. We then carried out a systematic review of AI applications published between 2012 and 2021 to determine whether these criteria were met. Having identified examples of AI systems that met our criteria, we then provided a description of these first examples of collaborative intelligence, describing (1) what types of tasks they perform, (2) the roles of the human and the AI in the collaboration, (3) the mechanisms through which the human and the AI interact and (4) the improved outcomes achieved from the collaboration. The review illustrates that collaboration between humans and artificial intelligence on a shared goal is not only possible but that it has the potential to support positive outcomes across a wide range of measures, from creativity, to safety, to productivity.

Methodology

A systematic methodology was adopted for the literature search, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Arksey and O’Malley Citation2005; Liberati et al. Citation2009). Scopus, ProQuest, Web of Science, and IEEE Xplore databases were used for the literature search because they contain a mix of academic and gray literature sources across a broad range of topics. Additional directed internet searches were carried out using the Google search engine in private browser mode to avoid the impact of cookies and previous searches on the returned results. In addition, where secondary sources of potential applications were revealed during the review of a full-text article, the secondary reference was also reviewed.

Keywords were identified and selected through an iterative approach including pilot searches and reviewing key documents. These keywords were “human” and “artificial intelligence collaboration;” “hybrid intelligence” and “artificial intelligence;” “collective intelligence” and “artificial intelligence;” “human computer collaboration” and “artificial intelligence;” “hybrid teamwork” and “artificial intelligence;” “cobot” and “artificial intelligence;” “human machine collaboration” and “artificial intelligence;” “work” and “artificial intelligence.” Proximity and wildcard syntax were used where databases allowed, along with commonly used acronyms and synonyms for artificial intelligence (e.g., AI, machine learning, ML). The results of the searches were limited to English-language documents published between 1 January 2012 and 31 December 2021.

The keywords were simplified for the Google search protocol to accommodate the broader range of potentially relevant documents and to avoid being overly restrictive or returning results that replicated those done in the database search. The following key terms were used: human machine collaboration artificial intelligence machine learning; hybrid intelligence; collective intelligence artificial intelligence machine learning; human computer collaboration; hybrid teamwork artificial intelligence machine learning; cobot artificial intelligence machine learning; human artificial intelligence machine learning collaboration. Restrictive limiters such as proximity searches and quotation marks were also removed. The Google relevancy ranking was relied on to identify the most relevant documents, and the first 30 items returned for each of the seven targeted Google searches were analyzed for applications of collaborative intelligence.
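To make the search protocol more concrete, the sketch below shows one way the keyword pairs could be assembled into Boolean query strings. This is our own illustration rather than the tooling used for the review; the Scopus-style field code, date limits, and language filter shown are assumptions, and the exact proximity and wildcard syntax differs across Scopus, ProQuest, Web of Science, and IEEE Xplore.

```python
# Illustrative sketch only (not the review's actual search tooling): pairing each
# primary keyword with a block of AI synonyms to form Boolean query strings.
# The Scopus-style "TITLE-ABS-KEY" field code and the PUBYEAR/LANGUAGE limits
# are assumptions for illustration; syntax varies by database.

AI_TERMS = '("artificial intelligence" OR "AI" OR "machine learning" OR "ML")'

PRIMARY_TERMS = [
    '"human" AND "artificial intelligence collaboration"',
    '"hybrid intelligence"',
    '"collective intelligence"',
    '"human computer collaboration"',
    '"hybrid teamwork"',
    '"cobot"',
    '"human machine collaboration"',
    '"work"',
]

def build_query(term: str) -> str:
    """Combine one primary term with the AI synonym block plus language and date limits."""
    return (f'TITLE-ABS-KEY(({term}) AND {AI_TERMS}) '
            f'AND LANGUAGE(english) AND PUBYEAR > 2011 AND PUBYEAR < 2022')

if __name__ == "__main__":
    for term in PRIMARY_TERMS:
        print(build_query(term))
```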


Literature Review

The database search returned 1,250 documents. The titles and executive summaries/abstracts (where available) of these documents were initially screened to identify those that were eligible for inclusion for full-text review. To be included, a document needed to indicate or reference an AI application that involved tasks completed by both an AI and a human. Of the 1,250 documents, 335 were eliminated as duplicates and a further 445 were excluded after the initial screening process. The remaining 470 documents were assessed for inclusion in the systematic review through a detailed full-text review. Each document was assessed against the following eligibility requirements:

  • Complementarity: The collaboration between the human and AI agents improves the performance, job satisfaction, novelty, or productivity of work. The outcome of the collaborative task in a collaborative intelligence application achieves a better result than either the human or the AI would alone.

  • Shared objective and output: The human and AI agents are focused on the same objective and the final output represents an integration of their individual contributions.

  • Sustained period of interaction: Interaction between the human and artificial intelligence must occur over time, rather than via a singular or static interaction.

From an analysis of the full texts, additional records were excluded because either (1) the full-text article could not be sourced after extensive searching (n = 3), or (2) the document did not meet one or more of the inclusion criteria (n = 448). A total of 451 documents were eliminated from further analysis based on these full-text exclusion criteria. The remaining 19 documents, detailing 16 collaborative intelligence applications (3 collaborative intelligence applications were each detailed in two unique documents), were included in the systematic review for further analysis (see Figure 1).
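The screening logic and document flow described above can be summarized in a short sketch. The dataclass and field names below are our own hypothetical illustration of how the three eligibility criteria could be recorded per document; the flow counts simply reproduce the numbers reported in this section.

```python
# Hypothetical illustration of the screening workflow and the document flow
# reported above; the ScreeningRecord structure is our own, not part of the
# published review protocol.
from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    complementarity: bool        # outcome better than human or AI alone
    shared_objective: bool       # integrated, indivisible output toward one objective
    sustained_interaction: bool  # interaction over time, not a single exchange

    def eligible(self) -> bool:
        """A document is retained only if it meets all three criteria."""
        return self.complementarity and self.shared_objective and self.sustained_interaction

example = ScreeningRecord(complementarity=True, shared_objective=True, sustained_interaction=False)
assert not example.eligible()   # fails the sustained-interaction criterion

# Document flow reported in the review (Figure 1).
retrieved = 1250
duplicates = 335
excluded_at_screening = 445
full_text_reviewed = retrieved - duplicates - excluded_at_screening            # 470
full_text_unavailable = 3
failed_criteria = 448
included_documents = full_text_reviewed - full_text_unavailable - failed_criteria  # 19
applications = included_documents - 3   # 3 applications were each described in two documents -> 16

assert full_text_reviewed == 470 and included_documents == 19 and applications == 16
```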

Figure 1. Results of the literature search.

Results

An extensive search (encompassing 1,250 documents from the academic and gray literature) revealed 16 examples of AI applications that met the criteria for collaborative intelligence (see Table 1). Analysis of these applications revealed that they were designed to perform various types of work. We identified five types of collaborative intelligence applications: creative agents, industrial agents, healthcare agents, emergency services agents and knowledge work agents.

Table 1. Collaborative intelligence applications identified from the review.

Characteristics of Collaborative Intelligence Applications

The second objective of this review was to profile the current state of collaborative intelligence applications by describing their characteristics.

Stage of Development

Although our analysis covered documents published between 2012 and 2021, all 16 of the collaborative intelligence applications included in the analysis were documented in 2017 or later, indicating the emerging nature of this field. Three of the collaborative intelligence applications had been released for use publicly, namely, Flow Machines Professional (Avdeeff Citation2019), Shelley (O’Brien Citation2017; Yanardag, Cebrian, and Rahwan Citation2021) and Bionicworkplace (Festo Citation2018; Kärcher et al. Citation2017) (see Table 1). Bionicworkplace was the only identified collaborative intelligence application that was commercially available. Bionicworkplace, developed by Festo, is a complex and adaptable cyber-physical system comprising various sensors, tools, and capabilities that provide a flexible, collaborative workstation. The adaptability of the cobot is especially beneficial for increasing productivity in the development of short runs of customized items (Kärcher et al. Citation2017).

The other two publicly available collaborative intelligence applications were offered as free digital products. Flow Machines, developed by Sony, enables collaboration between humans and AI to enhance and inspire creativity during music production (Pachet, Roy, and Carré Citation2021). Shelley, a TwitterBot developed by a group of researchers to probe the success of human-AI collaborative fictional horror story development, has been deployed for public use on the Twitter platform, resulting in over 500 collaborative narratives. Shelley was designed to enhance the emotional impact of human stories, introducing novel and surprising directions to the narrative (Yanardag, Cebrian, and Rahwan Citation2021). The collaborative stories between Shelley and humans were found to induce greater negative affect and state anxiety than those created by Shelley alone, indicating the success of collaborative creation (Yanardag, Cebrian, and Rahwan Citation2021). There is no indication that it has been used to generate any financial profit for the developers (Yanardag, Cebrian, and Rahwan Citation2021). The remaining 13 collaborative intelligence applications were defined as prototypes and proof-of-concept designs, developed in universities and research organizations. Some of these applications appeared to be undergoing further development toward bringing the application to market for public use, for example DroneResponse, ARMAR-6 and ForSense (see Table 1).

Collaboration Channels

The applications that we identified were evenly divided in terms of whether the collaboration between the human and AI occurred in a virtual or physical environment. The virtual collaborations occurred through a graphical user interface (GUI) (n = 7) and a virtual reality headset (n = 1). ForSense (see Table 1) is an example of collaborative intelligence that occurs in a virtual environment. It was developed for people carrying out exploratory research online (e.g. collating, organizing and making sense of information) (Rachatasumrit et al. Citation2021). The system provides collaborative support to human users, allowing them to accelerate, improve and coordinate their search tasks. The collaboration is enabled through a web browser extension. The GUIs used in the identified collaborative applications enable the human and the AI to provide feedback and respond to one another, although the human is the arbiter in decision-making (with the exception of Shelley (O’Brien Citation2017; Yanardag, Cebrian, and Rahwan Citation2021)).

The collaborations involving cyber-physical systems used robots (or cobots) or drones. The human and the cobot collaborated through a virtual communication channel using a GUI, sensors or wearable devices. DroneResponse (Agrawal, Cleland-Huang, and Steghöfer Citation2020) is an example of a cyber-physical system where communication occurs through a GUI. The DroneResponse prototype was developed in 2020 by a group of academics from the United States using semi-autonomous UAVs that collaborate with human agents through GUIs to provide faster, more successful and safer emergency responses than humans or UAVs could achieve alone (Agrawal, Cleland-Huang, and Steghöfer Citation2020). In this application, the GUI enabled bi-directional communication around mission plans and situational changes between the human and UAV rescue teams. ARMAR-6 (Asfour et al. Citation2018) was another example of a cyber-physical cobot, which collaborated with human technicians on maintenance tasks. Communication with the human agent occurred through gestures, voice commands and various sensors used by the cobot to detect changes in the state of a task or its human collaborator.
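The bi-directional communication described for DroneResponse can be pictured with a minimal message schema. The sketch below is purely illustrative of the pattern (mission plans flowing from the human operator, situational updates flowing back from each UAV); the type names and fields are assumptions, not the published prototype's interface.

```python
# Purely illustrative message schema for the kind of bi-directional human-UAV
# coordination described for DroneResponse; type names and fields are
# assumptions, not the prototype's actual interface.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MissionPlanUpdate:
    """Human operator -> UAV agents: a revised search area and task priorities."""
    search_area: Tuple[float, float, float, float]     # bounding box of the search region
    priority_targets: List[str] = field(default_factory=list)

@dataclass
class SituationalUpdate:
    """UAV agent -> human operator: a detected change that may require replanning."""
    uav_id: str
    position: Tuple[float, float]
    detection: Optional[str] = None     # e.g. "possible person in water"
    confidence: float = 0.0

# The GUI relays both message types, so either party can trigger replanning;
# this two-way flow is what makes the interaction bi-directional.
plan = MissionPlanUpdate(search_area=(0.0, 0.0, 500.0, 500.0), priority_targets=["riverbank"])
update = SituationalUpdate(uav_id="uav-3", position=(120.0, 85.0),
                           detection="possible person in water", confidence=0.72)
```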

In terms of the way that information is shared between the human and AI agents during collaboration, the SAGE application (Goldberg, Belyaev, and Sluchak Citation2021) presents a unique design choice. It is a patient management system designed to collaborate with human medical practitioners to improve diagnostics and patient treatment and care. The designers of SAGE sought to develop a system that communicates with human agents in an explainable way to build trust and ultimately better collaborative outcomes. Non-collaborative patient management systems will simply provide a diagnosis, but SAGE collaborates with the practitioner by probing data collected from the patient during the course of treatment to identify and communicate any indicators that are not consistent with the practitioners’ original diagnosis or prognosis. SAGE uses a high-level and intuitive visual interface to communicate any issues of concern but the practitioners can interrogate SAGE to understand the patient indicators that underpin the issues that SAGE identifies. In this way, SAGE can reduce the impact of the human practitioners’ biases or limited attention and improve decision confidence (Goldberg, Belyaev, and Sluchak Citation2021). The explainability and transparency of the system’s decision making was designed to address socio-technological barriers such as trust between humans and machines that affect the quality of collaborative decision making (Goldberg, Belyaev, and Sluchak Citation2021).
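As a concrete, deliberately simplified reading of this design, the sketch below flags patient indicators that fall outside the ranges expected under the practitioner's working diagnosis and retains the underlying evidence so each flag can be interrogated. The indicator names, ranges, and data are illustrative assumptions on our part, not SAGE's actual implementation.

```python
# Deliberately simplified illustration of the SAGE-style pattern described above:
# flag indicators inconsistent with the working diagnosis, and keep the underlying
# evidence available so the practitioner can interrogate each flag. All names,
# ranges and values here are illustrative assumptions, not SAGE's implementation.

EXPECTED_RANGES = {
    # diagnosis -> indicator -> (low, high) expected under that diagnosis
    "community_acquired_pneumonia": {
        "temperature_c": (36.0, 39.5),
        "heart_rate_bpm": (60, 120),
        "creatinine_umol_l": (45, 110),
    }
}

def flag_inconsistencies(diagnosis: str, observations: dict) -> list:
    """Return flags for observations outside the ranges expected under the diagnosis."""
    flags = []
    for indicator, value in observations.items():
        expected = EXPECTED_RANGES.get(diagnosis, {}).get(indicator)
        if expected and not (expected[0] <= value <= expected[1]):
            flags.append({
                "indicator": indicator,
                "value": value,
                "expected_range": expected,   # retained so the flag can be interrogated
            })
    return flags

flags = flag_inconsistencies(
    "community_acquired_pneumonia",
    {"temperature_c": 38.2, "heart_rate_bpm": 118, "creatinine_umol_l": 180},
)
# The out-of-range creatinine value is flagged together with its expected range,
# so the practitioner can drill into the evidence rather than receive a bare alert.
```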

Types of Benefits/Outcomes Sought from the Collaboration

Enhanced productivity or accuracy was a common objective for the collaborative intelligence applications that we identified. Improving human worker satisfaction and safety was another motivation for collaborative intelligence applications. The third type of outcome sought was creativity.

An example of an application that was designed to increase productivity is HALS (Diao, Chen, and Kvedar Citation2021; van der Wal et al. Citation2021). HALS is a collaborative human-AI labeling workflow that assists human pathologists with cellular annotation of pathological cell types in tissue samples. The system is designed to enable accurate annotation of large data sets that were previously cost-prohibitive. In a trial with a group of seven expert pathologists, collaborative use of HALS reduced manual work by 91% while improving data quality by 4.34% (van der Wal et al. Citation2021).

There were several collaborative intelligence applications that were designed to improve human safety, either for a human client or for a human worker. Bionicworkplace (a cobot used in collaborative manufacturing and production tasks) is designed to reduce physical strain on human workers as well as to improve their productivity (Kärcher et al. Citation2017). On the other hand, SAGE (Goldberg, Belyaev, and Sluchak Citation2021) and DroneResponse (Agrawal, Cleland-Huang, and Steghöfer Citation2020) improve safety by supporting accurate patient diagnosis and management (SAGE) and providing faster search and response for people in need of rescue (DroneResponse).

Evolver (Feldman Citation2017) is an example of a collaborative intelligence application designed to enhance human creativity during the production of artworks and graphic designs. Evolver collaboratively produces generative graphic design artifacts based on constraints controlled by a human graphic designer in an iterative design process enabled through a software program. A group of 10 designers tested the application to assess how it affected their outputs and the creative process. The collaboration was found to provide access to alternative design solutions, helping designers step outside their current frame of reference during the ideation process (Feldman Citation2017). Although the designers largely reported positive experiences associated with collaborating with Evolver, the issue of authorship was raised by several participants. This is likely to be an issue that arises with a number of co-creative collaborative agents and could be a challenge to the adoption of these technologies into professional creative practices.

Discussion

Our systematic review of the literature yielded 16 examples of collaborative intelligence applications in which human and AI agents work collaboratively toward a shared outcome that is more than either agent could achieve alone. Almost all of these examples were in early or prototype stages of development rather than commercially available products. There was a mix of embodied cyber-physical and virtual software systems, and the channels and format of the communication between the human and the AI also took several forms, ranging from sensors and computer vision to natural language processing and GUI key commands. The applications were designed for a variety of fields including healthcare, manufacturing, graphic design, emergency services and creative writing.

Technological Feasibility of Collaborative Intelligence

The range of applications that we identified illustrates that collaboration between humans and AI is technologically feasible across a range of domains and toward multiple ends. Furthermore, the applications delivered a range of benefits. Working with creative collaborative intelligence applications improved both efficiency and creativity. Manufacturing and assembly collaborative intelligence applications improved efficiency and health and safety. Knowledge work applications improved the accuracy and coverage of the decisions and classifications that were produced. We infer that it is technologically feasible to combine a variety of human and AI capabilities and thereby achieve benefits in terms of efficiency, quality, creativity, safety, and human enjoyment.

Some researchers argue that because AI is designed to support humans and should give humans control in the interaction, AI is more appropriately described as a tool or knowledge artifact than as a collaborator or agent (Cabitza, Campagner, and Simone Citation2021; Shneiderman Citation2020). In our view, the essential aspect of collaboration is not equality between the actors in the collaboration but rather the fact that they each contribute to a shared output. For example, the first author of a research paper might define the scope of the paper and make decisions about how to incorporate the input of each author in the final manuscript. Nevertheless, the authors of the paper are considered to be collaborators by virtue of their contributions to that joint output. Furthermore, it is important to be able to differentiate between AI systems that are designed to perform a discrete task (serving as a tool for that task) and AI systems that can contribute to a task over time, while responding to and sharing information about that task with a human actor. So, while the applications identified in this review do not demonstrate agency in the form of awareness or independent initiative, they are designed to contribute to an output using collaborative processes. The socio-technical implications of AI tools, AI knowledge producers and AI collaborators are likely to be very different and they should be differentiated through our terminology.

Addressing Large Language Models

Since conducting our systematic review, Large Language Models (LLMs) such as ChatGPT, along with related generative AI tools such as DALL-E, have entered the market and achieved an unprecedented rate of adoption (Hart et al. Citation2023). While a review of the growing range of LLMs available in the market is not within scope, we would be remiss not to consider how these AI tools align with our criteria for collaborative intelligence. LLMs meet the first criterion for collaborative intelligence because they complement human workers through their superior computational power, which enables them to draw upon and integrate an enormous body of data. Experimental research has already demonstrated that this complementarity improves outcomes including productivity, quality and customer satisfaction (Brynjolfsson, Li, and Raymond Citation2023; Dell’acqua et al. Citation2023; Noy and Zhang Citation2023). Furthermore, LLMs are capable of understanding and supporting humans at multiple stages in the workflow, from brainstorming and content creation to editing and adapting (Irons et al. Citation2023). LLMs are known for their highly naturalistic conversational capability. When prompted to, they are able to draw upon the history of their interactions with the human user when formulating responses, enabling sustained interaction related to the objective defined by the human user. In addition, they can adapt their output in response to feedback or prompts provided by the human. In this respect, the final output can (even if it does not always) represent a fairly indivisible integration of human and AI contributions. However, given the large and varied range of LLMs now available, classifying them as a single application against the three criteria used to define collaborative intelligence applications in the current study does not account for the complexity and variation between them. Therefore, we conclude that while LLMs largely embody the defining characteristics of collaborative intelligence, further examination in future research is warranted. The massive interest in LLMs is testimony to the value of AI systems that can complement a human with their greater computational power, understand and communicate with a human about a shared objective, and contribute to a shared output. These tools also illustrate the value of communication channels that allow the human to provide contextual information and identify refinements or improvements to the AI’s output that are needed to solve the task at hand.
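To make the "sustained interaction" point concrete, the sketch below shows the generic pattern by which a conversation history is carried across turns so that the model can respond to feedback on a shared draft. The `generate` function is a stand-in for whichever LLM is used and is an assumption on our part, not a reference to a specific product or API.

```python
# Generic sketch of sustained human-LLM interaction on a shared output: the
# accumulated message history is passed on every turn so the model can adapt to
# feedback. `generate` is a placeholder for an LLM call (an assumption), not a
# specific vendor API.
from typing import Dict, List

def generate(history: List[Dict[str, str]]) -> str:
    """Placeholder for a call to an LLM given the full conversation history."""
    raise NotImplementedError("swap in a real model call here")

def collaborative_session(turns: List[str]) -> List[Dict[str, str]]:
    """Run a multi-turn session in which each human prompt refines the shared draft."""
    history: List[Dict[str, str]] = [
        {"role": "system", "content": "You are co-writing a project summary with the user."}
    ]
    for human_input in turns:
        history.append({"role": "user", "content": human_input})
        draft = generate(history)                      # the model sees the whole history
        history.append({"role": "assistant", "content": draft})
    return history

# Example of the interaction pattern (the model call itself is not implemented here):
# collaborative_session([
#     "Draft a one-paragraph summary of our field trial.",
#     "Good, but shorten it and emphasise the safety results.",
# ])
```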

Current Constraints and Future Directions for Collaborative Intelligence

The collaborative intelligence applications identified in this review were restricted in terms of the range of environments in which they could operate and the variety of tasks that they could perform. Whereas human collaborators often switch between collaborating in the real world (e.g., to scope requirements in an initial meeting) and collaborating virtually (when they write a document together or share information and advice via online channels), the current collaborative intelligence applications do not have this agility. Collaborative intelligence applications can work with humans in a virtual environment (e.g., generating digital content) or in the real world where they take a cyber-physical form. However, while the cyber-physical forms of collaborative intelligence often communicate with their human collaborator via online channels, they are generally not capable of collaborating with them in this virtual environment. Whereas human workers will often collaborate over multiple stages of a project or task, the collaborative intelligence applications were limited to discrete tasks and stages of production or decision-making. Replicating this uniquely human ability, the capacity to transfer capability from one domain to a related domain, will be a key challenge in the further development of collaborative intelligence applications.

The review also revealed hundreds of applications that met some, but not all, of the criteria for collaborative intelligence. These findings suggest that future applications of collaborative intelligence will emerge in finance, investment, and insurance; defense and security operations; and scientific research. We expect developments in the field of collaborative intelligence to accelerate rapidly as a result of the availability of LLMs. The pace of LLM adoption across tasks and fields is increasing the visibility of the potential benefits of AI and our willingness to engage in collaborative work processes with AI. Finally, the interdependent and interactive nature of collaborative AI applications presents an imperative to examine the ethical aspects of the responsible implementation and design of such applications (Abeywickrama and Ramchurn Citation2024; Seeber et al. Citation2020).

Limitations

One limitation of this study is that it is based on published academic and gray literature written in English. There are likely to be more collaborative intelligence applications under development that are not captured in the literature because they are not yet ready for commercialization and represent valuable intellectual property. Patent databases may offer another fruitful dataset for identifying additional examples of collaborative intelligence.

A second issue that affected the review was that detailed information was not available for all of the potential collaborative intelligence applications identified. Several AI applications potentially met the criteria for collaborative intelligence but were not described in sufficient detail to be assessed fully. Thus, while this paper provides the important foundations for documenting current applications of collaborative intelligence, it may not capture the full range of applications that currently exist.

Conclusion

The dominance (up to now) of AI applications that automate work can be attributed to the fact that collaborative AI requires advanced capabilities such as the ability to model the human view of the world and engage in dialogue with a human collaborator. The latest wave of AI is developing these capabilities (Stowers et al. Citation2021). The complementarities of human intelligence and AI and the range of benefits already associated with collaborative intelligence applications suggest that collaborations between human intelligence and AI can catalyze a new wave of innovation that enables more efficient, safer, sustainable and enjoyable work and lives (Nahavandi Citation2019).

Acknowledgments

We acknowledge the support of the CSIRO Collaborative Intelligence (CINTEL) Future Science Platform.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was funded by the Commonwealth Scientific and Industrial Research Organisation’s Collaborative Intelligence Future Science Platform.

References

  • Abeywickrama, D. B., and S. D. Ramchurn. 2024. Engineering responsible and explainable models in human-agent collectives. Applied Artificial Intelligence 38 (1):2282834. doi:10.1080/08839514.2023.2282834.
  • Agrawal, A., J. Cleland-Huang, and J. P. Steghöfer. 2020. Model-driven requirements for humans-on-the-loop multi-UAV missions. Paper presented at the 2020 IEEE Tenth International Model-Driven Requirements Engineering (MoDRE). Aug 31-31. p.1-10, Article 9233025.
  • Agrawal, A., J. S. Gans, and A. Goldfarb. 2019. Exploring the impact of artificial intelligence: Prediction versus judgment. Information Economics and Policy 47:1–23. doi:10.1016/j.infoecopol.2019.05.001.
  • Akata, Z., D. Balliet, M. D. Rijke, F. Dignum, V. Dignum, G. Eiben, D. Grossi, K. Hindriks, H. Hoos, and M. Welling. 2020. A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer 53 (8):18–28. doi:10.1109/MC.2020.2996587.
  • Arksey, H., and L. O’Malley. 2005. Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology 8 (1):19–32. doi:10.1080/1364557032000119616.
  • Asfour, T., L. Kaul, M. Wächter, S. Ottenhaus, P. Weiner, S. Rader, and H. Haubert 2018. ARMAR-6: A collaborative humanoid robot for industrial environments. Paper presented at the 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids). Beijing, China, Nov 6-9.
  • Avdeeff, M. 2019. Artificial intelligence & popular music: SKYGGE, flow machines, and the audio uncanny valley. Arts 8 (4):130. doi:10.3390/arts8040130.
  • Battina, D. S. 2018. The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. International Journal of Innovations in Engineering Research and Technology 5 (7): 40–47.
  • Bettoni, A., E. Montini, M. Righi, V. Villani, R. Tsvetanov, S. Borgia, C. Secchi, and E. Carpanzano. 2020. Mutualistic and adaptive human-machine collaboration based on machine learning in an injection moulding manufacturing line. Procedia CIRP 93:395–400. doi:10.1016/j.procir.2020.04.119.
  • Billman, D., G. Convertino, J. Shrager, P. Pirolli, and J. Massar. 2006. Collaborative intelligence analysis with CACHE and its effects on information gathering and cognitive bias. Paper presented at the Human Computer Interaction Consortium Workshop: Fraser, Colorado, USA.
  • Birhane, A., P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao. 2022. The values encoded in machine learning research. Paper presented at the Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea. doi:10.1145/3531146.3533083.
  • Brynjolfsson, E., D. Li, and L. R. Raymond. 2023. Generative AI at work. National Bureau of Economic Research Working Paper Series 31161: 1–65.
  • Cabitza, F., A. Campagner, and C. Simone. 2021. The need to move away from agential-AI: Empirical investigations, useful concepts and open issues. International Journal of Human-Computer Studies 155 (C):11. doi:10.1016/j.ijhcs.2021.102696.
  • Cesta, A., A. Orlandini, and A. Umbrico (2018, May 16–18). Fostering robust human-robot collaboration through AI task planning. Paper presented at the 51st CIRP Conference on Manufacturing Systems (CIRP CMS), Stockholm, SWEDEN.
  • Cienki, A. 2015. Insights into coordination, collaboration, and cooperation from the behavioral and cognitive sciences: A commentary. Interaction Studies 16 (3):553–60. doi:10.1075/is.16.3.09cie.
  • Coenen, A., L. Davis, D. Ippolito, E. Reif, and A. Yuan. 2021. Wordcraft: A human-AI collaborative editor for story writing, Ithaca: Cornell University Library. arXiv.org.
  • Daugherty, P. R., and H. J. Wilson. 2018. Human + machine: Reimagining work in the age of AI. Brighton, USA: Harvard Business Review Press.
  • Dell’acqua, F., E. McFowland, E. R. Mollick, H. Lifshitz-Assaf, K. Kellogg, S. Rajendran, and K. R. Lakhani. 2023. Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013. doi:10.2139/ssrn.4573321.
  • Dellermann, D., A. Calma, N. Lipusch, T. Weber, S. Weigel, and P. Ebel. 2019. The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. Proceedings of the Annual Hawaii International Conference on System Sciences, Vol. 2019, p.274–283, Hawaii, USA.
  • De Luca, G. 2021. The development of machine intelligence in a computational universe. Technology in Society 65:101553. doi:10.1016/j.techsoc.2021.101553.
  • De Nul, L., M. Breque, and A. Petridis. 2021. Industry 5.0: Towards a sustainable, human-centric and resilient European industry. Brussels, Belgium: European Commission, Directorate-General for Research and Innovation, Publications Office.
  • Diao, J. A., R. J. Chen, and J. C. Kvedar. 2021. Efficient cellular annotation of histopathology slides with real-time AI augmentation. NPJ Digital Medicine 4 (1). doi:10.1038/s41746-021-00534-0.
  • Dimitropoulos, N., T. Togias, G. Michalos, and S. Makris. 2020. Operator support in human–robot collaborative environments using AI enhanced wearable devices. Procedia CIRP 97:464–69. doi:10.1016/j.procir.2020.07.006.
  • Dimitropoulos, N., T. Togias, N. Zacharaki, G. Michalos, and S. Makris. 2021. Seamless human–robot collaborative assembly using artificial intelligence and wearable devices. Applied Sciences 11 (12):5699. doi:10.3390/app11125699.
  • Dubey, A., K. Abhinav, S. Jain, V. Arora, and A. Puttaveerana. 2020. HACO: A framework for developing human-AI teaming. ACM International Conference Proceeding Series, 2020, Article 3385044, Jabalpur, India, doi:10.1145/3385032.3385044.
  • Epstein, S. L. 2015. Wanted: Collaborative intelligence. Artificial Intelligence 221:36–45. doi:10.1016/j.artint.2014.12.006.
  • Feldman, S. 2017. Co-creation: Human and AI collaboration in creative expression. Paper presented at the Proceedings of the conference on Electronic Visualisation and the Arts, London, United Kingdom. doi:10.14236/ewic/EVA2017.84.
  • Festo. 2018. Bionicworkplace: Human-robot collaboration with artificial intelligence. Paper presented at the Proceedings of 2018 International Conference on Hydraulics and Pneumatics, Romania.
  • Gao, X., L. Yan, G. Wang, and C. Gerada. 2021. Hybrid recurrent neural network architecture-based intention recognition for human-robot collaboration. IEEE Transactions on Cybernetics, 1–9. doi:10.1109/TCYB.2021.3106543.
  • Goldberg, S., S. Belyaev, and V. Sluchak. 2021. Dr. Watson type artificial intellect (AI) systems, Ithaca: Cornell University Library. arXiv.org.
  • Goldfarb, A., and J. Lindsay. 2020. Artificial Intelligence in War: Human Judgment As an Organizational Strength and a Strategic Liability. Washington D.C., USA: https://www.brookings.edu/wp-content/uploads/2020/11/fp_20201130_artificial_intelligence_in_war.pdf.
  • Hackman, J. R., and G. R. Oldham. 1975. Development of the job diagnostic survey. Journal of Applied Psychology 60 (2):159–70. doi:10.1037/h0076546.
  • Hart, S. N., N. G. Hoffman, P. Gershkovich, C. Christenson, D. S. McClintock, L. J. Miller, R. Jackups, V. Azimi, N. Spies, and V. Brodsky. 2023. Organizational preparedness for the use of large language models in pathology informatics. Journal of Pathology Informatics 14:100338. doi:10.1016/j.jpi.2023.100338.
  • Irons, J., C. Mason, P. Cooper, S. Sidra, A. Reeson, and C. Paris. 2023. Exploring the impacts of ChatGPT on future scientific work. doi:10.31235/osf.io/j2u9x.
  • Jarrahi, M. H. 2018. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons 61 (4):577–86. doi:10.1016/j.bushor.2018.03.007.
  • Johnson, M., J. M. Bradshaw, P. J. Feltovich, C. M. Jonker, M. B. V. Riemsdijk, and M. Sierhuis. 2014. Coactive design: designing support for interdependence in joint activity. Journal of Human-Robot Interaction 3 (1):43–69. doi:10.5898/JHRI.3.1.Johnson.
  • Kärcher, N., M. Moerdijk, S. Schrof, C. Trapp, M. Purucker, M. Baltes, and R. Neumann. 2017. BionicCobot: Sensitive Helper for Human-Robot Collaboration. Germany: https://www.festo.com/PDF_Flip/corp/Festo_BionicCobot/en/files/assets/common/downloads/Festo_BionicCobot_en.pdf.
  • Kasparov, G. 2010. The Chess Master and the Computer. USA: https://www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/.
  • Kolbeinsson, A., E. Lagerstedt, and J. Lindblom. 2019. Foundation for a classification of collaboration levels for human-robot cooperation in manufacturing. Production and Manufacturing Research 7 (1):448–71. doi:10.1080/21693277.2019.1645628.
  • Langer, M., and R. N. Landers. 2021. The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Computers in Human Behavior 123:106878. doi:10.1016/j.chb.2021.106878.
  • Liberati, A., D. G. Altman, J. Tetzlaff, C. Mulrow, P. C. Gøtzsche, J. P. A. Ioannidis, M. Clarke, P. J. Devereaux, J. Kleijnen, and D. Moher. 2009. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: Explanation and elaboration. BMJ 339:b2700. doi:10.1136/bmj.b2700.
  • Lin, Y. Y., J. H. Guo, Y. Chen, C. Yao, F. T. Ying, and Acm. 2020. It is your turn: Collaborative ideation with a Co-creative robot through sketch. Paper presented at the CHI Conference on Human Factors in Computing Systems (CHI), Electr Network. Honolulu, HI, USA, Apr 25-30.
  • Maddikunta, P. K. R., Q.-V. Pham, N. Deepa, K. Dev, T. R. Gadekallu, R. Ruby, and M. Liyanage. 2022. Industry 5.0: A survey on enabling technologies and potential applications. Journal of Industrial Information Integration 26:100257. doi:10.1016/j.jii.2021.100257.
  • Madni, A. M., and C. C. Madni. 2018. Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems 6 (4):44. doi:10.3390/systems6040044.
  • Mason, C. M., M. Ayre, and S. M. Burns. 2022. Implementing industry 4.0 in Australia: Insights from advanced Australian manufacturers. Journal of Open Innovation: Technology, Market, and Complexity 8 (1):53. doi:10.3390/joitmc8010053.
  • McDermott, P., C. Dominguez, N. Kasdaglis, M. Ryan, I. Trahan, and A. Nelson. 2018. Human-Machine Teaming Systems Engineering Guide. USA: https://apps.dtic.mil/sti/pdfs/AD1108020.pdf.
  • Nahavandi, S. 2019. Industry 5.0—A human-centric solution. Sustainability 11 (16):4371. doi:10.3390/su11164371.
  • Noy, S., and W. Zhang. 2023. Experimental evidence on the productivity effects of generative artificial intelligence. doi:10.2139/ssrn.4375283.
  • O’Brien, M. 2017. Nightmare Machine Writes Bone-Chilling Tales. Halifax, N.S.: https://www.proquest.com/docview/1958636667?accountid=26957.
  • Oh, C., J. Song, J. Choi, S. Kim, S. Lee, and B. Suh. 2018. I lead, you help but only with enough details: Understanding user experience of Co-creation with artificial intelligence. Paper presented at the Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal QC, Canada. doi:10.1145/3173574.3174223
  • Ostheimer, J., S. Chowdhury, and S. Iqbal. 2021. An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles. Technology in Society 66:101647. doi:10.1016/j.techsoc.2021.101647.
  • Pachet, F., P. Roy, and B. Carré. 2021. Assisted music creation with flow machines: Towards new categories of new. In Handbook of artificial intelligence for music: Foundations, advanced approaches, and developments for creativity, ed. E. R. Miranda, 485–520. Cham: Springer International Publishing.
  • Parker, S. K., and G. Grote. 2022. Automation, algorithms, and beyond: Why work design matters more than ever in a digital world. Applied Psychology. doi:10.1111/apps.12241.
  • Poser, M., and E. A. Bittner (2020). Hybrid teamwork: Consideration of teamwork concepts to reach naturalistic interaction between Humans and conversational agents. Paper presented at the Wirtschaftsinformatik (Zentrale Tracks).
  • Rachatasumrit, N., G. Ramos, J. Suh, R. Ng, and C. Meek. 2021. ForSense: Accelerating online research through sensemaking integration and machine research support. International Conference on Intelligent User Interfaces, Proceedings IUI, 2021, p.608-618, College Station, TX, USA.
  • Saffiotti, A., P. Fogel, P. Knudsen, L. de Miranda, and O. Thörn. 2020. On human-AI collaboration in artistic performance. CEUR Workshop Proceedings, Vol.2659, p.38-43, 17-20 June 2020, St. Petersburg, Russia.
  • Sartori, L., and A. Theodorou. 2022. A sociotechnical perspective for the future of AI: Narratives, inequalities, and human control. Ethics and Information Technology 24 (1):4. doi:10.1007/s10676-022-09624-3.
  • Schuh, G., R. Anderl, R. Dumitrescu, A. Krüger, and M. T. Hompel. 2020. Industrie 4.0 Maturity Index: Managing the Digital Transformation of Companies – Update 2020. Germany: https://en.acatech.de/publication/industrie-4-0-maturity-index-update-2020/.
  • Seeber, I., E. Bittner, R. O. Briggs, T. de Vreede, G.-J. de Vreede, A. Elkins, A. B. Merz, S. Oeste-Reiß, N. Randrup, G. Schwabe, and M. Söllner. 2020. Machines as teammates: A research agenda on AI in team collaboration. Information & Management 57 (2):103174. doi:10.1016/j.im.2019.103174.
  • Shneiderman, B. 2020. Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction 12 (3):109–24. doi:10.17705/1thci.00131.
  • Shook, E., and M. Knickrehm. 2018. Reworking the Revolution. https://www.accenture.com/_acnmedia/pdf-69/accenture-reworking-the-revolution-jan-2018-pov.pdf.
  • Sindhwani, R., S. Afridi, A. Kumar, A. Banaitis, S. Luthra, and P. L. Singh. 2022. Can industry 5.0 revolutionize the wave of resilience and social value creation? A multi-criteria framework to analyze enablers. Technology in Society 68:101887. doi:10.1016/j.techsoc.2022.101887.
  • Stowers, K., L. L. Brady, C. MacLellan, R. Wohleber, and E. Salas. 2021. Improving teamwork competencies in human-machine teams: Perspectives from team science. Frontiers in Psychology 12:12. doi:10.3389/fpsyg.2021.590290.
  • Szász, L., K. Demeter, B.-G. Rácz, and D. Losonci. 2021. Industry 4.0: A review and analysis of contingency and performance effects. Journal of Manufacturing Technology Management 32 (3):667–94. doi:10.1108/JMTM-10-2019-0371.
  • Thompson, C. 2013. Smarter than you think: How technology is changing our minds for the better. East Rutherford: Penguin Publishing Group.
  • Thörn, O., P. Knudsen, and A. Saffiotti (2020, 31 Aug). Human-robot artistic Co-creation: A study in improvised robot dance. Paper presented at the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy.
  • Traumer, F., S. Oeste-Reiß, and J. M. Leimeister. 2017. Towards a future Reallocation of Work between Humans and machines – taxonomy of tasks and interaction types in the context of machine learning. Paper presented at the Thirty Eighth International Conference on Information Systems, South Korea.
  • Urban Davis, J., F. Anderson, M. Stroetzel, T. Grossman, and G. Fitzmaurice. 2021. Designing Co-creative AI for virtual environments. In Creativity and Cognition (C&C ’21), June 22, 23, 2021, Virtual Event, Italy. doi:10.1145/3450741.3465260.
  • van der Wal, D., I. Jhun, I. Laklouk, J. Nirschl, L. Richer, R. Rojansky, A. Esteva, J. Wheeler, J. Sander, F. Feng, and O. Mohamad. 2021. Biological data annotation via a human-augmenting AI-based labeling system. NPJ Digital Medicine 4 (1):145. doi:10.1038/s41746-021-00520-6.
  • Veile, J. W., D. Kiel, J. M. Müller, and K.-I. Voigt. 2020. Lessons learned from industry 4.0 implementation in the German manufacturing industry. Journal of Manufacturing Technology Management 31 (5):977–97. doi:10.1108/JMTM-08-2018-0270.
  • Wang, D., E. Churchill, P. Maes, X. Fan, B. Shneiderman, Y. Shi, and Q. Wang. 2020. From human-human collaboration to human-AI collaboration: Designing AI systems that can work together with people. Paper presented at the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA. doi:10.1145/3334480.3381069.
  • Wolf, F. D., and R. M. Stock-Homburg. 2023. How and when can robots be team members? Three decades of research on human–robot teams. Group & Organization Management 48 (6):1666–1744. doi:10.1177/10596011221076636.
  • Yanardag, P., M. Cebrian, and I. Rahwan. 2021. Shelley: A crowd-sourced collaborative horror Writer. In Proceedings of the 13th Conference on Creativity and Cognition (C&C ‘21). Association for Computing Machinery, New York, NY, USA, Article 11, 1–8. doi:10.1145/3450741.3465251.
  • Zhang, C., C. Yao, J. Liu, Z. Zhou, W. Zhang, L. Liu, F. Ying, Y. Zhao, and G. Wang. 2021. StoryDrawer: A Co-creative agent supporting children’s storytelling through collaborative drawing. Conference on Human Factors in Computing Systems - Proceedings. CHI ‘21: CHI Conference on Human Factors in Computing Systems Yokohama Japan. doi:10.1145/3411763.3451785.