Labour and Industry
A journal of the social and economic relations of work
Volume 33, 2023 - Issue 3
Research Article

Human-AI interaction in remanufacturing: exploring shop floor workers’ behavioural patterns within a specific human-AI system

Thomas Süße, Maria Kobert & Caroline Kries
Pages 344-363 | Received 25 Oct 2022, Accepted 20 Aug 2023, Published online: 29 Aug 2023

ABSTRACT

Artificial intelligence (AI) is increasingly discussed as an innovation enabler for the enhancement of circular economy (CE) approaches in industries. The further deployment of intelligent technologies is considered to be particularly promising in remanufacturing, which can be regarded as an implementation approach of CE at the firm level. AI’s potential to contribute to advancements in remanufacturing can be traced back to these modern technologies’ extended capacity to support and assist humans during rather manual processes, which are more common in remanufacturing than in traditional linear production. As a result, we argue that in future application scenarios, humans are going to interact more often with AI agents that may direct and assist humans’ behaviour and decision-making processes. We assume that a better understanding of the specific dynamics and novel aspects of this kind of newly emerging human-AI system is a key prerequisite for sustainable process innovation, particularly in remanufacturing organisations. However, empirically based contributions about humans’ behavioural changes in interaction with AI agents have so far been rare and limited, especially in the field of remanufacturing and CE. In this article, we seek to address this gap in research by exploring the interaction between shop floor workers and an AI agent, based on a case study research approach at a plant of a German automotive supplier that remanufactures used parts. We conducted semi-structured interviews with the shop floor workers who are involved in a joint decision-making task with an AI agent. We interpret the findings of our qualitative data in the light of related research in the fields of AI in CE, AI implementation in organisations and human-AI interaction. In summary, our analysis reveals 13 behavioural patterns that shop floor workers reported with reference to their interaction with the AI agent. The behavioural patterns are systemised into the cognitive, emotional and social dimensions of a competence framework. These findings contribute to a more specific understanding of how humans interact with AI agents at work, while considering the specific context variables of the interaction paradigm and the AI agent’s role during joint decision-making in a human-AI system. Implications for the literature in the field of human-AI interaction as well as AI implementation in organisations, with a particular focus on CE, are discussed.

Introduction

Remanufacturing and a circular economy (CE) are of great economic and societal importance, and the adoption of AI technologies in this field is considered to be very promising (Agrawal et al. 2021; Sankaran 2019; Yeh et al. 2021). The utilisation of these modern technologies offers new opportunities beyond the automation of existing processes, particularly in remanufacturing, where value creation is based more on manual processes. The potential of humans and AI agents working together as a team is also seen to be very promising in this field. Collaborative robots (cobots), for example, can assist humans in disassembling (Li et al. 2018), or AI agents based on artificial neural networks can support decisions concerning the visual inspection of used materials (Schlüter et al. 2021). However, besides the potentials, there are also challenges and reservations on the part of society and academia regarding the unpredictability and uncontrollability of AI systems, for example ethical issues, inadequate handling of sensitive data or insufficient understanding on the part of humans of how these tools work or decide (Buiten 2019; Yudkowsky 2008). Recently, with the rising adoption of increasingly smart and human-like tools such as ChatGPT, more demands have emerged for adequate regulation and surveillance of AI developments (Margetts 2022; Samuel 2021). The argument is that unregulated AI may produce various social, economic and political harms, for example damaged competition, consumer privacy and consumer choice, excessively automated work, the endangerment of people’s jobs, increasing inequality and damaged political discourse (Acemoglu 2021; Borenstein and Howard 2021; Talboy and Fuller 2023).

However, since human-AI systems can contribute to the further innovation of work processes in traditional production and remanufacturing, the enhancement of such systems is becoming a key element of companies’ core competencies (Sankaran 2019). While much research and discussion focuses on the way in which technologies can be designed and developed appropriately to fulfil the increasingly complex demands of sustainable businesses, knowledge about the successful implementation and introduction of these technologies in working environments is still rather fragmented and diverse. This is especially so when it comes to the question of how employees, such as shop floor workers, actually interact with AI-based agents in these newly emerging human-AI systems (Anton et al. 2020; Hamm and Klesel 2021; Schelble et al. 2021; Seiffer et al. 2021).

This article seeks to contribute to this gap in knowledge about human actors’ behavioural patterns during their interaction with an AI-based agent in remanufacturing. We conducted a single case study to address this research question. Several interviews with shop floor employees were conducted at a specific plant of a large remanufacturing company in Germany in the second half of 2021. The company has a long tradition as a supplier in the automotive industry. This plant’s business approach is focused entirely on remanufacturing used products in order to contribute to waste reduction and foster the reuse of limited resources. The implementation of an AI-based agent was initiated about two years before our study to innovate internal processes. The AI-based agent is located in the quality control department of the company and performs a specific visual recognition task in close interaction with the employees. More precisely, the human-AI system performs a shared decision-making process about whether used material should be reused or disposed of. One peculiarity of the case is that initially the AI agent was only capable of performing basic parts of the task and has thus been expanding its capabilities over time with the support of its human partners.

Our empirical findings show a set of 13 behavioural patterns, which we identified by analysing the interview material. The patterns identified are either cognitive, emotional or social in nature. They range from developing an individual understanding of how AI works in general, through taking one’s own initiative for the improvement of the AI agent, up to being patient with the new artificial partner. Utilising these findings, we contribute to a better understanding of how humans interact with modern AI-based agents during collaborative decision-making processes in organisations.

Background and related work

AI as an enabler for a circular economy

The concept of a CE has gained increasing attention over the last few decades, as it is considered a promising approach that offers new solutions for ecological challenges and for the sustainability of today’s and especially tomorrow’s industries and societies. We broadly understand a CE as a holistic approach that focuses on the reduction of resource and energy consumption in order to decrease the environmental impact of business and consumption. The key principle is seen in the creation and management ‘of circular loops of materials, energy, and waste flows’ (Masi et al. 2018, 543). This is a fundamental shift away from a linear logic of value creation towards a circular concept. However, it also further increases the complexity of business activities because, for example, the level of dynamic interaction and interconnection has to be enhanced to gain more knowledge about how resources are actually used, consumed or wasted. The CE is an overarching concept that demands great support and initiative from business, society and politics. Thus, the initiatives and transformational challenges regarding a CE relate mainly to three different levels: the micro or firm level, the meso or network level and the macro level, including policy-making and further regulations (see e.g. Grafström and Aasma 2021; Yuan et al. 2008). In this article, we refer to the microlevel of CE-related initiatives, which, among others, includes firms’ specific practices of remanufacturing (Masi et al. 2018). The case company where we conducted our research had already adjusted its business model towards a remanufacturing approach. Research has revealed that traditional firms in particular have to cope with a great number of challenging barriers when transforming a business towards a CE approach such as remanufacturing. Grafström and Aasma (2021) argue that, among other challenges, there are critical technological barriers at the company level, for example, a lack of the appropriate information technology systems required for data collection in the context of increasingly complex interconnections (see also Salmenperä et al. 2021). The lack of appropriate governmental regulation and surveillance of AI development to prevent potential harms is also discussed in this context (Acemoglu 2021; Margetts 2022; Talboy and Fuller 2023). The critical demand for more enhanced digital technologies, and particularly for AI as a crucial enabler for a CE, has been emphasised by several contributions (Kerin and Pham 2019; Ramadoss et al. 2018; Sankaran 2019). The adoption of AI techniques, such as machine learning or big data analytics, is seen to have a significant impact on innovation processes towards CE at the firm level and beyond (Agrawal et al. 2021).

The deployment of smart technologies is considered to be particularly promising in the field of remanufacturing, since operations in this field consist mostly of manual processes; automation through AI is therefore seen as a crucial enabler for remanufacturing as a whole (Blömeke et al. 2020; Kerin and Pham 2019). Accordingly, there are several ideas on how to utilise smart technologies in remanufacturing. Regarding the effective disposal of end-of-life products, for example, artificial neural networks can be used to optimise disassembly sequences (Li et al. 2019) and cobots to support humans with the actual disassembly (Li et al. 2018). Furthermore, the deployment of visual recognition appears promising for the quality control of used parts, for example, to detect defects, since the process of inspection can be made more standardised and objective and workers can receive support from AI agents as a second opinion, following the four-eyes principle (Schlüter et al. 2021; Tsimba et al. 2021).

Due to these and other potential benefits AI-based systems may have for CE in general, and for remanufacturing approaches at the firm level in particular, an increasing number of organisations are considering implementing this technology to support and enable efficient remanufacturing processes. However, this recent perspective on the economic and ecological potentials of AI often lacks a profound analysis of, and reflection on, how the introduction of modern technology changes the organisation as such and potentially affects its employees, even though the understanding of the interaction between humans and AI agents is expected to become even more relevant for remanufacturing-oriented businesses (Hamm and Klesel 2021).

Implementation of AI in organisations

From an organisational management perspective, AI can be regarded as a new technological generation that is capable of gathering external information, interpreting this information, generating valuable results and, finally, evaluating its own actions and self-improving its own decision system in order to achieve specific goals (Ferràs-Hernández 2018; Glikson and Woolley 2020). Based on this capability-oriented perspective, AI is seen to offer great opportunities for business organisations in general and for more sustainable value creation concepts, such as CE, in particular. This is one reason for the increasing investment in AI development, dissemination and implementation in organisations (Agrawal et al. 2021). However, to date, less than 1% of companies in Germany have implemented AI successfully in such a way that employees interact with this new technology to perform actual value creation processes (e.g. Giering 2021). The reasons for this situation are manifold (Hamm and Klesel 2021; Pumplun et al. 2019). On the one hand, the efforts and risks of AI development and implementation are still considered high compared to the potential benefits to be gained (Ahlborn et al. 2019; Rammer et al. 2020). On the other hand, the rather dominant view of AI from an economic or technological perspective has led to situations in which managers and decision-makers tend to lose sight of the critical obstacles and challenges during the introduction and implementation of this new technology in organisations (Massmann and Hofstetter 2020). From the employees’ perspective, there are obstacles such as the fear of losing one’s job, of being directed and fully monitored at work, or of not being well prepared or educated for the interaction with AI agents. In addition, employees may also respond with distrust regarding AI’s actual capabilities, decision quality or security (Zicari et al. 2021; Zweig 2019). Furthermore, in many cases, AI systems are not capable of fulfilling the high expectations of managers and decision-makers in organisations (Brynjolfsson and Mitchell 2017).

However, despite these and other challenges and obstacles, there is a general consensus among researchers and practitioners that AI will play an increasingly relevant role within organisations’ future value creation processes (Brock and von Wangenheim 2019). This affects organisations across all levels, also changing employees’ individual work processes (Seeber et al. 2018; Wilson and Daugherty 2018). While the use of AI for rationalisation and automation is often accompanied by the replacement of employees by modern technology, there is an increasing number of examples where humans are becoming counterparts or collaborators of AI agents at work (Wang et al. 2019; Wanner et al. 2019; Wilson and Daugherty 2018). This new form of interaction between employees and AI contributes to the emergence of human-AI systems in organisations. Even if the use of AI sometimes gives the impression that humans are becoming obsolete, many AI-related tasks, such as machine learning, still need the human as an interaction partner, for example for feature engineering or for preparing or labelling training data (Dellermann et al. 2019; Seeber et al. 2018). These systems of socio-technological interaction between humans and AI need further research from different perspectives in order to be understood in more detail.

Human-AI interaction

Scientific research in the field of human-AI interaction is still rather novel, explorative and diverse across several disciplines. This is shown by the fast emergence of new concepts and constructs in the recent scientific literature, such as human-AI hybrids, human-AI collaboration, human-AI systems or human-AI partnerships, to mention just a few. These concepts are often used rather interchangeably due to a lack of appropriate theoretical frameworks and underpinnings from empirical research in organisations. In this article, we refer to Schelble et al. (2021), who define a human-AI system as an ensemble in which at least one human and at least one AI agent collaborate with each other, for example, by making decisions together. Regarding the interaction within this human-AI system, Rzepka and Berger refer to the concept of a user’s interaction with AI as ‘the actual use of the system by the user, as well as the cognitive evaluations that precede the user’s behaviour’ (Rzepka and Berger 2018, 4). In our study, we use these definitions to investigate behavioural patterns regarding the interaction between humans and AI agents within a human-AI system from the perspective of the human actors.

Recent research in the field of human-AI interaction has revealed a number of influencing factors affecting and guiding human actors’ behaviour during their interaction with AI. The factors identified so far are very diverse. They range from users’ characteristics, such as demographics, personality and experience, to users’ perceptions of different characteristics of the AI system or of the way of interacting with it (Rzepka and Berger 2018). Starting from the task the AI system has to fulfil, Hinsen et al. (2022) identify five different types of human-AI interaction: guardian angel, pixie, informant, colleague and best friend. These types can be described along various dimensions: there might be different levels of transparency of the interaction; the direction of the action can go from the AI system to the human, vice versa or in both directions; the impulse for initiating a new interaction ranges from targeted to playful; and the results can be informing, assisting, advising or experiencing. Mapping these types of interaction against the freedom of action and the reciprocal engagement yields three different roles of AI: AI as an automaton, a versatile helper and a partner. Similar to this classification, Bittner et al. (2019) distinguish three different roles of conversational intelligent agents: a facilitator, which guides users to a goal by executing tasks; a peer, as a partner for an individual; and an expert, which also satisfies spontaneous and creative tasks such as chitchat. Another systematisation is introduced by van Berkel et al. (2021), who point out three interaction paradigms of human-AI interaction, which we found appropriate as a conceptual framing for our case study analysis. According to them, the interaction can follow a dialogue, commentary or prescription paradigm, depending on the initiator of the interaction (AI, human or environment), the trigger for the AI input, the resulting AI response and the user’s response. They outline that in dialogue-oriented interaction, the user provides an input to an AI, which gives a response to which the user finally reacts. In commentary-oriented interaction, the user instead continuously provides input to the AI and can either react to or ignore the suggestions given by the AI. Finally, van Berkel et al. refer to prescription-oriented interaction as a process in which the input for AI-based decisions is received not from a user but from another technical agent. The AI’s decisions may then create awareness in the user and could result in user reactions.
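To make this taxonomy easier to grasp, the following minimal Python sketch encodes the three paradigms along the dimensions described above. The attribute names and short descriptions are our own paraphrase of van Berkel et al.’s dimensions, not the authors’ formal notation.

from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionParadigm:
    """One of van Berkel et al.'s (2021) interaction paradigms, described by
    who initiates, what triggers the AI input and how each side responds."""
    name: str
    initiator: str      # who starts the interaction: user, AI or environment
    ai_trigger: str     # what triggers the AI input
    ai_response: str    # the resulting AI response
    user_response: str  # the user's reaction to the AI response

PARADIGMS = [
    InteractionParadigm(
        "dialogue", initiator="user",
        ai_trigger="an explicit user input",
        ai_response="a response to that input",
        user_response="reacts to the AI's response"),
    InteractionParadigm(
        "commentary", initiator="user",
        ai_trigger="continuous input from the ongoing task",
        ai_response="suggestions alongside the task",
        user_response="acts on or ignores the suggestions"),
    InteractionParadigm(
        "prescription", initiator="environment",
        ai_trigger="input from another technical agent, not the user",
        ai_response="a decision that may create awareness in the user",
        user_response="may react to the AI's decision"),
]

for p in PARADIGMS:
    print(f"{p.name}: initiated by {p.initiator}, triggered by {p.ai_trigger}")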

Referring to the expectations of the interaction with an AI agent, Papachristos et al. (2021) determine four roles of AI: mirror, assistant, guide and oracle. These roles differ according to who makes the decision or performs the task, even when the AI’s suggestion or the human’s knowledge is uncertain. Research in the field of medical decision-making has revealed specific human tasks that are becoming more relevant in human-AI interaction: verifying the AI’s decision suggestions, improving the AI system, learning from the AI and taking responsibility for the joint decisions finally taken (Waefler and Schmid 2021). In summary, the wide variance of research in the area of human-AI interaction shows the high dimensionality and complexity of this phenomenon. The findings have in common that human-AI interaction depends on the design of the AI agent and the task it has to perform. Consequently, specific competencies may also be required of the human actor participating (Markauskaite et al. 2022; Süße et al. 2021).

According to Boyatzis (2008), human actors’ competencies can be defined as a set of behaviours and intents that enable a person to cope with a certain situation. They are a behavioural approach to cognitive, emotional and social intelligence. Hence, competencies can be clustered into a cognitive, an emotional and a social dimension. Based on this structure, Süße et al. (2021) suggest a framework of nine AI-related competencies that ranges from understanding and interpreting AI impulses to negotiating one’s own recovery phases with AI agents. This understanding of competencies is in line with Bassellier et al. (2001), who argue that competencies are ‘the potential that leads to an effective behaviour’ (p. 162). Thus, a human’s behavioural patterns during interaction with an AI agent can reflect his or her competencies, together with a certain knowledge representation and understanding of AI. While the recent literature has already outlined the importance of human actors’ competencies as critical success factors for the adoption and implementation of AI in organisations (Hamm and Klesel 2021; Pumplun et al. 2019), there is a lack of empirical research in this field when it comes to specific groups of employees or the specific context of remanufacturing. Our case study begins to fill this research gap by focusing on humans’ behavioural patterns in a human-AI system.

Case study research

We performed a case study at a branch of a large automotive supplier in Germany. The company’s business model has been focused on remanufacturing used car clutches since the 1970s. More precisely, the company takes back used clutches, disassembles them and reuses appropriate parts to manufacture products that can continue to be utilised. In this way, around 95% of used materials can be reused.

We examined a distinct area of the remanufacturing process, namely the quality control of used compression springs, which are parts of the clutches. Shop floor workers had performed the visual quality control of the springs manually until about two years before our study. The company then introduced an AI agent, which has since supported the employees in carrying out this task. By introducing the AI agent, the company is mainly pursuing the goal of increasing quality and productivity. Because many of the employees currently working in the springs’ quality control are older, another goal is to retain the knowledge of employees who will soon leave the company.

The peculiarity of the case is twofold. On the one hand, the AI system was introduced in the company with very ‘basic capabilities,’ i.e. the system was only able to classify simple defects of the springs. In the course of its use in the company, the AI agent has learned to recognise more complicated defects and cases. It is therefore improving and evolving over time. On the other hand, it is one of the very few cases in Germany where employees are actually interacting with an AI-based agent in a production environment. Recent studies revealed that less than 1% of German companies apply AI in production environments, for several reasons (see Giering 2021).

The human-AI system

The participants of the in-depth study are six employees who are directly or indirectly involved with the AI agent. Three of them are shop floor workers who collaborate directly with the AI agent on a daily basis. One is the supervisor of the company’s branch to which the shop floor workers belong, and one is their direct foreman. Additionally, we interviewed the quality manager, who is responsible for defining the quality criteria for the springs. All participants are male, with an average age of 45 years. Some of the workers were instructed by the AI agent’s developer before they started working with the machine.

The AI agent is based on a deep neural network and is able to learn and perform a visual recognition task. It was put into operation with the capability of handling simple and obvious cases of the task independently and handing over complicated cases to the employees. The AI agent is trained repeatedly by providing an appropriate amount of training data. Thus, the number of cases handed over to the employees decreases over time because the AI agent is continuously improving. The AI agent has the physical appearance of a machine, consisting of a screen, a conveyor belt, a machine housing and a sorter. The compression springs are manually placed onto the conveyor belt and are then transported inside the machine housing, where four installed cameras produce images of the compression spring from different perspectives. Based on these images, the springs are evaluated by the AI algorithm. The images and the AI agent’s evaluation results are displayed on the screen. Positive evaluations are outlined in green, negative evaluations in red. A number is also displayed, ranging from 000 to 099, where 000 stands for ‘spring can be wasted’ and 099 for ‘spring is perfectly ok.’ In the end, the springs are automatically sorted into different boxes. The sorter can handle up to five different evaluation ranges.
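As an illustration of this scoring and sorting mechanism, the following minimal Python sketch maps a confidence score in the reported 000–099 range to one of five sorter boxes. The concrete thresholds and box labels are our illustrative assumptions, not the plant’s actual configuration.

def sort_spring(score: int) -> str:
    """Map a confidence score (000 = 'spring can be wasted',
    099 = 'spring is perfectly ok') to one of up to five sorter boxes."""
    if not 0 <= score <= 99:
        raise ValueError("score must be between 0 and 99")
    if score < 20:
        return "definitely waste"             # displayed in red
    if score < 40:
        return "probably waste - recheck"
    if score < 60:
        return "unsure - worker decides"
    if score < 80:
        return "probably reusable - recheck"
    return "definitely reusable"              # displayed in green

print(sort_spring(7))   # -> definitely waste
print(sort_spring(55))  # -> unsure - worker decides
print(sort_spring(93))  # -> definitely reusable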

As already indicated above, the human-AI team’s joint task is to visually inspect used compression springs for quality and, thus, to decide collaboratively whether a spring can be reused or should be disposed of (see Figure 1). Firstly, the worker supplies the machine with a spring by putting it onto the conveyor belt. After that, the AI agent classifies the quality of the spring and sorts the springs into boxes depending on its confidence, for example, definitely waste, not sure, definitely reusable. The workers then recheck the boxes containing the springs the AI agent was uncertain about and finally sort these springs into good and bad ones as well. They subsequently collect the springs the AI agent was either unsure about or whose defects the AI agent is unable to recognise yet. The AI agent then takes images of those springs, which are saved and labelled by the workers at a later date. These images represent the training data. The workers provide valuable impulses for the targeted enrichment of the training data by constantly comparing their own evaluations with those of the AI agent. The actual training process is, however, executed by the AI’s developer.
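The following sketch condenses this joint decision loop under the same illustrative assumptions as above: confident cases are decided by the AI agent alone, uncertain springs go to the worker, and their images are queued as future training data. All function names and thresholds are hypothetical; as noted, the actual retraining is executed by the AI’s developer.

from dataclasses import dataclass

@dataclass
class Spring:
    image: bytes
    ai_score: int | None = None     # 0..99, assigned by the AI agent
    human_label: str | None = None  # "good"/"bad", assigned by a worker

training_queue: list[Spring] = []   # images saved for later labelling

def worker_recheck(spring: Spring) -> str:
    # Stand-in for the worker's manual visual inspection.
    return "good"

def joint_decision(spring: Spring, ai_score: int) -> str:
    """One pass through the shared decision process described above."""
    spring.ai_score = ai_score
    if ai_score >= 80:              # AI is confident the spring is reusable
        return "reuse"
    if ai_score <= 20:              # AI is confident the spring is waste
        return "dispose"
    # Uncertain case: the worker rechecks and has the final word; the image
    # is kept so the workers can label it later as new training data.
    training_queue.append(spring)
    spring.human_label = worker_recheck(spring)
    return "reuse" if spring.human_label == "good" else "dispose"

print(joint_decision(Spring(image=b"\x00"), ai_score=55))  # worker decides
print(len(training_queue))                                 # 1 queued image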

Figure 1. Human-AI team.

Data collection and analysis

We gathered empirical data by conducting semi-structured interviews with the employees characterised in the previous section. We developed an interview guideline based on a preliminary AI competencies framework deduced from state-of-the-art research on challenges and opportunities in human-AI collaboration (Süße et al. 2021). The interview guideline contained simple questions focusing on the participants’ personal impressions and experiences of the collaboration with the AI agent. A few sample questions are:

  • ‘What are particular challenges you experience when working with the AI agent?’

  • ‘What do you like or dislike when working with the AI agent?’

  • ‘How does working with the AI agent differ from working with other machines?’

  • ‘What changes have you experienced since the AI agent was introduced? What have you learned since then?’

The interviews took place in the summer of 2021. Each interview lasted between 30 and 45 minutes and was audio-recorded. After conducting the interviews, we transcribed the audio files and subsequently performed a qualitative content analysis (Miles et al. 2020), using the software MAXQDA 2020. We employed an iterative process during our analysis. Firstly, we analysed and coded the data independently of each other. After that, we discussed our interpretations, looked for relationships and patterns, constructed categories and grouped codes. In the process, we applied a mix of deductive and inductive coding (Hsieh and Shannon 2005) by assigning codes to concepts based either on our theoretical framework or on insights that emerged during data collection and analysis. Additionally, feedback loops took place with some participants of the case company’s project management. We refined the codes iteratively until consensus was reached among all participating researchers.

Findings

By analysing our empirical data, we identified 13 distinct behavioural patterns the interviewees reported on when talking about their experiences of interacting with the AI agent. We refer to the theoretical framing of AI-related competencies (Süße et al. 2021) for the further systematisation of our results and clustered the 13 behavioural patterns into a cognitive, an emotional and a social dimension. With the help of example quotes, we seek to give further insights into the actual responses during the interviews and into how we interpreted and coded them during data analysis.

Cognitive dimension

By cognitive, we refer to behavioural patterns indicating that a person, for example, aims to understand causal relationships, recognises patterns or deals critically with external input (Boyatzis 2008). We identified four patterns through our iterative data analysis process that we argue fit best into the cognitive dimension.

C.1. Developing a general understanding of AI

Most workers developed their own overall understanding of what AI actually is during the two years of working together with the AI agent. They particularly related to their insights into how the AI agent works as they stated, for example: ‘[…] Photos are taken and the AI compares it with what has been stored, what it has been taught.’ The workers also pointed out how the AI-based machine differs from rather ‘traditional’ machines: ‘And then basically no real values are given out anyway, i.e. no measurements such that a measuring machine would do, but probabilities that are then calculated via the algorithm and that tells you, that’s now 50% good or 90% good.’ It was also important for them to relate to differences between a human and an AI: ‘[…] We as humans, we can always estimate it very roughly, and more by the feeling, by the look, by the haptics. The AI can’t do that. It orients itself on the pictures and every picture is different.’

The workers interviewed also reported on the opportunities of AI, for example, that it gives a higher ratio of comprehensible evaluations because its judgement does not depend that much on a worker’s individual point of view, experiences or the current physical or mental condition: ‘[…] the AI will certainly be better than the human in terms of the possibility of rejects or of the good parts or something. Because man, if I put three people there, everyone has a different point of view.’ However, there are also challenges, for example, the AI is seen to be less flexible, since with every new defect or every new kind of spring, one has to train it: ‘[…] I’m still a bit more flexible. If I have to check another kind of spring, I can quickly say, well, I’ll change everything, I have to, but I don’t know how I have to change the AI. Because it has to be trained in advance.’

C.2. Developing a context-specific understanding of the AI agent’s job task

From the workers’ point of view, an important requirement for a successful interaction with the AI agent is that one has to be familiar with the specific task the AI agent has to perform. Therefore, it is important to know the quality criteria of the actual task ‘checking the springs’ in order to be able to consider the AI agent’s evaluations during the shared decision-making processes: ‘[…] You should know from the product itself, what is put on the belt, how the product should look good or not.’ However, workers also reported that in similar cases, it is crucial to be able to understand how to deal appropriately with the AI agent’s decisions: ‘[…] You get a pre-evaluation on how the springs are. You can then approach it accordingly, and then think, ok, if the AI is not that sure, then maybe I can control a little more precisely.’

C.3. Developing a basic understanding of how the AI agent learns and improves

The workers reported on their understanding regarding what the training process of the AI agent may look like: ‘Our AI takes pictures and it learns from the pictures and then […] we let the AI know: “The images you see now are rated as OK, so must be rated as OK.” and then we’ll push them through. Then the images are saved by the AI and then it knows: “OK, the spring is good, the spring is good, the spring is good.” Vice versa, with the bad springs, which we send in again and inform the AI: “These are bad springs!” They have to be rated badly, and then we’ll send it through and then it’ll be rated worse.’

It was pointed out that the AI learns from pictures and that the learning process takes time because there are many possibilities for defects or many different kinds of springs. As a result, workers understand that an enormous amount of data is needed to train the AI: ‘Because there are always new situations. And the spectrum of parts is also large […] so there are so many possibilities and that just takes until it has learned it.’

The workers also mentioned that it is important to understand that the AI can only recognise defects it has already learned. The AI has to be trained again for every new kind of spring or new kind of defect: ‘You have to retrain or reteach it, that what it thought was ok until now is not ok now.’

C.4. Dealing with the AI agent’s decisions in a reflective manner

Another interesting aspect the workers reported on was that some of them are interacting more thoughtfully and critically with the AI agent’s decisions to be able to detect possible misevaluations and the reasons for them: ‘You definitely have to look at the images again. You also have to look at whether these are the appropriate springs, because if these are completely new springs the AI has never seen before, then the evaluation is rather random, because it doesn’t know the springs yet.’ This emphasises the importance of a shared decision process between human and AI.

Emotional dimension

The emotional dimension refers to patterns that are crucial for leading and managing oneself as an individual person. They include emotional self-awareness and self-management. More precisely, this dimension consists of patterns reflecting the ability to understand one’s own feelings in a given situation, to use this understanding to guide decision-making processes and to assess one’s own skills realistically (Boyatzis et al. 2019). Elements such as emotional self-control and self-awareness, adaptation, setting oneself challenging standards and continuously finding ways to improve are at the core of this dimension. Inspired by this theoretical framing, we identified five patterns through our iterative process of data analysis.

E.1. Considering the AI agent as a helpful counterpart

The shop floor workers considered the AI very helpful because it supports them directly during a challenging task with its pre-evaluations: ‘Especially the current AI. It helps. It gives a preliminary evaluation and is super helpful.’ Thus, referring to the AI agent as a helpful counterpart relates to the complementarity of human and AI capabilities. We argue that the workers’ self-awareness of their own capabilities is a key prerequisite for this.

E.2. Being able to adapt and open to change and innovation

We recognised that the workers were generally very open towards the new technology. They welcomed the fact the AI agent was introduced into their company: ‘It’s actually a cool thing, because I find it very interesting to work with AIs. Even if more AIs are planned for the future, I’d love to work with them too.’

However, this positive attitude towards the AI agent emerged over time. At the very beginning, not all employees were that open towards the AI agent immediately. This was not surprising to us. Workers had to get familiar with it first, which also required the ability to adapt to new situations: ‘[…] in the beginning there is somehow a rejecting attitude, because the workers still have to get familiar with everything.’

E.3. Taking one’s own initiative for improvement

Some employees reported that when the AI agent’s decisions seem unreasonable, they self-initiate reasoning processes about possible causes: ‘Then I always ask myself: “Why is it like this now? Why is it rated so badly or so well or why are the two identical springs rated so differently?”’ These ideas are then shared with their supervisor, which helps to improve the AI agent’s capabilities: ‘If he notices something that is not plausible to him, then bang, he is upstairs with his supervisor and tells him/her how to do that in a different way.’ This self-initiated behaviour can be very important for the targeted improvement and enhancement of the AI agent’s capabilities.

E.4. Asserting one’s own recovery phases

When we asked about the specific capabilities a perfect partner of the AI agent should have, we were told that it would be good if he or she could work without taking recovery phases: ‘[…] he doesn’t take coffee breaks, maybe he doesn’t have to go to the toilet, I don’t know [laughs]. So first of all, that would be the best employee.’ However, humans should be highly aware of their own physical and psychological limitations and have to be able to interpret signs of exhaustion correctly in order to insist on recovery phases and detachment from work (Sonnentag et al. 2010). Thus, we interpret this in such a way that it is crucial to assert one’s own recovery phases.

E.5. Feeling confident to work with new and unfamiliar technologies

Our interviews also revealed that people who have had little previous experience with digital technologies can find it particularly difficult to engage with modern, unfamiliar technologies: ‘For the older ones, yes, it’s hard to grasp. There is, I don’t want to say a dismissive attitude, because they still have to trust each other with everything anyway, but there is a bit of a reserved attitude.’ Thus, an important prerequisite for a sustainable interaction with AI agents is to open up one’s mind to new technologies and to feel confident in working with them.

Social dimension

By social, we refer to patterns that reflect the ability to understand other people and manage relationships with them (Boyatzis et al. 2019). This dimension includes, for example, interpreting others’ signals carefully and seeking to understand others’ points of view, as well as resolving conflicts and achieving fruitful collaborations. We identified four behavioural patterns in our data that can be grouped into a social dimension in a broader sense. This assumes that the AI agent with which the interviewed workers are interacting is regarded more as a kind of (new) social actor, such as a partner or colleague (Waefler and Schmid 2021).

S.1. Being patient with the new and inexperienced colleague

As has already been indicated above, the AI agent is trained with the help of human actors’ support and is developing over time. The employees interviewed pointed out that the AI agent was rather inexperienced at the beginning and it can take rather a long time to benefit from training and development efforts: ‘The AI had to be trained first, until we really worked with it, a lot of time had passed.’ However, it is important not to give up immediately if something does not work or takes longer, but to keep at it and remain patient: ‘Then we definitely need to work on it again and follow up on it.’

S.2. Cultivating an intuition for the AI agent’s peculiarities

The workers also reported on how they developed an intuition rather iteratively for the AI agent’s peculiarities. They reported, for example, on how they found out the machine has to be supplied with springs at a particular rate: ‘The supervisor said that spring would be processed by the AI within three seconds, yes, but we have found out that if we really work in this cycle, then the machine does not work exactly. We should always wait with the next spring until the first one has at least disappeared into the machine.’ We argue that this quote also relates to the iterative learning processes humans have already gone through during their interaction with the AI agent.

S.3. Appreciating the AI agent’s achievements

The workers next reported on knowing exactly whether they can rely on the AI agent’s output or not: ‘What it can do, it can do quite reliably, I have to say.’ We argue that they recognise and appreciate the AI agent’s achievements: ‘What the AI can do, for sure, is the waste box, where the springs are 100% waste, it can do that very well.’

S.4. Developing a sort of sensitivity and care towards the AI agent

Finally, it is also noticeable that the workers developed a kind of sensitivity towards the AI agent. Thus, they reported on how important it is to not damage sensitive components of the machine: ‘Because the AI is sensitive; because if you somehow throw springs in there, they could damage the cameras,’ and that it is crucial to handle the machine with care: ‘In any case, you must be able to handle the machine carefully.’

Discussion

We conducted semi-structured interviews among a group of shop floor workers in a remanufacturing context in order to explore in more detail how the behavioural patterns characterising those workers’ interaction with one specific AI agent can be further described and systemised. In order to analyse and interpret our empirical findings in an iterative manner, we referred to related research in the fields of AI implementation in remanufacturing and organisations, the increasing relevance of human-AI interaction and its novel dynamics, and the concept of human actors’ competence, considered as a potential for specific behaviour at work. The latter provided us with a broad but very fruitful systematisation of cognitive, emotional and social competence and its specific behavioural representations, which we identified in our interview data. These empirical results also provide additional support for an earlier conceptualisation of the construct of AI competence, which is discussed as a critical antecedent of constructive human-AI interaction (Süße et al. 2021). We argue that our framework of human behaviour in human-AI interaction, summarised in Table 1, can be discussed in the light of three main perspectives deduced from related research.

Table 1. Summary of humans’ behavioural patterns in the interaction with the AI agent.

The first perspective is the actual paradigm of human-AI interaction that dominates in our specific case study. The second concerns the workers’ description of the AI agent in this specific case. The third refers to the relationship between the identified behavioural patterns of our explorative framework and the state of the art on novel human tasks emerging in human-AI interaction. We find this reflection on our findings particularly fruitful, as it may give further structure to our explorative approach for future research and practice, sharpen the understanding of existing conceptualisations and place our behavioural framework in the current research context.

Regarding the perspective of the human-AI interaction paradigm, we argue that our case shows patterns of the commentary paradigm: there is, firstly, an input from the worker providing the springs to the AI agent; secondly, support from the AI agent, which provides a decision recommendation; and, thirdly, recognition and evaluation of that recommendation by the worker. This process shows that the AI-based suggestions are continuously integrated into the worker’s ongoing task (see also Figure 1) and that it usually requires a human’s ‘comments’ after decision suggestions have been made by the AI (van Berkel et al. 2021). The behavioural patterns ‘C.4. Dealing with the AI agent’s decisions in a reflective manner’ and ‘S.3. Appreciating the AI agent’s achievements’ strengthen this picture of a continuous integration of the AI agent into the worker’s job task. Furthermore, we also gave example quotes in our findings section about the initial step of this process. This reveals that in our case, the worker has the final word regarding the results of the joint decisions, which can also be interpreted as a hierarchical gap between the two.

Regarding the AI agent’s role, we argue that the interaction paradigm and the workers’ understanding of the AI agent’s role mentioned above might be consistent. As noted, various role descriptions of AI agents have already been introduced in the literature. Papachristos et al. (2021) introduced the roles of mirror, assistant, guide and oracle. The role of a guide, which provides suggestions and a confidence score for humans’ evaluation of these suggestions, can particularly be related to our specific case. As Papachristos et al. (2021) argue, the suggestions of a guide are regarded as helpful but are critically evaluated by the worker. This is mirrored in some patterns of our behavioural framework, such as ‘C.3. Developing a basic understanding of how the AI agent learns and improves,’ ‘C.4. Dealing with the AI agent’s decisions in a reflective manner,’ ‘E.1. Considering the AI agent as a helpful counterpart’ and ‘S.3. Appreciating the AI agent’s achievements.’ However, the learning ability and improvement of the AI agent, together with an acceptance of failure, seem to be more explicit in our specific case, for example, when looking at the social dimension of our behavioural framework. Thus, from our point of view, the role description of a guide seems too unidirectional for our case. Regarding that aspect, we find the contribution by Hinsen et al. (2022), who introduce the role of a colleague for specific AI agents, more appropriate. We argue that especially the social dimension of our framework highlights facets of a colleague, for example, the pattern ‘S.1. Being patient with the new and inexperienced colleague,’ which is complementary to the pattern ‘E.1. Considering the AI agent as a helpful counterpart.’ Admittedly, the workers emphasised that the AI agent in our case study is a rather young colleague who still has a lot to learn; from a worker’s point of view, however, it is definitely a good and valuable investment in the future.

We refer to research from the field of human-AI joint decision-making in order to discuss our results from the third perspective of newly emerging tasks among humans interacting with AI agents. Waefler and Schmid (2021) introduced four novel tasks required of human actors, which can be summarised as (A) verifying an AI’s decision suggestions, (B) improving the AI system, (C) learning from the AI and (D) taking responsibility for the joint decisions finally taken. Based on our framework of behavioural patterns revealed from the interview data, we argue that our findings show strong relationships to task dimensions A, B and D and less support for dimension C. More precisely, our cognitive dimension consists of behavioural patterns representing the fulfilment of task dimension A and partially D, while our framework’s social dimension has a strong connection to task dimension B and also partially D. That we cannot find evidence in our data for task dimension C does not mean that it is not generally relevant, but it does not seem to be that prominent in our case, as the AI agent is rather seen as a young colleague, as discussed above. This can mean that knowledge and experience currently flow more from the human actors towards the AI agent than vice versa. This interpretation is thus in line with our argumentation about the interaction paradigm and the assumed role of the AI agent.

So far, we can conclude that our empirical findings support recent conceptualisations in the state-of-the-art literature on human-AI interaction in organisations, considering our specific context of remanufacturing, where human-AI interaction will play an increasingly relevant role in the future. Furthermore, we contribute to research in the field of AI implementation in organisations, as we provide profound insights into employees’ actual behaviour and, as such, into the competencies demanded of workers of a remanufacturing company that has managed to implement an AI agent successfully for about two years. To date, research has particularly considered employees’ technical competences as critical for AI implementation (Hamm and Klesel 2021; Pumplun et al. 2019). We hope to broaden that perspective with our results in order to inspire future research and provide initial guidance for practitioners preparing their employees for AI. Especially regarding the field of a CE, we contribute knowledge about the specific competencies required of workers when manual processes are supported by an AI agent in order to make remanufacturing more successful in the long run. Thus, we contribute to the CE literature and practice by providing insights into the microlevel of CE, the individual firm and its processes.

In summary, we conclude that our findings and their interpretation contribute to the further understanding of human-AI systems, which are emerging faster than ever in business and non-business contexts today. As we focus strongly on the human actor in this dynamic system, we recognise that humans perceive themselves as more flexible than AI when it comes to new and ambiguous situations, but greatly appreciate joint decision-making with AI agents. Thus, we conclude that humans perceive the new smart technology more as an enhancement and less as a potentially dangerous tool that needs to be regulated. Of course, this does not make the discussion about the potential dangers of unregulated AI development superfluous, but it shows that AI can make a valuable contribution as long as the tool is used appropriately. Furthermore, it is worth mentioning that a better or wider understanding of how people work with and perceive AI agents may contribute to more people getting access to such tools. We see, particularly in our cognitive dimension, that employees’ ongoing learning processes are becoming even more important for the development of new knowledge in human-AI systems. This is in line with recent assumptions that AI will potentially not replace as many jobs as formerly expected, but will more probably contribute to the transformation of jobs and competence profiles of people in modern organisations (see e.g. Markauskaite et al. 2022).

Limitations

Since the participants of the study were all recruited from one company and the database is rather small, consisting of only a few male participants, the opinions and experiences reported may not be fully representative of a larger population of employees working with AI agents, especially since a female point of view is missing so far. Interpretations are also limited by the researchers’ personal experience and knowledge, which is hard to avoid in qualitative studies; our findings therefore cannot be fully generalised, but they certainly provide relevant insights into the human part of human-AI systems related to the specific case we describe in detail throughout the manuscript. More comprehensive work is needed on how humans and AI can work together in the future; our work can thus be seen as a starting point and an important first step towards more extensive investigations in the context of human-AI interaction. Further empirical studies from other fields of practice, including more heterogeneous groups of study participants, are required and are already planned by the authors.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are available on request from the corresponding author, [MK]. The data are not publicly available because they contain information that could compromise the privacy of research participants.

Additional information

Notes on contributors

Thomas Süße

Thomas Süße is a Professor of human resources management and organisation at Bielefeld University of Applied Sciences and Arts. His research interests include leadership in the digital era, digital competence of the workforce, and human-AI cooperation and collaboration in professional work contexts.

Maria Kobert

Maria Kobert is a post-doctoral researcher at Bielefeld University of Applied Sciences and Arts, focusing on leadership approaches and digital competencies for the digital era as well as statistical methods for social sciences.

Caroline Kries

Caroline Kries is currently working on her PhD thesis at the Technical University Dortmund, which focuses on statistical methods for social sciences.

References

  • Acemoglu, D. 2021. Harms of AI (No. W29247). Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.3386/w29247.
  • Agrawal, R., V. A. Wankhede, A. Kumar, S. Luthra, A. Majumdar, and Y. Kazancoglu. 2021. “An Exploratory State-Of-The-Art Review of Artificial Intelligence Applications in Circular Economy Using Structural Topic Modeling.” Operations Management Research 1–18.
  • Ahlborn, K., G. Bachmann, F. Biegel, J. Bienert, S. Falk, A. Fay, T. Gamer, K. Garrels, J. Grotepass, and A. Heindl. 2019. “Technologieszenario Künstliche Intelligenz in der Industrie 4.0” [Technology Scenario: Artificial Intelligence in Industry 4.0]. Bundesministerium für Wirtschaft und Energie (BMWi).
  • Anton, E., A. Behne, and F. Teuteberg. 2020. “The Human Behind Artificial Intelligence – an Operationalisation of AI Competencies.” In Proceedings of the 28th European Conference on Information Systems (ECIS), 19–36. https://aisel.aisnet.org/ecis2020_rp/141
  • Bassellier, G., B. H. Reich, and I. Benbasat. 2001. “Information Technology Competence of Business Managers: A Definition and Research Model.” Journal of Management Information Systems 17 (4): 159–182. https://doi.org/10.1080/07421222.2001.11045660.
  • Bittner, E., S. Oeste-Reiß, and J. M. Leimeister. 2019. “Where is the Bot in Our Team? Toward a Taxonomy of Design Option Combinations for Conversational Agents in Collaborative Work.” In Proceedings of the 52nd Hawaii International Conference on System Sciences, Hawaii, USA.
  • Blömeke, S., J. Rickert, M. Mennenga, S. Thiede, T. S. Spengler, and C. Herrmann. 2020. “Recycling 4.0 – Mapping Smart Manufacturing Solutions to Remanufacturing and Recycling Operations.” Procedia CIRP 90:600–605. https://doi.org/10.1016/j.procir.2020.02.045.
  • Borenstein, J., and A. Howard. 2021. “Emerging Challenges in AI and the Need for AI Ethics Education.” AI and Ethics 1 (1): 61–65. https://doi.org/10.1007/s43681-020-00002-7.
  • Boyatzis, R. E. 2008. “Competencies in the 21st Century.” Journal of Management Development 27 (1): 5–12. https://doi.org/10.1108/02621710810840730.
  • Boyatzis, R. E., D. Goleman, F. Gerli, S. Bonesso, and L. Cortellazzo. 2019. “Emotional and Social Intelligence Competencies and the Intentional Change Process.” In Cognitive Readiness in Project Teams, edited by C. Belack, D. Di Filippo, and I. Di Filippo, 147–169. New York: Productivity Press. https://doi.org/10.4324/9780429490057-7.
  • Brock, J. K. U., and F. von Wangenheim. 2019. “Demystifying AI: What Digital Transformation Leaders Can Teach You About Realistic Artificial Intelligence.” California Management Review 61 (4): 110–134. https://doi.org/10.1177/1536504219865226.
  • Brynjolfsson, E., and T. Mitchell. 2017. “What Can Machine Learning Do? Workforce Implications.” Science 358 (6370): 1530–1534. https://doi.org/10.1126/science.aap8062.
  • Buiten, M. C. 2019. “Towards Intelligent Regulation of Artificial Intelligence.” European Journal of Risk Regulation 10 (1): 41–59. https://doi.org/10.1017/err.2019.8.
  • Dellermann, D., A. Calma, N. Lipusch, T. Weber, S. Weigel, and P. Ebel. 2019. The Future of Human-AI Collaboration: A Taxonomy of Design Knowledge for Hybrid Intelligence Systems. Presented at the Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2019.034.
  • Ferràs-Hernández, X. 2018. “The Future of Management in a World of Electronic Brains.” Journal of Management Inquiry 27 (2): 260–263. https://doi.org/10.1177/1056492617724973.
  • Giering, O. 2021. “Künstliche Intelligenz und Arbeit: Betrachtungen zwischen Prognose und betrieblicher Realität.” Zeitschrift für Arbeitswissenschaft. https://doi.org/10.1007/s41449-021-00289-0.
  • Glikson, E., and A. W. Woolley. 2020. “Human Trust in Artificial Intelligence: Review of Empirical Research.” Academy of Management Annals 14 (2): 627–660. https://doi.org/10.5465/annals.2018.0057.
  • Grafström, J., and S. Aasma. 2021. “Breaking Circular Economy Barriers.” Journal of Cleaner Production 292:126002. https://doi.org/10.1016/j.jclepro.2021.126002.
  • Hamm, P., and M. Klesel. 2021. “Success Factors for the Adoption of Artificial Intelligence in Organizations: A Literature Review.” 27th Americas Conference on Information Systems (AMCIS), Montreal, Canada, August 2021.
  • Hinsen, S., P. Hofmann, J. Jöhnk, and N. Urbach. 2022. How Can Organizations Design Purposeful Human-AI Interactions: A Practical Perspective from Existing Use Cases and Interviews. Presented at the Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2022.024.
  • Hsieh, H. F., and S. E. Shannon. 2005. “Three Approaches to Qualitative Content Analysis.” Qualitative Health Research 15 (9): 1277–1288. https://doi.org/10.1177/1049732305276687.
  • Kerin, M., and D. T. Pham. 2019. “A Review of Emerging Industry 4.0 Technologies in Remanufacturing.” Journal of Cleaner Production 237:117805. https://doi.org/10.1016/j.jclepro.2019.117805.
  • Li, J., M. Barwood, and S. Rahimifard. 2018. “Robotic Disassembly for Increased Recovery of Strategically Important Materials from Electrical Vehicles.” Robotics and Computer-Integrated Manufacturing 50:203–212. https://doi.org/10.1016/j.rcim.2017.09.013.
  • Li, S., H. Zhang, W. Yan, Z. Jiang, H. Wang, and W. Wei. 2019. “Multi-Objective Disassembly Sequence Optimization Aiming at Quality Uncertainty of End-Of-Life Product.” IOP Conference Series: Materials Science & Engineering 631 (3): 032015. https://doi.org/10.1088/1757-899X/631/3/032015.
  • Margetts, H. 2022. “Rethinking AI for Good Governance.” Daedalus 151 (2): 360–371. https://doi.org/10.1162/daed_a_01922.
  • Markauskaite, L., R. Marrone, O. Poquet, S. Knight, R. Martinez-Maldonado, S. Howard, J. Tondeur, et al. 2022. “Rethinking the Entwinement Between Artificial Intelligence and Human Learning: What Capabilities Do Learners Need for a World with AI?” Computers and Education: Artificial Intelligence 3:100056. https://doi.org/10.1016/j.caeai.2022.100056.
  • Masi, D., V. Kumar, J. A. Garza-Reyes, and J. Godsell. 2018. “Towards a More Circular Economy: Exploring the Awareness, Practices, and Barriers from a Focal Firm Perspective.” Production Planning & Control 29 (6): 539–550. https://doi.org/10.1080/09537287.2018.1449246.
  • Massmann, C., and A. Hofstetter. 2020. “AI-pocalypse now? Herausforderungen Künstlicher Intelligenz für Bildungssystem, Unternehmen und die Workforce der Zukunft.” In Digitale Bildung und Künstliche Intelligenz in Deutschland, 167–220. https://doi.org/10.1007/978-3-658-30525-3_8.
  • Miles, M. B., A. M. Huberman, and J. Saldana. 2020. Qualitative Data Analysis: A Methods Sourcebook. 4th ed. Thousand Oaks, CA: SAGE Publications.
  • Papachristos, E., P. Skov Johansen, R. Møberg Jacobsen, L. Bjørn Leer Bysted, and M. B. Skov. 2021. “How Do People Perceive the Role of AI in Human-AI Collaboration to Solve Everyday Tasks?” In CHI Greece 2021: 1st International Conference of the ACM Greek SIGCHI Chapter, Online (Athens, Greece), 1–6. ACM. https://doi.org/10.1145/3489410.3489420.
  • Pumplun, L., C. Tauchert, and M. Heidt. 2019. “A New Organizational Chassis for Artificial Intelligence – Exploring Organizational Readiness Factors.” European Conference on Information Systems (ECIS), Stockholm, Sweden, June 2019.
  • Ramadoss, T. S., H. Alam, and R. Seeram. 2018. “Artificial Intelligence and Internet of Things Enabled Circular Economy.” The International Journal of Engineering and Science (IJES) 7 (9): 55–63.
  • Rammer, C., I. Bertschek, B. Schuck, V. Demary, and H. Goecke. 2020. Einsatz von Künstlicher Intelligenz in der Deutschen Wirtschaft: Stand der KI-Nutzung im Jahr 2019 (Research Report). ZEW-Gutachten und Forschungsberichte.
  • Rzepka, C., and B. Berger. 2018. “User Interaction with AI-Enabled Systems: A Systematic Review of IS Research.” International Conference on Information Systems (ICIS), San Francisco, California, December 2018.
  • Salmenperä, H., K. Pitkänen, P. Kautto, and L. Saikku. 2021. “Critical Factors for Enhancing the Circular Economy in Waste Management.” Journal of Cleaner Production 280:124339. https://doi.org/10.1016/j.jclepro.2020.124339.
  • Samuel, J. 2021. “A Call for Proactive Policies for Informatics and Artificial Intelligence Technologies.” Scholars Strategy Network. Accessed April 25, 2023. https://scholars.org/contribution/call-proactive-policies-informatics-and
  • Sankaran, K. 2019. “Carbon Emission and Plastic Pollution: How Circular Economy, Blockchain, and Artificial Intelligence Support Energy Transition?” Journal of Innovation Management 7 (4): 7–13. https://doi.org/10.24840/2183-0606_007.004_0002.
  • Schelble, B., C. Flathmann, L.-B. Canonico, and N. McNeese. 2021. Understanding Human-AI Cooperation Through Game-Theory and Reinforcement Learning Models. Presented at the Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2021.041.
  • Schlüter, M., H. Lickert, K. Schweitzer, P. Bilge, C. Briese, F. Dietrich, and J. Krüger. 2021. “AI-Enhanced Identification, Inspection and Sorting for Reverse Logistics in Remanufacturing.” Procedia CIRP 98:300–305. https://doi.org/10.1016/j.procir.2021.01.107.
  • Seeber, I., E. Bittner, R. O. Briggs, G.-J. de Vreede, T. de Vreede, D. Druckenmiller, A. B. Merz, et al. 2018. Machines as Teammates: A Collaboration Research Agenda. Presented at the Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2018.055.
  • Seiffer, A., U. Gnewuch, and A. Maedche. 2021. “Understanding Employee Responses to Software Robots: A Systematic Literature Review.” International Conference on Information Systems (ICIS), Austin, Texas, December 2021.
  • Sonnentag, S., I. Kuttler, and C. Fritz. 2010. “Job Stressors, Emotional Exhaustion, and Need for Recovery: A Multi-Source Study on the Benefits of Psychological Detachment.” Journal of Vocational Behavior 76 (3): 355–365. https://doi.org/10.1016/j.jvb.2009.06.005.
  • Süße, T., M. Kobert, and C. Kries. 2021. “Antecedents of Constructive Human-AI Collaboration: An Exploration of Human Actors’ Key Competencies.” In Smart and Sustainable Collaborative Networks 4.0, edited by L. M. Camarinha-Matos, X. Boucher, and H. Afsarmanesh. Springer.
  • Talboy, A. N., and E. Fuller. 2023. “Challenging the Appearance of Machine Intelligence: Cognitive Bias in LLMs.” https://doi.org/10.48550/arXiv.2304.01358.
  • Tsimba, W., G. Chirinda, and S. Matope. 2021. “Machine Learning for Decision-Making in the Remanufacturing of Worn-Out Gears and Bearings.” South African Journal of Industrial Engineering 32 (2): 135–150. https://doi.org/10.7166/32-3-2636.
  • van Berkel, N., O. F. Ahmad, D. Stoyanov, L. Lovat, and A. Blandford. 2021. “Designing Visual Markers for Continuous Artificial Intelligence Support: A Colonoscopy Case Study.” ACM Transactions on Computing for Healthcare 2 (1): 7:1–7:24. https://doi.org/10.1145/3422156.
  • van Berkel, N., M. B. Skov, and J. Kjeldskov. 2021. “Human-AI Interaction: Intermittent, Continuous, or Proactive.” Interactions 28 (6): 67–71. https://doi.org/10.1145/3486941.
  • Waefler, T., and U. Schmid. 2021. “Explainability is Not Enough: Requirements for Human-AI Partnership in Complex Socio-Technical Systems.” https://doi.org/10.34190/EAIR.20.007.
  • Wang, D., J. D. Weisz, M. Muller, P. Ram, W. Geyer, C. Dugan, Y. Tausczik, H. Samulowitz, and A. Gray. 2019. “Human-AI Collaboration in Data Science: Exploring Data Scientists’ Perceptions of Automated AI.” Proceedings of the ACM on Human-Computer Interaction 3 (CSCW): 1–24. https://doi.org/10.1145/3359313.
  • Wanner, J., L.-V. Herm, and C. Janiesch. 2019. “Countering the Fear of Black-Boxed AI in Maintenance: Towards a Smart Colleague.” 2019 Pre-ICIS SIGDSA Symposium.
  • Wilson, H. J., and P. R. Daugherty. 2018. “Collaborative Intelligence: Humans and AI are Joining Forces.” Harvard Business Review 96 (4): 114–123.
  • Yeh, S. C., A. W. Wu, H. C. Yu, H. C. Wu, Y.-P. Kuo, and P.-X. Chen. 2021. “Public Perception of Artificial Intelligence and Its Connections to the Sustainable Development Goals.” Sustainability 13 (16): 9165. https://doi.org/10.3390/su13169165.
  • Yuan, Z., J. Bi, and Y. Moriguichi. 2008. “The Circular Economy: A New Development Strategy in China.” Journal of Industrial Ecology 10 (1–2): 4–8. https://doi.org/10.1162/108819806775545321.
  • Yudkowsky, E. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks. Oxford University Press. https://doi.org/10.1093/oso/9780198570509.003.0021.
  • Zicari, R. V., J. Brodersen, J. Brusseau, B. Dudder, T. Eichhorn, T. Ivanov, G. Kararigas, et al. 2021. “Z-Inspection®: A Process to Assess Trustworthy AI.” IEEE Transactions on Technology and Society, 1–1. https://doi.org/10.1109/TTS.2021.3066209.
  • Zweig, K. A. 2019. Algorithmische Entscheidungen: Transparenz und Kontrolle. Berlin: Konrad Adenauer Stiftung.