Research Article

The robot saw it coming: physical human interference, deservingness, and self-efficacy in service robot failures

Received 02 May 2023, Accepted 29 Apr 2024, Published online: 06 May 2024

ABSTRACT

Robotic services’ popularity continues to increase due to technological advancements, labour shortages, and global crises. Yet, while providing these services, robots are subject to occasional physical interruption by humans, which restricts their functioning and, at times, leads to failure. To investigate this issue, the present study examined the role of third-party human interference in service robot failures and its effects on observers’ attitudes towards and willingness to engage with the robot. We manipulated human interference resulting in different robotic service failures in two online scenario-based experiments. The results revealed that individuals held less favourable attitudes towards a failed service robot without (vs. with) physical human interference, and they were less willing to engage with the failed service robot without (vs. with) physical human interference. The perceived deservingness of the robot accounted for this effect, moderated by the person’s self-efficacy regarding robots. The results are discussed in terms of their implications not only for the theory of service failures and human-service robot interactions but also for robotic service providers.

Summary statement of contribution

We contribute to extant research on how humans evaluate and perceive service robot failures, which are very likely to occur given the quick and widespread adoption of these technologies. Human-robot interactions (HRIs) need to maintain safety, ensure a positive user experience, and guarantee long-term use, considering robots’ as well as humans’ behaviours. To that end, our research uniquely demonstrates how third-party human interference with robotic services resulting in service failure affects observer customers’ attitudes and behavioural intentions, as determined by their perceptions of how much the robot deserves the failure. Moreover, we delineate how observer customers’ self-efficacy regarding robots causes differences in the demonstrated effects.

1. Introduction

Service robots (SRs) are autonomous and adaptable interfaces that interact and communicate with, and deliver service to customers (Wirtz et al. Citation2018). SRs provide a wide range of services (Ozturkcan and Merdin-Uygur Citation2022), from elderly care (Kalogianni Citation2015) to working in cafés and restaurants (Frey and Osborne Citation2017) or providing sexual pleasure (Krumins Citation2015). However, robots can be physically interrupted by humans while performing these services, which can limit their functionality and sometimes result in failure. This study investigates a novel and understudied topic: robotic service failure due to physical human interference.

Consider the situation of queuing at a busy airport, with an SR helping customers and guiding them to their assigned gates. The SR’s task is to assist the customers in finding their gates, checking their boarding passes, and answering their questions. The SR’s effectiveness and viability depend on its ability to move and deliver services smoothly and efficiently, without being obstructed or attacked by other humans or objects. However, the SR faces some challenges and difficulties in performing its task. Sometimes, the SR cannot move and deliver services effectively, because it is blocked by other humans or objects in its way. This causes the customers in the queue to be delayed and dissatisfied, as they cannot reach their gates on time. Other times, the SR is subjected to physical aggression from some angry or hostile customers, who push, kick, or hit the SR. This damages the SR’s body and sensors and affects its ability to function properly. These events have an impact on the perceptions and attitudes of the observer customers, who witness the SR’s performance and interactions.

Humans assaulting robots on service duty is quite common. Examples include two students vandalising a meal delivery SR (Smith Citation2022), security robots being hit to stop them from functioning in a variety of states in the USA, and the Russian teacher robot Alantim being hit with a baseball bat (Bromwich Citation2019). While the robotic services literature has started to pay attention to researching mistreated robots (Harris and Anthis Citation2021; Küster and Swiderska Citation2021; Kwak et al. Citation2013; Pütten et al. Citation2013; Riek et al. Citation2009; Suzuki et al. Citation2015; Ward, Olsen, and Wegner Citation2013), an intriguing question is whether and how the attitudes of the serviced customers are affected when other humans interfere with on-duty SRs (Letheren, Russell-Bennett, and Whittaker Citation2020). Physical human interference can substantially change the perception of service failures, influencing not only the immediate users but also observers, who may extrapolate the incident to broader concerns about the integration of robots into service settings.

However, despite common occurrences, the literature offers little empirical evidence on how physical human interference affects consumer perceptions of SRs. Therefore, this study fills the gap in the literature by addressing how consumers evaluate physical human interference with robots on service duty, resulting in service failures. Moreover, we investigate which mechanisms and boundary conditions explain the relationship between consumer evaluations and physical human interference with SRs resulting in service failures.

Two online scenario-based experiments demonstrate the effect of interference from a human on attitudes and willingness to engage with the SR (Studies 1 and 2), with the mediating mechanism as perceived deservingness and the moderating mechanism as a person’s self-efficacy regarding robots in general (Study 2). In the remainder of the article, we first provide the theoretical background focusing on the relevant literature on robotic service failures, physical human interference, deservingness, and self-efficacy regarding technologies. Next, we propose related hypotheses, followed by the methodology. The findings bring forth compelling insights for robotic service providers in terms of managing human-robot relationships, harm, or other physical interferences. We further discuss theoretical implications, limitations, and further research avenues.

2. Theory and hypotheses

2.1. Robotic service failures

A robotic service failure occurs when a robot fails to deliver the service outcome that the user expects or desires (Smith, Bolton, and Wagner Citation1999) due to any form of actual or perceived misfortunes, errors, or problems (Liu et al. Citation2023). Technological transformation is changing the way robotic services are delivered but also creating new sources of failure (Huang and Dootson Citation2022). Yet robotic service failures have received less attention than other types of technology-based service failures, and empirical investigations of evaluations of different robotic service failures are still limited (Lteif and Valenzuela Citation2022) compared to their practical significance (Table 1).

Table 1. A selection of robot failures depicted in the media.

A comprehensive human-robot failure taxonomy classifies failures according to their sources, such as technical vs. interaction failures, software vs. hardware failures, social norm violations, human errors, and environmental factors (Honig and Oron-Gilad Citation2018). Current service robots, in practice, still suffer from unintended failures (Liu et al. Citation2023), as they largely fail due to situational aspects (Murray Citation2022). Hence, there is a significant call for research regarding the factors relating to the contextual nature of robotic service failure (Lteif et al. Citation2023; Harrison-Walker Citation2012; Van Vaerenbergh et al. Citation2014). Contextual factors include human interference as well as environmental conditions (Meyer et al. Citation2023).

Whitby (Citation2008) warned that human mistreatment of robots is likely to be common, especially with more anthropomorphic and intimate interfaces. Human interference in robotic service failures is a major concern for researchers and practitioners, considering the high costs of adopting and maintaining robotic services (Ivanov et al. Citation2022) and the potential risks of interference, sabotage, and violence towards SRs. Social facilitation theory suggests that another social being’s presence can cause anxiety and impair performance (i.e. service failure). For example, Koban, Haggadone, and Banks (Citation2021) demonstrated how a social robot’s co-presence affected human worker performance in their ‘Observant Android’ article, while Ward, Olsen, and Wegner (Citation2013) reported that observing harm to a robot increased the robot’s perceived mind in their ‘harm-made mind’ studies. However, these studies ignored the subsequent attitudes or engagement intentions towards the robot in physical interference situations. Other research has compared robotic and human physical interference (Saleh et al. Citation2023). For example, Swiderska and Küster (Citation2020) examined how a robotic service agent’s intentional harm to a customer influenced the customer’s perception of the SR. Tanibe, Hashimoto, and Karasawa (Citation2017) investigated how interfering with an SR’s task affected observers’ attributions, but the interference aimed to assist the SR rather than to hinder it. The impact of human interference on the robot’s performance and failure is underexplored. Therefore, we study a context where the SR is vulnerable to physical third-party human interference, unlike most studies, which assume the customer is the vulnerable party.

2.2. The role of physical human interference in robotic failures

We focus especially on the role of physical interference, which is one of the most typical moral violations (Gray and Wegner Citation2012). There are various degrees of physical human interference, ranging from mistakes to deliberate violations and social norm inappropriateness (Honig and Oron-Gilad Citation2018). While many previous taxonomies of the human role in technological failure have analysed skills and knowledge (Rasmussen Citation1982) or mistakes and lapses (Reason Citation1990), we focus on an overlooked yet prominent third-party human role in robotic failures: deliberate physical interference. Deliberate interference by humans has previously been conceptualised as intentional, illegitimate violations (e.g. directing a robot to run into a wall, or sabotage) (Honig and Oron-Gilad Citation2018; Reason Citation1990). Intentional harm is judged as more morally wrong than unintended interference, such as accidents (Nichols Citation2005).

Observing other people, for example watching them succeed or fail, has long served as a proxy for forming individuals’ judgements of the observed stimuli (Bandura Citation1977). Within service-oriented contexts, customers have the opportunity to witness interactions between other entities, such as service providers, receivers, or fellow customers. On occasion, a customer may find themselves part of a larger group receiving service simultaneously (Webb and Peck Citation2014) and thus provide judgements and form attitudes as a third-party observer of another social interaction that is taking place. Yet, few studies investigate judgements or behavioural intentions resulting from observing a third party, such as a harmer or a victim demanding help (Park et al. Citation2023).

In Figure 1, we set the scene for the context of service failure in this research.

Figure 1. Setting the scene of the research context.


Previous works have raised future research questions regarding third-party observers’ attitudes, such as what would happen if a customer waiting to receive service observed the customer ahead of them experience physical interference from the service provider or another customer, and whether their subsequent evaluations would differ significantly (Saleh et al. Citation2023). Barfield (Citation2023) conceptualised ‘witness harm,’ emphasising the potential negative consequences that could manifest if a third party witnessed robot mistreatment by other humans. Ward, Olsen, and Wegner (Citation2013) also found it worthwhile to investigate the perspective of outside observers of an interaction involving physical interference, rather than examining harm-doers (humans) or victims (SRs), whose self-interest or self-justification might bias their judgements.

Customers tend to react more negatively to failures of advanced service technologies (i.e. robots) compared to failures involving or caused by human employees; they exhibit speciesism and greater frustration in response to errors made by machines (vs. humans) (Chen et al. Citation2021; Choi, Mattila, and Bolton Citation2021). Kim and Hinds (Citation2006) demonstrated that users tended to blame a robot in instances of malfunction when the robot exhibited a higher degree of autonomy from humans. Indeed, humans show relatively low empathy for robots even in difficult situations, such as a failing service robot, when the failure cannot be attributed to external and environmental cues (Liu et al. Citation2023). We operationalise ‘external environmental interference in robotic service’ as physical human interference in our research.

Transferring these insights to the context of robotic service failure, we postulate that customers will perceive and evaluate a service robot that fails without interference more negatively than a service robot that fails due to a human physically interfering with it.

Hypothesis 1: Attitudes towards the service robot are more negative if the service robot fails without interference (vs. due to physical interference from a human).

Spatola et al. (Citation2023) emphasised the pivotal role individual attitudes towards robots will play in predicting humans’ behaviour towards and acceptance of these novel artificial agents. We expected to see a main effect in the same direction on another key desirable outcome, namely, willingness to engage with the robot (following the service failure), since the previous literature intricately links attributions (regarding blame, failure, success, etc.) to the patronage intentions of observers of a service failure (Wan and Wyer Citation2019). This led to the second hypothesis:

Hypothesis 2: Individuals are less willing to engage with the service robot if the service robot fails without interference (vs. due to physical interference from a human).

2.3. The mediating role of SR’s deservingness

Physical interference from a human and the SR’s failure are both closely related to judgements of deservingness. Deservingness is defined as the perception, formed in observers’ judgements in the aftermath of an outcome, that a target deserves that outcome or treatment (Palmeira, Koo, and Sung Citation2022). Individuals are fundamentally assumed to hold a worldview in which bad (good) things happen to bad (good) people (Callan, Kay, and Dawtry Citation2014). Deservingness has long been invoked as a mechanism to explain people’s responses to good or bad outcomes, making it a very appropriate mechanism for after-failure judgements.

However, despite its practical significance, very few studies have empirically demonstrated effects by measuring deservingness directly (Callan, Kay, and Dawtry Citation2014; Wood et al. Citation2009). Moreover, little research has focused on deservingness judgements of non-human entities (i.e. robots). Yet, extant research on the anthropomorphism and human-likeness of SRs has established them as social entities in modern servicescapes. People view the semblance of sentience in non-human technological entities, like robots and AI, as if it were a form of ‘life’, and this can have negative (discomfort) or positive (trust) consequences (Becker, Mahr, and Odekerken-Schröder Citation2023; Marriott and Pitardi Citation2024). Among important social judgements, the concept of fairness has recently started to gain attention among human-robot interaction (HRI) researchers seeking to better understand the dynamics of human-robot teams and groups (Chang et al. Citation2021). We focus on judgements of deservingness (the extent to which the SR deserves to fail) as an explanatory mechanism.

According to just-world theory (Lerner, Miller, and Holmes Citation1976), people are motivated to comprehend an orderly and predictable world in which both they and others get what they deserve and deserve what they get (Kay et al. Citation2008). When inanimate objects are harmed, individuals attribute to them an enhanced capacity to experience pain (Ward, Olsen, and Wegner Citation2013). Moreover, even when an outcome is completely random and uncontrollable, observers perceive it as matching the deservingness of the target (Callan, Kay, and Dawtry Citation2014). Accordingly, different types of failure (a negative outcome) are expected to lead to differences in deservingness judgements as well.

Leo and Huh (Citation2020) demonstrated that when a service fails, people attribute less blame and responsibility to the robot (vs. humans) because they believe SRs have less control over the service outcomes. Furthermore, Callan, Kay, and Dawtry (Citation2014) demonstrated that mitigating circumstances play an important role in deservingness judgements. In servicescapes, there are many mitigating circumstances that affect human judgements of SRs. When there is physical interference (vs. not) in an SR failure, observers may construe this as a mitigating circumstance (vs. no mitigating circumstance). This, in turn, is expected to be reflected in their judgements of the SR as deserving to fail. Thus, we suggest that SRs that fail without an apparent reason (i.e. no human interference) will be perceived as more deserving of the negative outcome (the service failure), whereas an SR’s deservingness of failure will be lower if a human physically interfered with the robot. It is also expected that the more the SR is perceived as deserving of bad outcomes and failure, the less favourable observers’ attitudes towards the robot will be. Based on these insights, we hypothesise as follows:

Hypothesis 3a: Service robot deservingness mediates the relationship between service robot failure type (with or without human interference) and customers’ attitudes towards the service robot.

Hypothesis 3b: Service robot deservingness mediates the relationship between service robot failure type (with or without human interference) and customers’ willingness to engage with the service robot.

2.4. The moderating role of self-efficacy regarding robots

It is plausible that individuals’ determinations regarding the hierarchical standing of humans vis-à-vis robots could influence their assessments of deservingness. System justification theory (Jost, Banaji, and Nosek Citation2004) posits that marginalised groups ascribe their disadvantaged societal standing to perceived deficiencies in the inherent abilities and other attributes of their own collective, as opposed to attributing it purely to discrimination or happenstance. In a comparable manner, customers perceive the treatment of robots differently depending on their positions (Gretzel and Murphy Citation2019; Siino and Hinds Citation2005). For example, an individual’s emotional state affects their satisfaction with a robotic service (Lajante, Remisch, and Dorofeev Citation2023).

Self-efficacy regarding new technologies is becoming one of the leading individual traits for explaining responses to those technologies. With rapid advancements in technology shaping our daily lives, individuals’ belief in their ability to understand, operate, and adapt to these innovations is becoming increasingly critical. Drawing on social cognitive theory (Bandura Citation1997; Citation2001), individuals’ beliefs about their abilities to use and control robotic technologies influence their evaluation of robots and their interaction with those technologies in many ways (Pütten and Bock Citation2018). For example, lower self-efficacy is associated with a more negative general attitude towards robots, stemming from a lack of belief in one’s capabilities to deal with robots (Pütten and Bock Citation2018). Moreover, the construct’s predictive power demonstrates its explanatory value as a moderating variable in studies of human interaction with robotic technologies. For instance, self-efficacy attenuates the positive relationship between perceived ease of use and attitudes toward technological services (Dabholkar and Bagozzi Citation2002).

Self-efficacy has a positive influence on the intention to use and acceptance of AI and robotic technologies (Vu and Lim Citation2022) as well as determining whether their usage becomes a habit (Wang, Harris, and Patterson Citation2013). Before the introduction of robotic technologies, self-efficacy regarding earlier technologies had been found to increase acceptance and performance. For example, back in the early nineteenth century, in Great Britain, the ‘Luddites’ saw the new machinery in their workplaces as a potential threat to their careers and physically destroyed it. Either based on previous exposure and experiences or based on their visions for the future, individuals vary in how efficacious they perceive themselves to be regarding robotic technologies.

Self-efficacy is a highly domain-specific construct (Bandura Citation2006). For example, previous empirical research has focused on Internet self-efficacy (Marakas, Johnson, and Clay Citation2007), computer self-efficacy (Compeau and Higgins Citation1995), or gamers’ self-efficacy (Sharma et al. Citation2020; Shaw, LaRose, and Wirth Citation2006) amongst others. Self-efficacy regarding robotic technologies has been developed as a useful and unique construct, mostly due to the social presence, manipulation, and perception of abilities of robotic technologies compared to other technologies, such as phones or computers (Pütten and Bock Citation2018).

Self-efficacy is different from the actual efficacy, autonomy, power, or self-esteem of an individual; rather, it is a belief system (Pütten and Bock Citation2018). Bandura (Citation1997) stated that self-efficacy is a judgement of capability rather than a statement of intention and ‘perceived self-efficacy is a major determinant of intention, but the two constructs are conceptually and empirically separable’ (43). It is also different from the relative power relationship between humans and robots, as individuals’ trait power does not show any correspondence to their attitudes towards SRs (Merdin-Uygur and Ozturkcan Citation2023).

Self-efficacy beliefs are of potential value to explain the variances in the perceived deservingness of robots. In human-AI interactions, customers’ engagement and participation level with the SR in the service context affect their attributions following a service failure (i.e. Fan et al. Citation2020). Unlike those high in self-efficacy, those who are low in self-efficacy regarding robotic technologies are less judgemental of easy-to-use robotic technologies (Dabholkar and Bagozzi Citation2002). Mozafari, Weiger, and Hammerschmidt (Citation2022) demonstrated that users who feel in control of a service outcome (i.e. those high in self-efficacy) are more likely to use the SR again in the future. Following this logic, individuals who perceive themselves as efficient users of robotic technologies (high in self-efficacy) would be less judgemental of negative outcomes (i.e. failure) and report less deservingness of the robot for such bad outcomes. Since these people perceive themselves as more knowledgeable in this domain and perceive the use and control of robotic technologies as easier, self-efficacy is expected to have a negative relationship with deservingness judgements. Therefore, in our research, we propose that the level of self-efficacy regarding robots acts as a moderator between SR failure type and SR deservingness.

Hypothesis 4: Self-efficacy regarding robots moderates the relationship between service robot failure type and deservingness such that the effect diminishes for those with low self-efficacy regarding robots.
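Although we do not formalise the model in the text, the first-stage moderated mediation implied by Hypotheses 3 and 4 can be summarised compactly as follows; the notation below is introduced here purely for illustration. Let X denote failure type (0 = no interference, 1 = human interference), M the perceived deservingness of the SR, W self-efficacy regarding robots, and Y the outcome (attitudes or willingness to engage):

\[ M = a_0 + a_1 X + a_2 W + a_3 (X \times W) + \varepsilon_M \]
\[ Y = b_0 + c' X + b_1 M + \varepsilon_Y \]

The conditional indirect effect of X on Y at a given level of W is (a_1 + a_3 W) b_1, and the index of moderated mediation is a_3 b_1; Hypothesis 4 implies a non-zero a_3 such that the indirect effect diminishes at low levels of self-efficacy regarding robots.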

Figure 2 shows our conceptual research framework, hypotheses, and the corresponding studies in which we test those relationships.

Figure 2. The conceptual framework.


3. Methodology

There is a call for more methodologically varied research into customer-robot interaction, such as experimental studies (Granulo, Fuchs, and Puntoni Citation2021; Jörling, Böhm, and Paluch Citation2019; Mende et al. Citation2019). In response to such calls for empirical evidence of HRIs, notable laboratory and/or online experiments have looked at robotic service satisfaction (Lajante, Tojib, and Ho Citation2023), punishment judgements regarding humans vs. robots (Guidi et al. Citation2021), or the role of anthropomorphism (Spaccatini, Corlito, and Sacchi Citation2023). Yet, there is no empirical evidence depicting the effect of human interference in robotic services or how observers form their judgements, attitudes, and intentions following such service failure. To test the proposed relationships properly, this research conducted two online, scenario-based experiments using a between-subject experimental design. We manipulated physical human interference with an SR resulting in two different service failures (failure to serve a dish, failure to lift a box). The visuals in all our studies depicted a humanoid SR, as the marketing and service research literature and most of the commercial applications currently available refer to SRs as programmable humanoid social robots (e.g. ‘Pepper’, [Lajante, Tojib, and Ho Citation2023]). Using visuals depicting a human and a human-like SR as experimental manipulations has been established as a reliable and valid practice (Swiderska and Küster Citation2020). Moreover, depicting physical human interference with robots has been employed in the previous literature. In one of the seminal works, participants read about a social robot being stabbed by the researcher in charge with a scalpel (Ward, Olsen, and Wegner Citation2013). In another design, participants were provided with a visualisation prompt regarding hitting a robot with a bat (Tanibe, Hashimoto, and Karasawa Citation2017).

The participants in both studies were recruited from Prolific UK to complete a survey on Qualtrics. Recent empirical research has demonstrated that, compared to alternative platforms, Prolific provides high data quality on all measures (Peer et al. Citation2021) and is noted as a source of the most reliable data for consumer studies. It also produces a much more diverse participant pool than the alternatives (Peer et al. Citation2017). To ensure ethical data collection, participants were compensated in accordance with the suggested wage at the time of data collection.

3.1. Study 1: main effects

The purpose of Study 1 was to test our first set of hypotheses by investigating whether attitudes towards the SR are more favourable and willingness to engage is higher if the SR fails due to physical interference from a human (vs. not). The context chosen for the robotic service failure was a failure to serve food as a robot waiter. SRs serving in restaurants are proliferating: California-based Bear Robotics expects to have 10,000 robots deployed, and China-based Pudu Robotics has already deployed more than 56,000 robots worldwide (CBS Citation2023). We manipulated failure with (vs. without) human physical interference using short vignettes accompanied by a visual (see Lajante, Tojib, and Ho (Citation2023) for a similar manipulation).

3.1.1. Participants

Two hundred and eight adults (101 women; 2 preferred not to disclose) recruited via Prolific took part in the current study in exchange for a small payment. Participants were 41.35 years old (SD = 12.61) on average. Most of them reported a medium income (52.4%) and an average interest in robotics (M = 3.59 on a 7-point scale, SD = 1.72).

3.1.2. Procedure

All participants first read the introduction, which told them that, in the study, they would be asked about their opinions regarding service robots. Then, based on our hypothesised effects, participants were randomly assigned to one of the two conditions:

  • no human interference in the robotic service failure.

  • human interference in the robotic service failure.

We manipulated SR failure as failing to serve the dishes in a restaurant setting. The vignettes in both conditions were accompanied by the same robot waiter visual (Figure 3); this was created via an image generation algorithm that was prompted to draw a realistic, detailed, cinematic image of a humanoid robot waiter in a restaurant dropping a plate of food.

Robot waiters are frequently employed by cafes and restaurants. Imagine a scene where a robot waiter is serving dishes to tables at a café. As the robot approaches a table,

A PERSON THERE HITS THE ROBOT WAITER’S LEG and causes the robot waiter to fall down and break the dishes.

Please pay attention and answer the following questions considering this scenario.

vs.

Robot waiters are frequently employed by cafes and restaurants. Imagine a scene where a robot waiter is serving dishes to tables at a café. As the robot approaches a table,

The robot waiter FALLS DOWN and breaks the dishes.

Please pay attention and answer the following questions considering this scenario.

Before collecting the data for the main study, the vignette manipulation instrument was checked through a pre-test to ensure that human interference (vs. no interference) was being manipulated effectively. As expected, the participants reported higher physical interference with the SR in the human interference condition compared to the no interference condition (M_interference = 5.66, M_no interference = 3.34, t(98) = 7.103; p < 0.001), and only one person failed to mark the correct option between Option 1 (‘A human interfered with the robot’) and Option 2 (‘Nobody/nothing interfered with the robot’).

Figure 3. AI-generated image of a humanoid robot waiter and a plate of food, as used in Study 1. Source: (IdeogramAI Citation2024).


3.1.3. Measures

In the main study, after seeing the manipulation material, the participants were first required to rate the material on two items (‘I think there are situations like this in real life’ and ‘The scenario is believable’). Then, participants’ attitudes towards the SR and their willingness to engage with the robot were assessed. Concluding items included the participant’s interest in robotics, gender, income, and age (see Table 2 for a summary of all measures).

Table 2. Summary of measures used in the studies (in alphabetical order of constructs).

3.1.4. Results

3.1.4.1. Manipulation checks

We first checked whether the service failure manipulations were equally believable and perceived as equally plausible real-life encounters. As expected, the two conditions did not differ in terms of believability (M_interference = 4.57, M_no interference = 4.78, t(206) = −0.969; p = 0.334) or of being real-life examples (M_interference = 4.26, M_no interference = 4.37, t(206) = −0.513; p = 0.609).

3.1.4.2. Hypotheses testing

Independent samples t-test results revealed that participants reported more negative attitudes towards the SR (M_interference = 4.28, M_no interference = 3.39, t(206) = 4.553; p < 0.001) and less willingness to engage with the SR (M_interference = 4.79, M_no interference = 3.99, t(206) = 3.017; p = 0.003)Footnote1 when the service failure occurred with no interference (vs. due to human interference) (Figure 4).
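For concreteness, the comparisons above are standard independent-samples t-tests on the two between-subjects conditions. The following minimal Python sketch illustrates the same analysis; the data file name and the column names (condition, attitude, engagement) are hypothetical and not part of the original materials.

import pandas as pd
from scipy import stats

# Hypothetical layout: one row per participant, with the assigned condition and
# the averaged scale score for each dependent variable.
df = pd.read_csv("study1.csv")
interference = df[df["condition"] == "interference"]
no_interference = df[df["condition"] == "no_interference"]

for dv in ["attitude", "engagement"]:
    t, p = stats.ttest_ind(interference[dv], no_interference[dv])
    print(f"{dv}: M_interference = {interference[dv].mean():.2f}, "
          f"M_no_interference = {no_interference[dv].mean():.2f}, "
          f"t({len(df) - 2}) = {t:.3f}, p = {p:.4f}")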

Figure 4. Summary of Study 1’s results.


3.1.5. Discussion

In Study 1, we supported our main effect hypotheses, demonstrating that participants held less favourable attitudes towards a failed SR without (vs. with) physical human interference. Taking it one step further, participants were also more willing to engage with the SR if physical human interference played a role in the service failure (vs. not). These findings establish the impact of observing another human interfering with the SR’s job, such as hitting it physically, on how observers form their opinions about the SR. Greater willingness to engage with an SR following a service failure is a key outcome desired by service providers.

3.2. Study 2: moderated mediation model

To demonstrate the mechanisms behind the effect of human interference on outcomes in more detail, we formulated a second experimental study. The purpose of Study 2 was to test the remaining set of hypotheses, arguing that the relationships between failure type and consumer outcomes (attitudes and willingness to engage) are explained by individuals’ judgements regarding the extent to which the SR deserves to fail. Another aim was to test whether deservingness judgements are affected by how self-efficacious the individual feels regarding robots in general. In Study 2, we manipulated robotic failure with vs. without human interference using animated depictions of a humanoid SR failing to hold and lift a box. Two versions of a 5-second GIF demonstrating a robot failing to lift a box were prepared as our stimuli (Figures 5 and 6). Therefore, in addition to replicating Study 1 with an alternative scenario and context, we added a battery of questions measuring participants’ SR deservingness judgements and self-efficacy regarding robots.

Figure 5. Screenshot of the human intervention condition clip.


Figure 6. Screenshot of the no intervention condition clip.


3.2.1. Participants

Two hundred and two adults (100 women; 3 preferred not to disclose) recruited via Prolific took part in the current study in exchange for a small payment. Participants were 41.18 years old (SD = 12.06) on average. Most of them reported a medium income (48%).

3.2.2. Procedure

All participants first read the introduction, which stated that, in the study, they would be asked about their opinions about service robots. The participants were assigned randomly to one of the two conditions of failure with or without human interference.

Now, please carefully examine the following short video of a service robot to evaluate it in the following questions.

YOU HAVE TO PLAY THE WHOLE VIDEO – FROM THE BEGINNING TO THE END – SINCE THE FOLLOWING QUESTIONS WILL BE ABOUT THIS CONTENT.

Both conditions were identical in terms of the robot’s physical appearance, the box it tries to carry, contextual details such as the background, and the size and length of the clip. In order to control for any confounding effects, the brand name was not shown in these clips. We also measured the time spent on this question in a disguised manner to make sure the participants did not skip over it rapidly.

3.2.3. Measures

Participants first responded to manipulation check questions regarding the extent of human interference with the robot and the credibility of the material. Their attitudes towards the SR, willingness to engage with the robot, and demographics were measured as in Study 1. In addition, the deservingness of the SR was assessed by a four-item scale (sample item: the robot deserves the bad that happens to it), and the participants’ self-efficacy regarding robots was assessed by a four-item scale (sample item: I have enough skills to use a robot) (see Table 2 for a summary of all measures).

3.2.4. Results

3.2.4.1. Manipulation checks

As expected, the two service failure conditions did not differ in terms of credibility (M_interference = 4.34, M_no interference = 4.12, t(200) = 1.272; p = 0.205). Participants reported higher physical interference with the SR in the interference condition vs. the no interference condition (M_interference = 6.55, M_no interference = 1.44, t(200) = 33.084; p < 0.001). We also checked the amount of time participants spent watching the video manipulations, which did not differ between conditions (M_interference = 31.36, M_no interference = 38.72, t(200) = −0.524; p = 0.601).

3.2.4.2. Hypotheses testing

Independent samples t-test results revealed that Study 2 replicated the main effect of human interference on attitudes (M_interference = 4.92, M_no interference = 2.80, t(200) = 12.336; p < 0.001) as well as on willingness to engage scores (M_interference = 5.31, M_no interference = 3.22, t(200) = 8.524; p < 0.001)Footnote2 (Figure 7).

Figure 7. Summary of Study 2’s results.


Mediation was assessed with the bootstrapping method (Preacher and Hayes Citation2008), using Hayes’s PROCESS macro with a 95% confidence interval (CI) and 10,000 iterations. The mediation model indicates that the effect of physical human interference on attitudes is mediated by perceptions of deservingness (B = −.2417, SE = .1300, 95% CI [−.5186, −.0056]),Footnote3 qualified by a main effect of failure type (M_interference = 2.33, M_no interference = 2.21, t(200) = −6.593; p < 0.001). The moderated mediation model using SPSS’s PROCESS Macro Model 7 was significant (B = −.1244, SE = .0665, 95% CI [−.2711, −.0080]),Footnote4 qualified by a significant effect of the interaction of self-efficacy and interference type on the deservingness of the SR (B = .2489, SE = .1111, 95% CI [.0298, .4680]). Detailed floodlight analysis revealed that among those with higher self-efficacy regarding robots (2.65 and above on the 7-point scale), the SR failing due to human interference was perceived as less deserving of failure than the SR failing without any interference (Figure 8).
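The estimates above come from SPSS’s PROCESS macro (Model 7) with percentile bootstrapping. Purely as an illustrative sketch of the same logic, rather than a reproduction of the reported analysis, the first-stage moderated mediation and a bootstrap of the index of moderated mediation could be approximated in Python as follows; the data file and variable names (condition, self_eff, deserve, attitude) are hypothetical, with condition coded 0 = no interference and 1 = human interference.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study2.csv")  # hypothetical per-participant data

def index_of_moderated_mediation(data):
    # First stage: deservingness regressed on condition, self-efficacy, and their interaction.
    m_fit = smf.ols("deserve ~ condition * self_eff", data=data).fit()
    # Second stage: outcome regressed on deservingness, controlling for condition.
    y_fit = smf.ols("attitude ~ condition + deserve", data=data).fit()
    a3 = m_fit.params["condition:self_eff"]  # moderation of the condition -> deservingness path
    b1 = y_fit.params["deserve"]             # deservingness -> attitude path
    return a3 * b1

rng = np.random.default_rng(2024)
boot = np.array([
    index_of_moderated_mediation(df.sample(frac=1, replace=True, random_state=rng))
    for _ in range(10_000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"Index of moderated mediation = {index_of_moderated_mediation(df):.4f}, "
      f"95% bootstrap CI [{ci_low:.4f}, {ci_high:.4f}]")

A bootstrap confidence interval excluding zero corresponds to the significant moderated mediation reported above; probing the conditional indirect effect (a1 + a3·W)·b1 at successive values of self-efficacy follows the same logic as the floodlight analysis.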

Figure 8. A visual depiction of the interaction between SR failure type and self-efficacy regarding robots.


Next, we ran the moderated mediation models on individuals’ willingness to engage with the SR. The effect of physical human interference on willingness to engage was also mediated by perceptions of deservingness (B = −.5205, SE = .2014, 95% CI [−.9394, −.1537])Footnote5 and the moderated mediation model was significant (B = −.1790, SE = .1024, 95% CI [−.4045, −.0060])Footnote6 (Figure 9).

Figure 9. Tested research model.


3.2.5. Discussion

The results replicated the main effect of robotic service failure with (vs. without) human interference on customer attitudes and willingness to engage with the SR using a distinct manipulation of service failure (i.e. box lift failure). Furthermore, we successfully adapted deservingness in interpersonal relationships into human-robot relationships by demonstrating that differences in the robot’s deservingness of bad outcomes explain the differences in consumer attitudes and willingness to engage further with the SR.

We also demonstrated that individuals who are low in self-efficacy regarding robots judge SRs as deserving of failure and of any poor outcomes that may happen, regardless of whether the robot was interfered with. Our analysis revealed that only those with high self-efficacy regarding SRs perceived the SR that fails due to human interference (vs. no interference) as less deserving of failure.

4. General Discussion

4.1. Theoretical implications

Although failure is a common problem in services involving HRI, research investigating people’s perceptions, reactions, and attitudes toward various types of failure, particularly failures involving human interference, is scarce (Garza Citation2018). An extant review of robotic service failures by Liu et al. (Citation2023) concluded that service failures are inevitable, thus showing an urgent need to explore the undesirable outcomes of robot service failure systematically.

The present study makes several theoretical contributions. First, we extend the research on the internal vs. external roots of SR failure. Previous research demonstrated that individuals are more likely to attribute responsibility externally in service failure but not in service success (Mozafari, Weiger, and Hammerschmidt Citation2022). When a technology malfunctions during service delivery, customers anticipate the further involvement of a human employee (De Keyser et al. Citation2015). We introduce a novel aspect to the study of service failure’s external roots by considering the presence of physically interfering individuals. In addition, previous research focused on the role of humans as the solution to service failure but not as the co-actor or inducer of it. Thus, in accordance with the harm-made mind theory proposed by Ward, Olsen, and Wegner (Citation2013), our attention is directed towards deliberate physical human interference as a primary factor contributing to service failure. We empirically tested physical human intervention, in contrast with larger conceptual models of robotic service failure taxonomies, which mention deliberate human violations only as part of a much longer list of causes (Honig and Oron-Gilad Citation2018).

Our results also shed further light on contradictory findings regarding post-failure attitudes towards service robots. While a major body of empirical work supports the expectation that SRs will be judged harshly in the case of a failure (e.g. Lee et al. Citation2010), SRs have also been found to be judged less harshly than human service providers when a failure occurs (Leo and Huh Citation2020). Interestingly, there are contradictory empirical findings demonstrating that people like a faulty robot significantly more than a flawless one (Mirnig et al. Citation2017), replicating the Pratfall Effect in robots (Aronson, Willerman, and Floyd Citation1966). Taken together, our results show that people judge SR failures due to human interference less harshly than failures with no human intervention. This nuanced understanding is crucial for the continued development and acceptance of SRs in various domains, fostering a more comprehensive understanding of user reactions in real-world scenarios.

Our study contributes to the much-needed but currently underexamined theory development regarding the ethics and rights of SRs. Practitioners as well as academics expect legal personhood to be accorded to SRs soon (Casella and Croucher Citation2011). Going a step further, some advocates argue for humanoid robots to be endowed with rights comparable to those afforded to companion animals (Kelley et al. Citation2010). Highlighting the fact that robots are becoming part of the human experience, used in a variety of roles and services, even sex and intimacy, Belk (Citation2018) claimed it has become imperative to research and address these issues, with important implications for public policy and applications. Our findings add to this discussion by showing how observer customers’ attitudes and willingness to engage with SRs are influenced by the presence or absence of physical human interference in service robot failures. We demonstrate that observer customers’ perceptions of the SR’s deservingness of its failure are affected by human interference, and that this effect is moderated by the observer customers’ self-efficacy regarding robots. We suggest that these results have ethical implications for the design, regulation, and use of SRs, as they reveal the potential biases and prejudices that humans may have towards SRs, and the need to foster a more positive and respectful human-robot relationship.

Our findings contribute to the literature on customer forgiveness of SR failures by examining how the presence or absence of physical human interference in service robot failures influences observer customers’ attitudes and willingness to engage. Previous studies have suggested that forgiveness is a complex and multidimensional process that involves both cognitive and emotional aspects (Choi, Mattila, and Bolton Citation2021) and have examined ways to minimise failure and gracefully mitigate its effects on customers so as to sustain satisfaction and prevent them from abandoning a robotic service (Ho, Tojib, and Tsarenko Citation2020; Lee et al. Citation2010). While Cheng (Citation2023) investigated the influence of anthropomorphism in service failure, little is known about how forgiveness operates in the presence of deliberate third-party physical human interference with SRs. Our results pave the way for further research exploring other factors that may influence forgiveness in human-service robot interactions, such as the type and severity of the failure induced by the interfering third-party human, the relationship quality between the observing customer and the SR, and the SR’s response, apology, and repair behaviours towards both the third-party human and the observing customer.

Finally, our findings show that the level of self-efficacy exhibited by the individual customer plays a role in the way they form attitudes and behavioural intentions following a robotic service failure. Previous research has shown that consumers’ level of self-efficacy regarding certain technologies greatly influences their responses to these technologies’ failures, such as blame attribution and dissatisfaction levels (Fan et al. Citation2020), due to their perception of possessing superior knowledge and control. However, past research focused on the direct effects of self-efficacy on customer responses, without considering the potential moderating effects of other factors, such as the type and cause of the failure. Our study extends this literature by demonstrating that self-efficacy interacts with the presence or absence of human interference in service robot failures, such that the effect of human interference on the perceived deservingness of the robot is stronger for those with high self-efficacy than for those with low self-efficacy. This finding suggests that the influence of self-efficacy is not static or uniform but dynamic and context-dependent, shaping robot (failure) perceptions depending on the situation. It also reveals an important boundary condition of the self-efficacy effect in response to technological service failures, as it indicates that the negative impact of human interference on the perceived deservingness of the robot is attenuated for those with low self-efficacy, who may be more forgiving or empathetic towards the robot.

4.2. Practical implications

Many service firms employing robots on service frontlines are overall confident that innovation automatically drives service efficiency and customer satisfaction. Yet, frequently occurring service failures represent a great challenge to achieving SR acceptance (Mozafari, Weiger, and Hammerschmidt Citation2022).

Among the situations leading to failure, practical evidence, as well as news and opinion pieces, suggests that people are likely to continue interfering with the robots (Bromwich Citation2019). Many SRs are physical and tangible in nature, performing multitudes of service encounters face-to-face. The unique physical embodiment of robots facilitates the potential for physical interactions between machines and humans (Hoffmann and Krämer Citation2021). By demonstrating how human interference in robotic service failure affects attitudes and future engagement willingness, we aim to assist robotic service providers to maintain safety, ensure a positive user experience, optimise the service, and guarantee long-term use (Klüber and Onnasch Citation2023) of their services.

Our findings have several implications for managers and practitioners who design, deploy, and operate SRs in various service contexts (Table 3). First, our findings suggest that human interference in robotic service failure can reduce observer customers’ negative evaluations of the robot, as they may perceive the robot as less responsible and more deserving of forgiveness. This implies that managers and practitioners should not ignore or conceal the human interference factor when communicating with customers about the robot failure, but rather use it as an opportunity to explain the cause of and solution to the failure, and to elicit sympathy and empathy for the robot. For example, managers and practitioners could design user-friendly error messages that acknowledge human interference, express the robot’s regret and apology, and even request the customer’s cooperation and assistance (Table 3). Such interface messages could enhance customers’ understanding and cooperation and reduce their frustration and dissatisfaction upon SR failure due to third-party human interference.

Table 3. Some practical suggestions.

Second, our findings show that the level of self-efficacy exhibited by the individual customer plays a role in the way they form attitudes and behavioural intentions following a robotic service failure. Specifically, we found that the effect of human interference on the perceived deservingness of the robot is stronger for those with high self-efficacy than for those with low self-efficacy. This implies that managers and practitioners should be aware of the different needs and preferences of customers with different levels of self-efficacy regarding robots, and tailor their HRI strategies accordingly.

Third, our findings highlight the importance of marketing SRs together with providing clear and accessible information about the benefits of SRs, the role of human intervention in occasions of failure, and the reliability of these machines. This implies that managers and practitioners should not only promote the positive features and advantages of SRs but also educate and inform customers about the potential challenges and difficulties that SRs may face, and how these can be prevented or resolved. Advertisements offer a venue through which customers can observe physical interventions, yet many public showcases are remembered and discussed for the SR failures they feature, such as a robot falling off the stage at a press conference (Kelion Citation2018). Instead, managers and practitioners could use advertisements, brochures, videos, or websites to showcase SRs’ capabilities and performance, as well as to demonstrate the scenarios and consequences of human interference in robotic service failure and the ways to avoid or cope with them (Table 3). Raising customers’ awareness and understanding of the human interference issue can influence their attitudes and behaviours towards SRs.

Next, we discuss some limitations of the present research together with recommendations for future research.

5. Limitations and suggestions for future research

This study demonstrates the differences in key customer outcomes following a robotic service failure and shows the importance of physical interference in causing the failure. However, in real life, third-party human interventions with SRs may also be intangible and unobserved, unlike physical interference. For example, SRs also fail due to design mistakes, processing failures in timing and ordering, or environmental factors in their working environment (Honig and Oron-Gilad Citation2018). Thus, future research may extend the literature by focusing on any one of those interventions, for example, whether there would be a difference in deservingness judgements if the SR fell due to an accidental encounter with a human vs. a deliberate one.

While some robot malfunctions are detectable by immediate changes in the robot’s behaviour or physique (e.g. falling down) (Kwon, Huang, and Dragan Citation2018; Takayama and Dooley Citation2011), some other failures may have no obvious symptoms. An algorithm that is learning in the wrong direction and misinterpreting the service outcomes may result in overarching problems in human-SR interactions after a long period of time. Hence, while in this study, we focus rather on easily observable and immediate failures, such as dropping the box or the plate, future research may investigate long-term interference by humans, such as malicious coding or sabotage.

Future research could also attempt longitudinal designs to assess how customer-robot relationships develop over repeated interactions (Gutek et al. Citation1999), such as in a series of service failures and recoveries. Rather than a cross-sectional investigation, such a methodology would increase the real-life implications and external validity of the results.

Regarding the methodology, we opted to conduct online experiments using visual manipulations of SR failure to test our proposed hypotheses. We carefully chose our scenarios as a result of many pre-tests, manipulation checks, and an extensive review of not only the robotic services literature but also the service failure literature and the social psychological literature on violence and harm. Still, Lajante, Tojib, and Ho (Citation2023) noted that the majority of robotic service research leans heavily on surveys and hypothetical situations in which customers do not engage, resulting in customers’ reported attitudes and intentions being primarily linked to their beliefs, whereas real interactions with SRs may yield different outcomes. Future research may seek to verify our findings using laboratory experiments combined with online experiments (e.g. Grundke Citation2023), noting the risk of low sample sizes in laboratory-based robotic service failure experiments (Garza Citation2018). Using virtual reality (VR) technology to study HRI (e.g. Dang and Liu Citation2023; Klüber and Onnasch Citation2023) would also be a methodological contribution to the field.

Our remaining recommendations, at the micro, meso, and macro levels, are summarised in Figure 10 as an agenda for future researchers.

Figure 10. An agenda for future research.


Acknowledgement

We would like to thank the associate editor and the anonymous reviewers for taking the time and effort necessary to review the manuscript. We sincerely appreciate all valuable comments and suggestions, which helped us to improve the quality of the manuscript.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 The direction and significance of the effect persist in subsequent MANOVA analyses accounting for demographics and involvement as control variables. Differences in attitudes: p < 0.001, Partial eta-squared (η²p) =  .087, Observed power = .992; Differences in willingness to engage: p = 0.003, Partial eta-squared (η²p) =  .043, Observed power = .850.

2 The significance of the results persists in subsequent MANOVA analyses accounting for demographics (gender, age, income) as control variables. Differences in attitudes: F(1,197) = 145.379, p < 0.001, Partial eta-squared (η²p) = .425, Observed power = 1.000; Differences in willingness to engage: F(1,197) = 69.684, p < 0.001, Partial eta-squared (η²p) = .261, Observed power = 1.000.

3 The significance of the mediation model persists accounting for demographics (gender, age, income) as control variables (B = −.4350, SE = .1132, 95% CI [−.6754, −.2340]).

4 The significance of the moderation mediation model persists accounting for demographics (gender, age, income) as control variables (B = −.1200, SE = .0651, 95% CI [−0.2598, −0.0042]).

5 The significance of the mediation model persists accounting for demographics (gender, age, income) as control variables (B = −.6192, SE = .1565, 95% CI [−.9522, −.3405]).

6 The significance of the moderation mediation model persists accounting for demographics (gender, age, income) as control variables (B = −.1708, SE = .0981, 95% CI [−0.3895, −0.0073]).

References

  • Aronson, Elliot, Ben Willerman, and Joanne Floyd. 1966. “The Effect of a Pratfall on Increasing Interpersonal Attractiveness.” Psychonomic Science 4 (6): 227–228. https://doi.org/10.3758/BF03342263.
  • Bandura, Albert. 1977. “Self-Efficacy: Toward a Unifying Theory of Behavioral Change.” Psychological Review 84 (2): 191–215. https://doi.org/10.1037/0033-295X.84.2.191.
  • Bandura, Albert. 1997. Self-Efficacy: The Exercise of Control. New York: Worth Publishers.
  • Bandura, Albert. 2001. “Social Cognitive Theory: An Agentic Perspective.” Annual Review of Psychology 52 (1): 1–26. https://doi.org/10.1146/annurev.psych.52.1.1.
  • Bandura, Albert. 2006. “Guide for Constructing Self-Efficacy Scales.” In Self-efficacy Beliefs of Adolescents, edited by Frank Pajares and Timothy C. Urdan, 307–337. Greenwich, CT: Information Age Publishing.
  • Barfield, Jessica K. 2023. “Discrimination Against Robots: Discussing the Ethics of Social Interactions and who is Harmed.” Paladyn, Journal of Behavioral Robotics 14 (1): 20220113. https://doi.org/10.1515/pjbr-2022-0113.
  • Becker, Marc, Dominik Mahr, and Gaby Odekerken-Schröder. 2023. “Customer Comfort During Service Robot Interactions.” Service Business 17 (1): 137–165. https://doi.org/10.1007/s11628-022-00499-4.
  • Belk, Russell. 2018. “Ownership: The Extended Self and the Extended Object.” In Psychological Ownership and Consumer Behavior, edited by J. Peck and S. Shu, 53–67. Cham: Springer. https://doi.org/10.1007/978-3-319-77158-8_4.
  • Bromwich, J. E. 2019. “Why Do We Hurt Robots?” The New York Times. https://www.nytimes.com/2019/01/19/style/why-do-people-hurt-robots.html.
  • Callan, Mitchell J., Aaron C. Kay, and Rael J. Dawtry. 2014. “Making Sense of Misfortune: Deservingness, Self-Esteem, and Patterns of Self-Defeat.” Journal of Personality and Social Psychology 107 (1): 142–162. https://doi.org/10.1037/a0036640.
  • Casella, Eleanor, and Karina Croucher. 2011. “Beyond Human: The Materiality of Personhood.” Feminist Theory 12 (2): 209–217. https://doi.org/10.1177/1464700111404264.
  • CBS. 2023. “Are Robot Waiters the Wave of the Future? Some Restaurants Say Yes.” CBS News. https://www.cbsnews.com/news/robot-waiters-restaurants-future/.
  • Chang, Mai Lee, Greg Trafton, J. Malcolm McCurry, and Andrea Lockerd Thomaz. 2021. “Unfair! Perceptions of Fairness in Human-Robot Teams.” 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN).
  • Chen, Nuoya, Smaraki Mohanty, Jinfeng Jiao, and Xiucheng Fan. 2021. “To Err is Human: Tolerate Humans Instead of Machines in Service Failure.” Journal of Retailing and Consumer Services 59: 102363. https://doi.org/10.1016/j.jretconser.2020.102363.
  • Cheng, Li-Keng. 2023. “Effects of Service Robots’ Anthropomorphism on Consumers’ Attribution Toward and Forgiveness of Service Failure.” Journal of Consumer Behaviour 22 (1): 67–81. https://doi.org/10.1002/cb.2112.
  • Choi, Sungwoo, Anna S. Mattila, and Lisa E. Bolton. 2021. “To Err is Human(-Oid): How Do Consumers React to Robot Service Failure and Recovery?” Journal of Service Research 24 (3): 354–371. https://doi.org/10.1177/1094670520978798.
  • Compeau, Deborah R., and Christopher A. Higgins. 1995. “Computer Self-Efficacy: Development of a Measure and Initial Test.” MIS Quarterly 19 (2): 189. https://doi.org/10.2307/249688.
  • Dabholkar, Pratibha A., and Richard P. Bagozzi. 2002. “An Attitudinal Model of Technology-Based Self-Service: Moderating Effects of Consumer Traits and Situational Factors.” Journal of the Academy of Marketing Science 30 (3): 184–201. https://doi.org/10.1177/0092070302303001.
  • Dang, Jianning, and Li Liu. 2023. “How External Monitoring Can Mitigate Cyberloafing: Understanding the Mediating and Moderating Roles of Employees’ Self-Control.” Behaviour & Information Technology ahead-of-print (ahead-of-print): 1–15. https://doi.org/10.1080/0144929X.2023.2249110.
  • De Keyser, Arne, Katherine Lemon, Phil Klaus, and Timothy Keiningham. 2015. “A Framework for Understanding and Managing the Customer Experience.”
  • Emir, Ebru. 2022. “Evaluation of Laban Effort Features Based on the Social Attributes and Personality of Domestic Service Robots.” Master of Applied Science, University of Waterloo.
  • Fan, Alei, Luorong Wu, Li Miao, and Anna S. Mattila. 2020. “When Does Technology Anthropomorphism Help Alleviate Customer Dissatisfaction After a Service Failure? – The Moderating Role of Consumer Technology Self-Efficacy and Interdependent Self-Construal.” Journal of Hospitality Marketing & Management 29 (3): 269–290. https://doi.org/10.1080/19368623.2019.1639095.
  • Frey, Carl Benedikt, and Michael A. Osborne. 2017. “The Future of Employment: How Susceptible are Jobs to Computerisation?” Technological Forecasting and Social Change 114: 254–280. https://doi.org/10.1016/j.techfore.2016.08.019.
  • Garza, Cecilia Gabriela Morales. 2018. “Failure is an Option: How the Severity of Robot Errors Affects Human-Robot Interaction.” Masters of Science in Robotics, School of Computer Science, Carnegie Mellon University.
  • Granulo, Armin, Christoph Fuchs, and Stefano Puntoni. 2021. “Preference for Human (vs. Robotic) Labor is Stronger in Symbolic Consumption Contexts.” Journal of Consumer Psychology 31 (1): 72–80. https://doi.org/10.1002/jcpy.1181.
  • Gray, Kurt, and Daniel M. Wegner. 2012. “Feeling Robots and Human Zombies: Mind Perception and the Uncanny Valley.” Cognition 125 (1): 125–130. https://doi.org/10.1016/j.cognition.2012.06.007.
  • Gretzel, Ulrike, and Jamie Murphy. 2019. “Making Sense of Robots – Consumer Discourse on Robots in Tourism and Hospitality Service Settings.” In Robots, Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality, edited by Stanislav Ivanov, and Craig Webster, 93–104. London: Emerald Publishing Limited.
  • Grundke, Andrea. 2023. “If Machines Outperform Humans: Status Threat Evoked by and Willingness to Interact with Sophisticated Machines in a Work-Related Context.” Behaviour & Information Technology ahead-of-print (ahead-of-print): 1–17. https://doi.org/10.1080/0144929X.2023.2210688.
  • Guidi, Stefano, Enrica Marchigiani, Sergio Roncato, and Oronzo Parlangeli. 2021. “Human Beings and Robots: Are There any Differences in the Attribution of Punishments for the Same Crimes?” European conference on cognitive ergonomics 2021.
  • Gutek, Barbara A., Anita D. Bhappu, Matthew A. Liao-Troth, and Bennett Cherry. 1999. “Distinguishing Between Service Relationships and Encounters.” Journal of Applied Psychology 84 (2): 218–233. https://doi.org/10.1037/0021-9010.84.2.218.
  • Harris, Jamie, and Jacy Reese Anthis. 2021. “The Moral Consideration of Artificial Entities: A Literature Review.” Science and Engineering Ethics 27 (4): 53. https://doi.org/10.1007/s11948-021-00331-8.
  • Harrison-Walker, L. Jean. 2012. “The Role of Cause and Affect in Service Failure.” Journal of Services Marketing 26 (2): 115–123. https://doi.org/10.1108/08876041211215275.
  • Hesapci, Ozlem, Ezgi Merdin, and Sahika Gorgulu. 2016. “Your Ethnic Model Speaks to the Culturally Connected: Differential Effects of Model Ethnicity in Advertisements and the Role of Cultural Self-Construal.” Journal of Consumer Behaviour 15 (2): 175–185. https://doi.org/10.1002/cb.1562.
  • Ho, Ting Hin, Dewi Tojib, and Yelena Tsarenko. 2020. “Human Staff vs. Service Robot vs. Fellow Customer: Does it Matter who Helps Your Customer Following a Service Failure Incident?” International Journal of Hospitality Management 87: 102501. https://doi.org/10.1016/j.ijhm.2020.102501.
  • Hoffmann, Laura, and Nicole C. Krämer. 2021. “The Persuasive Power of Robot Touch. Behavioral and Evaluative Consequences of Non-Functional Touch from a Robot.” PLoS One 16 (5): e0249554. https://doi.org/10.1371/journal.pone.0249554.
  • Honig, Shanee, and Tal Oron-Gilad. 2018. “Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development.” Frontiers in Psychology 9: 861. https://doi.org/10.3389/fpsyg.2018.00861.
  • Huang, Yu-Shan, and Paula Dootson. 2022. “Chatbots and Service Failure: When Does it Lead to Customer Aggression.” Journal of Retailing and Consumer Services 68: 103044. https://doi.org/10.1016/j.jretconser.2022.103044.
  • IdeogramAI. 2024. IdeogramAI. https://ideogram.ai/.
  • Ivanov, Stanislav, Craig Webster, Elitza Stoilova, and Daniel Slobodskoy. 2022. “Biosecurity, Crisis Management, Automation Technologies and Economic Performance of Travel, Tourism and Hospitality Companies – A Conceptual Framework.” Tourism Economics 28 (1): 3–26. https://doi.org/10.1177/1354816620946541.
  • Jörling, Moritz, Robert Böhm, and Stefanie Paluch. 2019. “Service Robots: Drivers of Perceived Responsibility for Service Outcomes.” Journal of Service Research 22 (4): 404–420. https://doi.org/10.1177/1094670519842334.
  • Jost, John T., Mahzarin R. Banaji, and Brian A. Nosek. 2004. “A Decade of System Justification Theory: Accumulated Evidence of Conscious and Unconscious Bolstering of the Status Quo.” Political Psychology 25 (6): 881–919. https://doi.org/10.1111/j.1467-9221.2004.00402.x.
  • Kalogianni, Alexander. 2015. “Toyota Jumpstarts Robotic Elderly Care with the HSR Robot Prototype,” July 16. Accessed March 12, 2024. https://www.digitaltrends.com/cars/toyota-develops-human-support-robot-for-elder-care/.
  • Kay, Aaron C., Danielle Gaucher, Jamie L. Napier, Mitchell J. Callan, and Kristin Laurin. 2008. “God and the Government: Testing a Compensatory Control Mechanism for the Support of External Systems.” Journal of Personality and Social Psychology 95 (1): 18–35. https://doi.org/10.1037/0022-3514.95.1.18.
  • Kelion, Leo. 2018. “CES 2018: LG Robot Cloi Repeatedly Fails on Stage at its Unveil.” BBC. https://www.bbc.com/news/technology-42614281.
  • Kelley, Richard, Enrique Schaerer, Micaela Gomez, and Monica Nicolescu. 2010. “Liability in Robotics: An International Perspective on Robots as Animals.” Advanced Robotics 24 (13): 1861–1871. https://doi.org/10.1163/016918610X527194.
  • Kim, Taemie, and Pamela Hinds. 2006. “Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human-Robot Interaction.” ROMAN 2006 – The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, September 6–8.
  • Klüber, Kim, and Linda Onnasch. 2023. “Keep Your Distance! Assessing Proxemics to Virtual Robots by Caregivers.” Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction.
  • Koban, Kevin, Brad A. Haggadone, and Jaime Banks. 2021. “The Observant Android: Limited Social Facilitation and Inhibition from a Copresent Social Robot.” Technology, Mind, and Behavior 2 (3). https://doi.org/10.1037/tmb0000049.
  • Krumins, Aaron. 2015. “Artificial Intelligence: Coming to a Sexbot Near You.” https://www.extremetech.com/uncategorized/208181-artificial-intelligence-coming-to-a-sexbot-near-you.
  • Küster, Dennis, and Aleksandra Swiderska. 2021. “Seeing the Mind of Robots: Harm Augments Mind Perception But Benevolent Intentions Reduce Dehumanisation of Artificial Entities in Visual Vignettes.” International Journal of Psychology 56 (3): 454–465. https://doi.org/10.1002/ijop.12715.
  • Kwak, Sonya S., Yunkyung Kim, Eunho Kim, Christine Shin, and Kwangsu Cho. 2013. “What Makes People Empathize with an Emotional Robot?: The Impact of Agency and Physical Embodiment on Human Empathy for a Robot.” 2013 IEEE RO-MAN.
  • Kwon, Minae, Sandy H. Huang, and Anca D. Dragan. 2018. “Expressing Robot Incapability.” Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY.
  • Lajante, Mathieu, David Remisch, and Nikita Dorofeev. 2023. “Can Robots Recover a Service Using Interactional Justice as Employees Do? A Literature Review-Based Assessment.” Service Business 17 (1): 315–357. https://doi.org/10.1007/s11628-023-00525-z.
  • Lajante, Mathieu, Dewi Tojib, and TingHin Ho. 2023. “When Interacting with a Service Robot is (not) Satisfying: The Role of Customers’ Need for Social Sharing of Emotion.” Computers in Human Behavior 146: 107792. https://doi.org/10.1016/j.chb.2023.107792.
  • Lee, Min Kyung, Sara Kiesler, Jodi Forlizzi, Siddhartha Srinivasa, and Paul Rybski. 2010. “Gracefully Mitigating Breakdowns in Robotic Services.” 2010 5th ACM/IEEE international conference on human-robot interaction (HRI).
  • Leo, Xuying, and Young Eun Huh. 2020. “Who Gets the Blame for Service Failures? Attribution of Responsibility Toward Robot Versus Human Service Providers and Service Firms.” Computers in Human Behavior 113: 106520. https://doi.org/10.1016/j.chb.2020.106520.
  • Lerner, Melvin J., Dale T. Miller, and John G. Holmes. 1976. “Deserving and the Emergence of Forms of Justice.” In Advances in Experimental Social Psychology. Vol. 9, edited by Leonard Berkowitz and Elaine Walster, 133–162. Academic Press. https://doi.org/10.1016/S0065-2601(08)60060-X.
  • Letheren, Kate, Rebekah Russell-Bennett, and Lucas Whittaker. 2020. “Rogues and Deviants: A Game-Theoretic Perspective on Opportunism in Strategic Alliances.” Journal of Marketing Management 36 (1-2): 1–29. https://doi.org/10.1080/0267257X.2019.1684975.
  • Liu, Dewen, Changfei Li, Jieqiong Zhang, and Weidong Huang. 2023. “Robot Service Failure and Recovery: Literature Review and Future Directions.” International Journal of Advanced Robotic Systems 20 (4): 17298806231191606. https://doi.org/10.1177/17298806231191606.
  • Lteif, Lama, Dan Rubin, Joan Ball, and Cait Lamberton. 2023. “There’s Not Much to Tell: The Impact of Emotional Resilience on Negative Word-of-Mouth Following Service Failure.” Psychology & Marketing 40 (9): 1808–1820. https://doi.org/10.1002/mar.21856.
  • Lteif, Lama, and Ana Valenzuela. 2022. “The Effect of Anthropomorphized Technology Failure on the Desire to Connect with Others.” Psychology & Marketing 39. https://doi.org/10.1002/mar.21700.
  • Marakas, George, Richard Johnson, and Paul Clay. 2007. “The Evolving Nature of the Computer Self-Efficacy Construct: An Empirical Investigation of Measurement Construction, Validity, Reliability and Stability Over Time.” Journal of the Association for Information Systems 8 (1): 16–46. https://doi.org/10.17705/1jais.00112.
  • Marriott, Hannah R., and Valentina Pitardi. 2024. “One is the Loneliest Number … Two Can be as Bad as One. The Influence of AI Friendship Apps on Users’ Well-Being and Addiction.” Psychology & Marketing 41 (1): 86–101. https://doi.org/10.1002/mar.21899.
  • Mende, Martin, Maura Scott, Jenny van Doorn, Dhruv Grewal, and Ilana Shanks. 2019. “Service Robots Rising: How Humanoid Robots Influence Service Experiences and Elicit Compensatory Consumer Responses.” Journal of Marketing Research 56: 535–556. https://doi.org/10.1177/0022243718822827.
  • Merdin-Uygur, Ezgi, and Selcen Ozturkcan. 2023. “Consumers and Service Robots: Power Relationships Amid COVID-19 Pandemic.” Journal of Retailing and Consumer Services 70: 103174. https://doi.org/10.1016/j.jretconser.2022.103174.
  • Meyer, Nika, Melanie Schwede, Maik Hammerschmidt, and Welf Hermann Weiger. 2023. “Users Taking the Blame? How Service Failure, Recovery, and Robot Design Affect User Attributions and Retention.” Electronic Markets 32 (4): 2491–2505. https://doi.org/10.1007/s12525-022-00613-4.
  • Mirnig, Nicole, Gerald Stollnberger, Markus Miksch, Susanne Stadler, Manuel Giuliani, and Manfred Tscheligi. 2017. “To Err Is Robot: How Humans Assess and Act Toward an Erroneous Social Robot.” Frontiers in Robotics and AI 4: 21. https://doi.org/10.3389/frobt.2017.00021.
  • Mozafari, Nika, Welf H. Weiger, and Maik Hammerschmidt. 2022. “Trust Me, I’m a Bot – Repercussions of Chatbot Disclosure in Different Service Frontline Settings.” Journal of Service Management 33 (2): 221–245. https://doi.org/10.1108/JOSM-10-2020-0380.
  • Murray, Ross Paul. 2022. “Self-Service Technologies That Appear Human Interacting with Customers: Effects on Third-Party Observers.” Ph.D. Dissertation, The University of Texas Rio Grande Valley.
  • Nichols, Shaun. 2005. Sentimental Rules: On the Natural Foundations of Moral Judgement. online ed. New York: Oxford Academic. https://doi.org/10.1093/0195169344.001.0001.
  • Ninomiya, Takumi, Akihito Fujita, Daisuke Suzuki, and Hiroyuki Umemuro. 2015. “Development of the Multi-Dimensional Robot Attitude Scale: Constructs of People’s Attitudes Towards Domestic Robots.” In Social Robotics. ICSR 2015. Lecture Notes in Computer Science. Vol. 9388, edited by A. Tapus, E. André, J. C. Martin, F. Ferland, and M. Ammi, 482–491. Cham: Springer. https://doi.org/10.1007/978-3-319-25554-5_48.
  • Ozturkcan, Selcen, and Ezgi Merdin-Uygur. 2022. “Humanoid Service Robots: The Future of Healthcare?” Journal of Information Technology Teaching Cases 12 (2): 163–169. https://doi.org/10.1177/20438869211003905.
  • Palmeira, Mauricio, Minjung Koo, and Hyun-Ah Sung. 2022. “You Deserve the bad (or Good) Service: The Role of Moral Deservingness in Observers’ Reactions to Service Failure (or Excellence).” European Journal of Marketing 56 (3): 653–676. https://doi.org/10.1108/EJM-09-2020-0659.
  • Park, Gain, Myungok Chris Yim, Jiyun Chung, and Seyoung Lee. 2023. “Effect of AI Chatbot Empathy and Identity Disclosure on Willingness to Donate: The Mediation of Humanness and Social Presence.” Behaviour & Information Technology 42 (12): 1998–2010. https://doi.org/10.1080/0144929X.2022.2105746.
  • Peer, Eyal, Laura Brandimarte, Sonam Samat, and Alessandro Acquisti. 2017. “Beyond the Turk: Alternative Platforms for Crowdsourcing Behavioral Research.” Journal of Experimental Social Psychology 70: 153–163. https://doi.org/10.1016/j.jesp.2017.01.006.
  • Peer, Eyal, David Rothschild, Andrew Gordon, Zak Evernden, and Ekaterina Damer. 2021. “Data Quality of Platforms and Panels for Online Behavioral Research.” Behavior Research Methods 54 (4): 1643–1662. https://doi.org/10.3758/s13428-021-01694-3.
  • Preacher, Kristopher J., and Andrew F. Hayes. 2008. “Contemporary Approaches to Assessing Mediation in Communication Research.” In The SAGE Sourcebook of Advanced Data Analysis Methods for Communication Research, edited by A. F. Hayes, D. Slater, and L. B. Snyder, 13–54. Thousand Oaks, CA: Sage.
  • Pütten, Astrid Rosenthal-von der, and Nikolai Bock. 2018. “Development and Validation of the Self-Efficacy in Human-Robot-Interaction Scale (SE-HRI).” ACM Transactions on Human-Robot Interaction 7 (3): 1–30. https://doi.org/10.1145/3139352.
  • Pütten, Astrid M. Rosenthal-von der, Frank P. Schulte, Sabrina C. Eimler, Laura Hoffmann, Sabrina Sobieraj, Stefan Maderwald, Nicole C. Krämer, and Matthias Brand. 2013. “Neural Correlates of Empathy Towards Robots.” 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
  • Rasmussen, Jens. 1982. “Human Errors. A Taxonomy for Describing Human Malfunction in Industrial Installations.” Journal of Occupational Accidents 4 (2-4): 311–333. https://doi.org/10.1016/0376-6349(82)90041-4.
  • Reason, James. 1990. Human Error. Cambridge: Cambridge University Press.
  • Riek, Laurel D., Tal-Chen Rabinowitch, Bhismadev Chakrabarti, and Peter Robinson. 2009. “How Anthropomorphism Affects Empathy Toward Robots.” Proceedings of the 4th ACM/IEEE international conference on human Robot Interaction.
  • Saleh, Amin, Louis J. Zmich, Barry J. Babin, and Aadel A. Darrat. 2023. “Customer Responses to Service Providers’ Touch: A Meta-Analysis.” Journal of Business Research 166: 114113. https://doi.org/10.1016/j.jbusres.2023.114113.
  • Sharma, Tripti Ghosh, Juho Hamari, Ankit Kesharwani, and Preeti Tak. 2020. “Understanding Continuance Intention to Play Online Games: Roles of Self-Expressiveness, Self-Congruity, Self-Efficacy, and Perceived Risk.” Behaviour & Information Technology 41 (2): 348–364. https://doi.org/10.1080/0144929X.2020.1811770.
  • Shaw, Patrick, Robert LaRose, and Christina Wirth. 2006. “Reaching New Levels in Massively Multiplayer Online Games: A Social Cognitive Theory of MMO Usage.” Annual Meeting of the International Communication Association, Dresden, Germany.
  • Siino, Rosanne, and Pamela Hinds. 2005. “Robots, Gender & Sensemaking: Sex Segregation’s Impact on Workers Making Sense of a Mobile Autonomous Robot.” In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 2773–2778. IEEE. https://doi.org/10.1109/ROBOT.2005.1570533.
  • Smith, Savannah. 2022. “Students Arrested, Accused of Throwing UT Robot, Causing Over $5,000 in Damage.” WVLT8. https://www.wvlt.tv/2022/05/05/students-face-charges-after-throwing-ut-robot-causing-over-5000-damage/.
  • Smith, Amy K., Ruth N. Bolton, and Janet Wagner. 1999. “A Model of Customer Satisfaction with Service Encounters Involving Failure and Recovery.” Journal of Marketing Research 36 (3): 356. https://doi.org/10.1177/002224379903600305.
  • Spaccatini, Federica, Giulia Corlito, and Simona Sacchi. 2023. “New Dyads? The Effect of Social Robots’ Anthropomorphization on Empathy Towards Human Beings.” Computers in Human Behavior 146: 107821. https://doi.org/10.1016/j.chb.2023.107821.
  • Spatola, Nicolas, Olga A. Wudarczyk, Tatsuya Nomura, and Emna Cherif. 2023. “Attitudes Towards Robots Measure (ARM): A New Measurement Tool Aggregating Previous Scales Assessing Attitudes Toward Robots.” International Journal of Social Robotics 15 (9-10): 1683–1701. https://doi.org/10.1007/s12369-023-01056-3.
  • Suzuki, Yutaka, Lisa Galli, Ayaka Ikeda, Shoji Itakura, and Michiteru Kitazaki. 2015. “Measuring Empathy for Human and Robot Hand Pain Using Electroencephalography.” Scientific Reports 5 (1): 15924. https://doi.org/10.1038/srep15924.
  • Swiderska, Aleksandra, and Dennis Küster. 2020. “Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism.” Cognitive Science 44 (7): e12872. https://doi.org/10.1111/cogs.12872.
  • Takayama, Leila, and Doug Dooley. 2011. “Expressing Thought: Improving Robot Readability with Animation Principles.”
  • Tanibe, Tetsushi, Takaaki Hashimoto, and Kaori Karasawa. 2017. “We Perceive a Mind in a Robot When we Help it.” PLoS One 12 (7): e0180952. https://doi.org/10.1371/journal.pone.0180952.
  • Van Vaerenbergh, Yves, Chiara Orsingher, Iris Vermeir, and Bart Larivière. 2014. “A Meta-Analysis of Relationships Linking Service Failure Attributions to Customer Outcomes.” Journal of Service Research 17 (4): 381–398. https://doi.org/10.1177/1094670514538321.
  • Vu, Hong Tien, and Jeongsub Lim. 2022. “Effects of Country and Individual Factors on Public Acceptance of Artificial Intelligence and Robotics Technologies: A Multilevel SEM Analysis of 28-Country Survey Data.” Behaviour & Information Technology 41 (7): 1515–1528. https://doi.org/10.1080/0144929X.2021.1884288.
  • Wan, Lisa C., and Robert S. Wyer. 2019. “The Influence of Incidental Similarity on Observers’ Causal Attributions and Reactions to a Service Failure.” Journal of Consumer Research 45 (6): 1350–1368. https://doi.org/10.1093/jcr/ucy050.
  • Wang, Cheng, Jennifer Harris, and Paul Patterson. 2013. “The Roles of Habit, Self-Efficacy, and Satisfaction in Driving Continued Use of Self-Service Technologies.” Journal of Service Research 16 (3): 400–414. https://doi.org/10.1177/1094670512473200.
  • Ward, Adrian, Andrew Olsen, and Daniel Wegner. 2013. “The Harm-Made Mind: Observing Victimization Augments Attribution of Minds to Vegetative Patients, Robots, and the Dead.” Psychological Science 24. https://doi.org/10.1177/0956797612472343.
  • Webb, Andrea, and Joann Peck. 2014. “Individual Differences in Interpersonal Touch: On the Development, Validation, and use of the “Comfort with Interpersonal Touch” (CIT) Scale.” Journal of Consumer Psychology 25 (1): 60–77. https://doi.org/10.1016/j.jcps.2014.07.002.
  • Whitby, Blay. 2008. “Sometimes It’s Hard to be a Robot: A Call for Action on the Ethics of Abusing Artificial Agents.” Interacting with Computers 20 (3): 326–333. https://doi.org/10.1016/j.intcom.2008.02.002.
  • Wirtz, Jochen, Paul G. Patterson, Werner H. Kunz, Thorsten Gruber, Vinh Nhat Lu, Stefanie Paluch, and Antje Martins. 2018. “Brave New World: Service Robots in the Frontline.” Journal of Service Management 29 (5): 907–931. https://doi.org/10.1108/JOSM-04-2018-0119.
  • Wood, Joanne V., Sara A. Heimpel, Laurie A. Manwell, and Elizabeth J. Whittington. 2009. “This Mood is Familiar and I Don’t Deserve to Feel Better Anyway: Mechanisms Underlying Self-Esteem Differences in Motivation to Repair Sad Moods.” Journal of Personality and Social Psychology 96 (2): 363–380. https://doi.org/10.1037/a0012881.