Research Article

Perceived Fairness of Human Managers Compared with Artificial Intelligence in Employee Performance Evaluation

Pages 1039-1070 | Published online: 11 Dec 2023

ABSTRACT

Human managers are increasingly challenged by artificial intelligence (AI) technologies in performing managerial functions. We undertook a field experiment that used AI vis-à-vis human managers to perform structured, data-intensive evaluations of employee performance. We generate two sets of insights. First, employees considered AI both fairer and more accurate than the average human manager in evaluating their performance. Second, employees' perceptions of human managers' fairness played a first-order role in helping managers catch up with AI: (a) perceived fairness did more than perceived evaluation accuracy to close the performance gap between employees evaluated by human managers and those evaluated by AI, and (b) perceived fairness constrained the effect of managers' perceived evaluation accuracy on employees' performance. Thus, facing competition from AI, it is all the more important for human managers to treat employees fairly and build positive interpersonal relationships with them.

View correction statement:
Correction

Acknowledgment

The authors acknowledge the invaluable comments from the Editor and reviewers, as well as the anonymous company for sponsoring the field experiment and providing the data.

Disclosure statement

No potential conflict of interest was reported by the author(s).

SUPPLEMENTARY MATERIAL

Supplemental data for this article can be accessed online at https://doi.org/10.1080/07421222.2023.2267316

Notes

1. The theories synthesized by Casciaro and Lobo [Citation13] include the psychological theories of the structure of personality impressions, of social cognition and social perception, and of attitudes and interpersonal judgment; the sociological theory of group members' impressions of one another; and the organizational theory of interpersonal trust.

2. Recent work in information systems provides evidence of the actual application of AI in management; for example, Bai et al. [Citation5] used AI to examine how jobs can be allocated to warehouse employees.

3. The employees were not informed of other employees' group assignments during the experiment. More importantly, they were required by the company not to share their own group assignments with other employees. It was also the company's long-standing policy to maintain confidentiality over the training feedback each employee received, a policy that applied before, during, and after the experiment. If employees nevertheless shared the feedback they received during the experiment, this behavior would make it more difficult for us to find systematic performance differences between employees evaluated by AI and those evaluated by human managers, because employees receiving higher-quality feedback from AI would, by sharing it, essentially help to increase the performance of the latter group. Nonetheless, we continue to find a substantial and robust performance gap between employees evaluated by AI and those evaluated by human managers. This possibility therefore makes our tests more conservative and strengthens the interpretation of our results.

4. In the surveys, the employees were asked the following questions: (1) Do you think the [AI evaluation system/manager] will accurately evaluate the collection skills that you demonstrate this month? (2) Do you feel that the [AI evaluation system/manager] will hold bias against you in generating the evaluation of your collection skills this month? All answers were given on a 10-point Likert scale. Because of critical differences between field and laboratory experiments, we were constrained by the company from using multiple survey instruments to measure the same theoretical construct.

5. Each manager was assigned to evaluate 90 employees, which means that each manager listened to 900 calls in each three-day window. The calls were provided to the managers every day rather than all at once at the end of each three-day window, as discussed next. Therefore, each manager listened to approximately 300 calls per day, which is equivalent to their normal workload.

6. In the surveys, the employees were asked the following questions: (1) Do you think that the evaluation you received accurately reflected the collection skills that you demonstrated this month? (2) Do you feel that the [AI evaluation system/manager] held bias against you in generating the evaluation of your collection skills this month? All answers were given on a 10-point Likert scale. Because of critical differences between field and laboratory experiments, we were constrained by the company from using multiple survey instruments to measure the same theoretical construct.

7. We report the perceived accuracy and fairness of each of the five managers in Online Supplemental Appendix 3.

8. Customers were randomly assigned to employees. Although we are unable to conduct randomization checks because the customer data were strictly confidential and the company was not able to share them with us, we are confident that customers were randomly assigned to employees: random assignment had been a central, long-standing practice in the company (and its industry), irrespective of our experiment, adopted to avoid employees' complaints about unfair assignments and the resulting risk of demoralizing employees.

9. In Online Supplemental Appendix 4, we also provide more evidence, based on the actual feedback data, that AI generated higher-quality evaluations than human managers: it pointed out more mistakes and suggested more corrections of those mistakes.

10. Our estimation results remain consistent if, instead of using the raw collection amount, we use the difference in the collection amount between the previous month and the experiment month as the dependent variable. Details are reported in Online Supplemental Appendix 5.
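As a minimal, purely hypothetical sketch of this alternative specification (the employee identifiers and amounts below are invented for illustration and are not the study's data; we also assume the difference is taken as experiment month minus previous month), the dependent variable can be constructed as:

```python
# Hypothetical collection amounts per employee (illustrative values only)
prev_month = {"emp_A": 1000.0, "emp_B": 800.0, "emp_C": 1200.0}
experiment_month = {"emp_A": 1150.0, "emp_B": 900.0, "emp_C": 1180.0}

# Alternative dependent variable: change in collection amount,
# computed here as experiment-month amount minus previous-month amount
collection_diff = {
    emp: experiment_month[emp] - prev_month[emp] for emp in prev_month
}
print(collection_diff)  # {'emp_A': 150.0, 'emp_B': 100.0, 'emp_C': -20.0}
```

This difference variable would then replace the raw collection amount on the left-hand side of the estimation.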

11. The code for running the customized model is available upon request.

12. The direct effect of AI on Working Smarter (proxied by Improve Rate) is negative and significant, suggesting that, at the same level of feedback quality and perceptions of fairness and accuracy, employees achieve a higher improve rate when guided by a human manager rather than by AI. This result is consistent with previous research on algorithm aversion [Citation14, Citation22, Citation49] and indicates that, despite AI's superior performance, employees may still harbor some aversion to AI.

Additional information

Notes on contributors

Shaojun (Marco) Qin

Shaojun (Marco) Qin ([email protected]) is an Assistant Professor in the Department of Marketing and Supply Chain Management, Fox School of Business of Temple University. He earned his PhD in Business Administration with a specialization in Quantitative Marketing from the University of Minnesota. Dr. Qin’s research interests cover the areas of quantitative marketing and applied industrial organization, exploring the sources of complementarity in various business settings, and between products (B2C) and business relationships (B2B).

Nan Jia

Nan Jia ([email protected]; corresponding author) is Dean’s Associate Professor of Business Administration at Marshall School of Business, University of Southern California. She holds a PhD in Strategic Management from the Rotman School of Management, University of Toronto. Dr. Jia’s research interests include corporate political strategy, business-governance relationships, and applications of artificial intelligence technologies in management, and corporate governance in international business. Her research work has been published in multiple top journals in strategic management. She serves as an associate editor of Strategic Management Journal and on the editorial boards of several other leading academic journals.

Xueming Luo

Xueming Luo ([email protected]) is the Charles Gilliland Distinguished Chair Professor of Marketing, Professor of Strategic Management, and Professor of Management Information Systems at Temple University. He is the Founder/Director of the Global Institute for Artificial Intelligence and Business Analytics in the Fox School of Business at Temple University. Dr. Luo is interested in digital mobile marketing, omnichannel customer analytics, social responsibility with machine learning, artificial intelligence, engineering models, and big data field experiments. His current research focuses on sharing economy platform algorithms, unstructured audio/image/video data, and smart city analytics for personalized recommendations, promotions, competitive pricing, omnichannel, social media networks advertising, and customer equity metrics. His work has been published in most top-ranking journals in Marketing, Strategy, Information Systems, and Management.

Chengcheng Liao

Chengcheng Liao ([email protected]) is an Assistant Professor of Marketing and Information Systems at the Business School of Sichuan University, China, from which she received her PhD. Dr. Liao's research interests include the new generation of information technology and the application of artificial intelligence in business administration. Her work has appeared in Academy of Management Journal, Journal of the Association for Information Systems, and other academic journals.

Ziyao Huang

Ziyao Huang ([email protected]) is an Assistant Professor of Marketing and Information Systems at the Business School of Sichuan University, from which she received her PhD. Dr. Huang focuses on the interdisciplinary research in artificial intelligence (AI) and business administration. Her research interests include AI marketing and multimodal machine learning of unstructured text/audio/image/video data.
