Research Article

People’s reactions to decisions by human vs. algorithmic decision-makers: the role of explanations and type of selection tests

Pages 146–157 | Received 01 Sep 2021, Accepted 01 Oct 2022, Published online: 27 Oct 2022

References

  • Acikgoz, Y., Davison, K. H., Compagnone, M., & Laske, M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment, 28(4), 399–416. https://doi.org/10.1111/ijsa.12306
  • Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Blacksmith, N., Willford, J. C., & Behrend, T. S. (2016). Technology in the employment interview: A meta-analysis and future research agenda. Personnel Assessment and Decisions, 2(1), 12–20. https://doi.org/10.25035/pad.2016.002
  • Bolander, T. (2019). What do we loose when machines take the decisions? Journal of Management and Governance, 23(4), 849–867. https://doi.org/10.1007/s10997-019-09493-x
  • Brockner, J., Siegel, P. A., Daly, J. P., Tyler, T., & Martin, C. (1997). When trust matters: The moderating effect of outcome favorability. Administrative Science Quarterly, 42(3), 558–583. https://doi.org/10.2307/2393738
  • Burke, C. S., Sims, D. E., Lazzara, E. H., & Salas, E. (2007). Trust in leadership: A multi-level review and integration. The Leadership Quarterly, 18(6), 606–632. https://doi.org/10.1016/j.leaqua.2007.09.006
  • Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
  • Dirks, K. T., & Ferrin, D. L. (2002). Trust in leadership: Meta-analytic findings and implications for research and practice. The Journal of Applied Psychology, 87(4), 611–628. https://doi.org/10.1037/0021-9010.87.4.611
  • Duggan, J., Sherman, U., Carbery, R., & McDonnell, A. (2019). Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal, 30(1), 114–132. https://doi.org/10.1111/1748-8583.12258
  • Eifler, S. (2007). Evaluating the validity of self-reported deviant behavior using vignette analyses. Quality & Quantity, 41(2), 303–318. https://doi.org/10.1007/s11135-007-9093-3
  • Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
  • Geiskkovitch, D. Y., Cormier, D., Seo, S. H., & Young, J. E. (2016). Please continue, we need more data: An exploration of obedience to robots. Journal of Human-Robot Interaction, 5(1), 82–99. https://doi.org/10.5898/JHRI.5.1.Geiskkovitch
  • Georgiou, K. (2021). Can explanations improve applicant reactions towards gamified assessment methods? International Journal of Selection and Assessment, 29(2), 253–268. https://doi.org/10.1111/ijsa.12329
  • Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18(4), 694–734. https://doi.org/10.5465/amr.1993.9402210155
  • Grzymek, V., & Puntschuh, M. (2019). Was Europa über Algorithmen weiß und denkt: Ergebnisse einer repräsentativen Bevölkerungsumfrage [What Europe knows and thinks about algorithms: Results of a representative survey]. Bertelsmann Stiftung. https://doi.org/10.11586/2019006
  • Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI – Explainable artificial intelligence. Science Robotics, 4(37), eaay7120. https://doi.org/10.1126/scirobotics.aay7120
  • Jarrahi, M. H., Newlands, G., Lee, M. K., Wolf, C. T., Kinder, E., & Sutherland, W. (2021). Algorithmic management in a work context. Big Data & Society, 8(2). https://doi.org/10.1177/20539517211020332
  • Langer, M., Baum, K., König, C. J., Hähne, V., Oster, D., & Speith, T. (2021). Spare me the details: How the type of information about automated interviews influences applicant reactions. International Journal of Selection and Assessment, 29(2), 154–169. https://doi.org/10.1111/ijsa.12325
  • Langer, M., König, C. J., & Fitili, A. (2018). Information as a double-edged sword: The role of computer experience and information on applicant reactions towards novel technologies for personnel selection. Computers in Human Behavior, 81, 19–30. https://doi.org/10.1016/j.chb.2017.11.036
  • Langer, M., & Landers, R. N. (2021). The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Computers in Human Behavior, 123, 106878. https://doi.org/10.1016/j.chb.2021.106878
  • Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., & Baum, K. (2021). What do we want from explainable artificial intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, 103473. https://doi.org/10.1016/j.artint.2021.103473
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 2053951718756684. https://doi.org/10.1177/2053951718756684
  • Liepmann, D., Beauducel, A., Brocke, B., & Amthauer, R. (2007). Intelligenz-Struktur-Test 2000 R [Intelligence Structure Test 2000 R] (2nd ed.). Hogrefe.
  • Mayring, P. (2014). Qualitative content analysis: Theoretical foundation, basic procedures and software solution. SSOAR. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-395173
  • Mirowska, A., & Mesnet, L. (2021). Preferring the devil you know: Potential applicant reactions to artificial intelligence evaluation of interviews. Human Resource Management Journal. Advance online publication. https://doi.org/10.1111/1748-8583.12393
  • Nagtegaal, R. (2021). The impact of using algorithms for managerial decisions on public employees’ procedural justice. Government Information Quarterly, 38(1), 101536. https://doi.org/10.1016/j.giq.2020.101536
  • Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160, 149–167. https://doi.org/10.1016/j.obhdp.2020.03.008
  • Ötting, S. K., & Maier, G. W. (2018). The importance of procedural justice in human-machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27–39. https://doi.org/10.1016/j.chb.2018.07.022
  • Parent-Rocheleau, X., & Parker, S. K. (2021). Algorithms as work designers: How algorithmic management influences the design of jobs. Human Resource Management Review, 100838. https://doi.org/10.1016/j.hrmr.2021.100838
  • Parry, K., Cohen, M., & Bhattacharya, S. (2016). Rise of the machines: A critical consideration of automated leadership decision making in organizations. Group & Organization Management, 41(5), 571–594. https://doi.org/10.1177/1059601116643442
  • Schaefer, K. E., Chen, J. Y., Szalma, J. L., & Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377–400. https://doi.org/10.1177/0018720816634228
  • Truxillo, D. M., Bodner, T. E., Bertolino, M., Bauer, T. N., & Yonce, C. A. (2009). Effects of explanations on applicant reactions: A meta-analytic review. International Journal of Selection and Assessment, 17(4), 346–361. https://doi.org/10.1111/j.1468-2389.2009.00478.x
  • Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101, 197–209. https://doi.org/10.1016/j.chb.2019.07.027
  • Wesche, J. S., & Sonderegger, A. (2021). Repelled at first sight? Expectations and intentions of job-seekers reading about AI selection in job advertisements. Computers in Human Behavior, 125, 106931. https://doi.org/10.1016/j.chb.2021.106931
  • Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2018). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy & Technology, 32(4), 661–683. https://doi.org/10.1007/s13347-018-0330-6