Research Article

Justice, trust, and moral judgements when personnel selection is supported by algorithms

Pages 130-145 | Received 03 Jan 2022, Accepted 11 Jan 2023, Published online: 20 Feb 2023

References

  • Acikgoz, Y., Davison, K. H., Compagnone, M., & Laske, M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment, 28(4), 399–416. https://doi.org/10.1111/ijsa.12306
  • Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Atoyan, H., Duquet, J.-R., & Robert, J.-M. (2006). Trust in new decision aid systems. Proceedings of the 18th International Conference on Association Francophone d’Interaction Homme-Machine - IHM ’06, 115–122. https://doi.org/10.1145/1132736.1132751
  • Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34. https://doi.org/10.1016/j.cognition.2018.08.003
  • Bigman, Y. E., Wilson, D., Arnestad, M., Waytz, A., & Gray, K. (2022). Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001250
  • Bonaccio, S., & Dalal, R. S. (2006). Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes, 101(2), 127–151. https://doi.org/10.1016/j.obhdp.2006.07.001
  • Burton, J. W., Stein, M., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239. https://doi.org/10.1002/bdm.2155
  • Cheng, M. M., & Hackett, R. D. (2021). A critical review of algorithms in HRM: Definition, theory, and practice. Human Resource Management Review, 31(1), 100698. https://doi.org/10.1016/j.hrmr.2019.100698
  • Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. Journal of Applied Psychology, 86(3), 386–400. https://doi.org/10.1037/0021-9010.86.3.386
  • Colquitt, J. A., & Rodell, J. B. (2011). Justice, trust, and trustworthiness: A longitudinal analysis integrating three theoretical perspectives. Academy of Management Journal, 54(6), 1183–1206. https://doi.org/10.5465/amj.2007.0572
  • Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
  • Daugherty, P. R., & Wilson, H. J. (2018). Human + machine: Reimagining work in the age of AI. Harvard Business Press.
  • Dietvorst, B. J., & Bartels, D. M. (2022). Consumers object to algorithms making morally relevant tradeoffs because of algorithms’ consequentialist decision strategies. Journal of Consumer Psychology, 32(3), 406–424. https://doi.org/10.1002/jcpy.1266
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
  • Dijkstra, J. J., Liebrand, W. B. G., & Timminga, E. (1998). Persuasiveness of expert systems. Behaviour & Information Technology, 17(3), 155–163. https://doi.org/10.1080/014492998119526
  • Eastwood, J., Snook, B., & Luther, K. (2012). What people want from their professionals: Attitudes toward decision-making strategies. Journal of Behavioral Decision Making, 25(5), 458–468. https://doi.org/10.1002/bdm.741
  • Elsbach, K. D., & Stigliani, I. (2019). New information technology and implicit bias. Academy of Management Perspectives, 33(2), 185–206. https://doi.org/10.5465/amp.2017.0079
  • Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/brm.41.4.1149
  • Fiske, S. T. (1998). Stereotyping, prejudice, and discrimination. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (Vols. 1–2, pp. 357–411). McGraw-Hill.
  • Gilliland, S. W. (1993). The perceived fairness of selection systems: An organizational justice perspective. Academy of Management Review, 18(4), 694–734. https://doi.org/10.2307/258595
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.0057
  • Gonzalez, M. F., Capman, J. F., Oswald, F. L., Theys, E. R., & Tomczak, D. L. (2019). “Where’s the I-O?” Artificial intelligence and machine learning in talent management systems. Personnel Assessment and Decisions, 5(3), 5. https://doi.org/10.25035/pad.2019.03.005
  • Grzymek, V., & Puntschuh, M. (2019). What Europe knows and thinks about algorithms: Results of a representative survey (Discussion Paper Ethics of Algorithms #10). Bertelsmann Stiftung eupinions. http://aei.pitt.edu/102582/
  • Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57(3), 639–683. https://doi.org/10.1111/j.1744-6570.2004.00003.x
  • Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors: The Journal of the Human Factors and Ergonomics Society, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Huang, H.-H., Hsu, J.S.-C., & Ku, C.-Y. (2012). Understanding the role of computer-mediated counter-argument in countering confirmation bias. Decision Support Systems, 53(3), 438–447. https://doi.org/10.1016/j.dss.2012.03.009
  • Jago, A. S. (2019). Algorithms and authenticity. Academy of Management Discoveries, 5(1), 38–56. https://doi.org/10.5465/amd.2017.0002
  • Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. ECIS 2020 Research Papers. https://aisel.aisnet.org/ecis2020_rp/168
  • Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Rand McNally.
  • Konovsky, M. A., & Cropanzano, R. (1991). Perceived fairness of employee drug testing as a predictor of employee attitudes and job performance. Journal of Applied Psychology, 76(5), 698–707. https://doi.org/10.1037/0021-9010.76.5.698
  • Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388–409. https://doi.org/10.1080/0960085X.2021.1927212
  • Langer, M., König, C. J., & Papathanasiou, M. (2019). Highly-automated job interviews: Acceptance under the influence of stakes. International Journal of Selection and Assessment, 27(3), 217–234. https://doi.org/10.1111/ijsa.12246
  • Lawler, J. J., & Elliot, R. (1996). Artificial intelligence in HRM: An experimental study of an expert system. Journal of Management, 22(1), 85–111. https://doi.org/10.1177/014920639602200104
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), Article 2053951718756684. https://doi.org/10.1177/2053951718756684
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392
  • Madsen, M., & Gregor, S. (2000). Measuring human-computer trust. Paper presented at the 11th Australasian Conference on Information Systems, Queensland University of Technology, Brisbane.
  • Maier, G. W., Streicher, B., Jonas, E., & Woschée, R. (2007). Gerechtigkeitseinschätzungen in Organisationen: Die Validität einer deutschsprachigen Fassung des Fragebogens von Colquitt (2001) [Assessment of justice in organizations: The validity of a German version of the questionnaire by Colquitt (2001)]. Diagnostica, 53(2), 97–108. https://doi.org/10.1026/0012-1924.53.2.97
  • Marcinkowski, F., Kieslich, K., Starke, C., & Lünich, M. (2020). Implications of AI (un-)fairness in higher education admissions: The effects of perceived AI (un-)fairness on exit, voice and organizational reputation. Proceedings of the 2020 FAT* Conference on Fairness, Accountability, and Transparency, 122–130. https://doi.org/10.1145/3351095.3372867
  • Mayring, P. (2010). Qualitative Inhaltsanalyse [Qualitative content analysis]. Beltz Verlag.
  • McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24–59. https://doi.org/10.2307/256727
  • Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437–455. https://doi.org/10.1037/a0028085
  • Mosier, K. L., & Manzey, D. (2019). Humans and automated decision aids: A match made in heaven? In M. Mouloua, P. A. Hancock, & J. Ferraro (Eds.), Human performance in automated and autonomous systems (pp. 19–42). CRC Press. https://doi.org/10.1201/9780429458330-2
  • Nagtegaal, R. (2021). The impact of using algorithms for managerial decisions on public employees’ procedural justice. Government Information Quarterly, 38(1), 101536. https://doi.org/10.1016/j.giq.2020.101536
  • Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160, 149–167. https://doi.org/10.1016/j.obhdp.2020.03.008
  • Oswald, F. L., Behrend, T. S., Putka, D. J., & Sinar, E. (2020). Big data in industrial-organizational psychology and human resource management: Forward progress for organizational research and practice. Annual Review of Organizational Psychology and Organizational Behavior, 7(1), 505–533. https://doi.org/10.1146/annurev-orgpsych-032117-104553
  • Ötting, S. K., & Maier, G. W. (2018). The importance of procedural justice in human–machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27–39. https://doi.org/10.1016/j.chb.2018.07.022
  • Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2020 FAT* Conference on Fairness, Accountability, and Transparency, 469–481. https://doi.org/10.1145/3351095.3372828
  • Raisch, S., & Krakowski, S. (2020). Artificial intelligence and management: The automation-augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072
  • Schlicker, N., Langer, M., Ötting, S. K., Baum, K., König, C. J., & Wallach, D. (2021). What to expect from opening up ‘black boxes’? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior, 122, 106837. https://doi.org/10.1016/j.chb.2021.106837
  • Skitka, L. J., Mosier, K., & Burdick, M. D. (2000). Accountability and automation bias. International Journal of Human-Computer Studies, 52(4), 701–717. https://doi.org/10.1006/ijhc.1999.0349
  • Sniezek, J. A., & Van Swol, L. M. (2001). Trust, confidence, and expertise in a judge-advisor system. Organizational Behavior and Human Decision Processes, 84(2), 288–307. https://doi.org/10.1006/obhd.2000.2926
  • Statistisches Bundesamt. (2019). Bildung und Kultur 2018 [Education and culture 2018]. Fachserie/11/3. https://www.statistischebibliothek.de/mir/receive/DEHeft_mods_00128345
  • Suen, H.-Y., Chen, M.Y.-C., & Lu, S.-H. (2019). Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes? Computers in Human Behavior, 98, 93–101. https://doi.org/10.1016/j.chb.2019.04.012
  • Tanner, C., Medin, D. L., & Iliev, R. (2008). Influence of deontological versus consequentialist orientations on act choices and framing effects: When principles are more important than consequences. European Journal of Social Psychology, 38(5), 757–769. https://doi.org/10.1002/ejsp.493
  • Van Swol, L. M., & Sniezek, J. A. (2005). Factors affecting the acceptance of expert advice. British Journal of Social Psychology, 44(3), 443–461. https://doi.org/10.1348/014466604X17092
  • Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101, 197–209. https://doi.org/10.1016/j.chb.2019.07.027
  • Xu, Z. X., & Ma, H. K. (2016). How can a deontological decision lead to moral behavior? The moderating role of moral identity. Journal of Business Ethics, 137(3), 537–549. https://doi.org/10.1007/s10551-015-2576-6
  • Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403–414. https://doi.org/10.1002/bdm.2118
