Research Article

People’s reactions to decisions by human vs. algorithmic decision-makers: the role of explanations and type of selection tests

Pages 146-157 | Received 01 Sep 2021, Accepted 01 Oct 2022, Published online: 27 Oct 2022
ABSTRACT

Research suggests that people prefer human over algorithmic decision-makers at work. Most of these studies, however, use hypothetical scenarios, and it is unclear whether such results replicate in more realistic contexts. We conducted two between-subjects studies (N = 270; N = 183) in which the decision-maker (human vs. algorithmic, Studies 1 and 2), explanations regarding the decision-making process (yes vs. no, Studies 1 and 2), and the type of selection test (requiring human vs. mechanical skills for evaluation, Study 2) were manipulated. While Study 1 was based on a hypothetical scenario, participants in pre-registered Study 2 volunteered to participate in a qualifying session for an attractively remunerated product test, thus competing for real incentives. In both studies, participants in the human condition reported higher levels of trust and acceptance. Providing explanations also positively influenced trust, acceptance, and perceived transparency in Study 1, while it did not exert any effect in Study 2. The type of selection test affected fairness ratings, with higher ratings for tests requiring human vs. mechanical skills for evaluation. Results show that algorithmic decision-making in personnel selection can negatively impact trust and acceptance, both in studies with hypothetical scenarios and in studies with real incentives.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. All materials of both studies (instructions and items in both English and German), as well as all data and both quantitative and qualitative analyses, are documented in the corresponding project folder on the Open Science Framework (https://osf.io/hxwpr/). Study 2 was preregistered on OSF. Both studies obtained ethical approval (Internal Review Board, University of Fribourg, IRB_520; Ethics Committee of the Department of Education and Psychology, Free University of Berlin, Nr. 041.2019).

2. When the coders assigned a qualitative response to more than one category, and the number of category assignments for that response consequently differed between the two coders, the non-overlapping category assignments were dropped from the inter-rater reliability analysis.

3. Upon completion of the study, participants were debriefed about the true purpose of the study: that the alleged qualifying session was in fact the actual study and that the product test would not take place. Moreover, they were informed that, instead of each participant receiving 50 EUR for participation in the product test, five participants were selected by lottery to receive 50 EUR each.