ABSTRACT
Video evidence depicting physical altercations has polarized public opinion and courtroom decisions about social issues, including race relations and police use of force. We believe this occurs, in part, because of the way people process dynamic visual stimuli across repeated viewing opportunities. We reanalyzed two studies that covertly collected eye-tracking data to quantify and model visual confirmation bias (VCB) – the degree to which eye movements replicate previous patterns of looking across multiple viewing opportunities. We tracked the location of eye gaze while participants (N1 = 320; N2 = 212) twice watched the same video depicting an altercation between an officer and a civilian (Study 1) or between a Black and a White actor (Study 2). In pilot tests, we provided evidence for the construct validity of statistical measures of concordance – which track similarities in where perceivers directed their gaze across viewings – as an index of VCB. In our pre-registered analytic plan, we used these metrics to probe for relationships with punishment decisions made about targets after the first and second viewings. Contrary to predictions, our pre-registered analyses found no associations between VCB, consistency, and polarization in punishment. We present exploratory analyses probing potential moderators of the association between VCB and these outcome measures. We offer practical suggestions for researchers measuring and modeling eye gaze during the presentation of dynamic stimuli across multiple viewings, particularly in the context of intergroup decision research.
Acknowledgments
We would like to thank the following research assistants for helping with data collection: Andressa Bonafe, Clare Brinkman, Tao Buck, Danielle Dgheim, Lucia Espineira, Sarah Field, Alexis George, Jeimmy Hurtado, Shabeba Islam, Jhenelle Marson, Beatrice Terino, Andie Youniss, and Tobias Zhou.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Supplementary material
Supplemental data for this article can be accessed online.
Notes
1. We set the sample size to model other effects not tested here: assuming a small effect size (f2 = .036, power = 90%) for a multiple linear regression model (fixed model, testing the increase in R2, with one tested predictor and three total predictors), the required sample size was 292. However, we oversampled in anticipation of data loss due to technological challenges associated with tracking eye gaze for some individuals (e.g., colored contact lenses, thick glasses).
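The required sample size of 292 in the note above can be reproduced with a short noncentral-F power computation. This sketch assumes the G*Power convention for a fixed-model R2-increase test, in which the noncentrality parameter is λ = f2 × N; the function name and its defaults are illustrative, not taken from the paper.

```python
# Power of the F-test for an R^2 increase (multiple linear regression,
# fixed model), following the G*Power convention lambda = f^2 * N.
# Defaults mirror the parameters reported in Note 1 (assumed values):
# f^2 = .036, alpha = .05, 1 tested predictor, 3 total predictors.
from scipy import stats

def power_r2_increase(n, f2=0.036, tested=1, total=3, alpha=0.05):
    df1 = tested             # numerator df: number of tested predictors
    df2 = n - total - 1      # denominator df: N - total predictors - 1
    ncp = f2 * n             # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return float(1 - stats.ncf.cdf(f_crit, df1, df2, ncp))

# At the reported sample size, power is approximately .90
print(round(power_r2_increase(292), 3))
```

Running this at N = 292 yields power of roughly .90, consistent with the a priori calculation described in the note.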
2. Because the COVID pandemic shut down our access to the original experimenter log file, we were unaware of a note left by our experimenter at the time the Stage 1 submission was accepted; this note indicated that one additional participant, relative to our original submission, expressed suspicion that we were using eye-tracking. When our team regained in-person access to the original subject logs, we confirmed that the number of participants who were suspicious of our use of eye-tracking was in fact 5, rather than the originally reported 4. The number of individuals whom our experimenters suspected of low-quality participation remained the same.
3. Sample papers using this method include Van Reekum et al. (Citation2007), Frutos-Pascual and Garcia-Zapirain (Citation2015), and L. Zhang et al. (Citation2015). This is also commonly used to impute missing pupil size values; see for example, Garon et al. (Citation2018), Sirois and Brisson (Citation2014), and Jin et al. (Citation2019).
4. Although the sensitivity analysis is part of the pre-registered analysis plan, we acknowledge that this technique is recommended for a priori power calculations (Faul et al., Citation2007; Lane & Hennes, Citation2019). We also acknowledge that our effect sizes may not be accurate parameter estimates, that they may vary dramatically in other investigations, and that sensitivity analyses fall prey to the same uncertainty limitations as individual power analyses. Due to the difficulty of interpreting post hoc sensitivity analyses and their inherent variability in estimation (Y. Zhang et al., Citation2019), we ask readers to interpret all sensitivity analyses included in this paper with caution.