Research Article

Moral Engagement and Disengagement in Health Care AI Development

Published online: 08 Apr 2024

Abstract

Background

Machine learning (ML) is increasingly used in health care and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known about how developers themselves view their obligations to mitigate harms.

Methods

We conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.

Results

Participants varied widely in their perspectives on personal responsibility, and their responses included examples of both moral engagement and disengagement in a variety of forms. Although most participants (70%) made at least one statement indicative of moral engagement, most of these statements reflected only an awareness of moral issues; a smaller subset included additional elements of engagement, such as recognizing responsibility, alignment with personal values, addressing conflicts of interest, and opportunities for action. Further, we identified eight distinct categories of moral disengagement, reflecting efforts to minimize potential harms or to deflect personal responsibility for preventing or mitigating them.

Conclusions

These findings suggest possible facilitators of and barriers to the development of ethical ML, which could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on ML developers' ability to recognize, accept, and act on responsibility for mitigating harms may have limited success without education and guidance for developers about the extent of their responsibilities and how to fulfill them.

Acknowledgements

This work was supported by grants from The Greenwall Foundation and the National Institutes of Health (R01HG010476). C.A.F. was supported on a training grant from the National Institutes of Health (T32 HG008953).

Disclosure statement

No potential conflict of interest was reported by the author(s).
