REFERENCES

  • Angelos, P. 2010. The ethical challenges of surgical innovation for patient care. The Lancet 376 (9746):1046–7. doi: 10.1016/S0140-6736(10)61474-2.
  • Angus, D. C. 2020. Randomized clinical trials of artificial intelligence. JAMA 323 (11):1043–5. doi: 10.1001/jama.2020.1039.
  • Babic, B., S. Gerke, T. Evgeniou, and I. G. Cohen. 2019. Algorithms on regulatory lockdown in medicine. Science 366 (6470):1202–4. doi: 10.1126/science.aay9547.
  • Baily, M. A., M. Bottrell, J. Lynn, and B. Jennings. 2006. The ethics of using QI methods to improve health care quality and safety. The Hastings Center Report 36 (4):S1–S40. doi: 10.1353/hcr.2006.0054.
  • Bjerring, J. C., and J. Busch. 2021. Artificial intelligence and patient-centered decision-making. Philosophy & Technology 34 (2):349–71. doi: 10.1007/s13347-019-00391-6.
  • Brody, H., and F. G. Miller. 2013. The research-clinical practice distinction, learning health systems, and relationships. The Hastings Center Report 43 (5):41–7. doi: 10.1002/hast.199.
  • Broekman, M. L., M. E. Carrière, and A. L. Bredenoord. 2016. Surgical innovation: The ethical agenda. Medicine 95 (25):e3790. doi: 10.1097/MD.0000000000003790.
  • Burrell, J. 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society 3 (1). doi: 10.1177/2053951715622512.
  • Char, D. S., N. H. Shah, and D. Magnus. 2018. Implementing machine learning in health care—addressing ethical challenges. The New England Journal of Medicine 378 (11):981–3. doi: 10.1056/NEJMp1714229.
  • Churchill, L. R. 1980. Physician-investigator/patient-subject: Exploring the logic and the tension. The Journal of Medicine and Philosophy 5 (3):215–24. doi: 10.1093/jmp/5.3.215.
  • Cook, M. J., T. J. O'Brien, S. F. Berkovic, M. Murphy, A. Morokoff, G. Fabinyi, W. D'Souza, R. Yerra, J. Archer, L. Litewka, et al. 2013. Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: A first-in-man study. The Lancet Neurology 12 (6):563–71. doi: 10.1016/S1474-4422(13)70075-9.
  • Council for International Organizations of Medical Sciences (CIOMS). 2016. International ethical guidelines for health-related research involving humans. 4th ed. Geneva: Council for International Organizations of Medical Sciences (CIOMS).
  • Cruz Rivera, S., X. Liu, A.-W. Chan, A. K. Denniston, and M. J. Calvert, The SPIRIT-AI and CONSORT-AI Working Group. 2020. Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. The Lancet Digital Health 2 (10):e549–e560. doi: 10.1016/S2589-7500(20)30219-3.
  • Esteva, A., A. Robicquet, B. Ramsundar, V. Kuleshov, M. DePristo, K. Chou, C. Cui, G. S. Corrado, S. Thrun, and J. Dean. 2019. A guide to deep learning in healthcare. Nature Medicine 25 (1):24–9. doi: 10.1038/s41591-018-0316-z.
  • Evans, E. L., and D. Whicher. 2018. What should oversight of clinical decision support systems look like? AMA Journal of Ethics 20 (9):E857–E863.
  • Faden, R. R., N. E. Kass, S. N. Goodman, P. Pronovost, S. Tunis, and T. L. Beauchamp. 2013. An ethics framework for a learning health care system: A departure from traditional research ethics and clinical ethics. The Hastings Center Report 43 (s1):S16–S27. doi: 10.1002/hast.134.
  • Finkelstein, J. A., A. L. Brickman, A. Capron, D. E. Ford, A. Gombosev, S. M. Greene, R. P. Iafrate, L. Kolaczkowski, S. C. Pallin, M. J. Pletcher, et al. 2015. Oversight on the borderline: Quality improvement and pragmatic research. Clinical Trials 12 (5):457–66. doi: 10.1177/1740774515597682.
  • Food and Drug Administration (FDA). 2019. Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) – discussion paper and request for feedback. Silver Spring, MD: US Food & Drug Administration.
  • Food and Drug Administration (FDA). 2021. Artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) action plan. Silver Spring, MD: US Food & Drug Administration.
  • Food and Drug Administration (FDA). 2022. Clinical decision support software: Guidance for industry and Food and Drug Administration staff. Silver Spring, MD: US Food & Drug Administration.
  • Futoma, J., M. Simons, T. Panch, F. Doshi-Velez, and L. A. Celi. 2020. The myth of generalisability in clinical research and machine learning in health care. The Lancet Digital Health 2 (9):e489–e492. doi: 10.1016/S2589-7500(20)30186-2.
  • Genin, K., and T. Grote. 2021. Randomized controlled trials in medical AI: A methodological critique. Philosophy of Medicine 2 (1):1–15. doi: 10.5195/pom.2021.27.
  • Gerke, S., B. Babic, T. Evgeniou, and I. G. Cohen. 2020. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. NPJ Digital Medicine 3 (1):53. doi: 10.1038/s41746-020-0262-2.
  • Grote, T. 2022. Randomised controlled trials in medical AI: Ethical considerations. Journal of Medical Ethics 48 (11):899–906. doi: 10.1136/medethics-2020-107166.
  • Grote, T., and P. Berens. 2020. On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics 46 (3):205–11. doi: 10.1136/medethics-2019-105586.
  • Hatherley, J., and R. Sparrow. 2023. Diachronic and synchronic variation in the performance of adaptive machine learning systems: The ethical challenges. Journal of the American Medical Informatics Association 30 (2):361–6. doi: 10.1093/jamia/ocac218.
  • Hatherley, J., R. Sparrow, and M. Howard. 2022. The virtues of interpretable medical artificial intelligence. Cambridge Quarterly of Healthcare Ethics. Published online, 16 December 2022. doi: 10.1017/S0963180122000305.
  • Jia, Z., Z. Wang, F. Hong, L. Ping, Y. Shi, and J. Hu. 2020. Personalized deep learning for ventricular arrhythmias detection on medical IoT systems. In: ICCAD '20: Proceedings of the 39th International Conference on Computer-Aided Design, November 2–5, 2020, Virtual Event, USA, 1–9. New York, NY: Association for Computing Machinery. doi: 10.1145/3400302.3415774.
  • Jordan, M. I., and T. M. Mitchell. 2015. Machine learning: Trends, perspectives, and prospects. Science 349 (6245):255–60. doi: 10.1126/science.aaa8415.
  • Kass, N. E., R. R. Faden, S. N. Goodman, P. Pronovost, S. Tunis, and T. L. Beauchamp. 2013. The research‐treatment distinction: A problematic approach for determining which activities should have ethical oversight. The Hastings Center Report 43 (s1):S4–S15. doi: 10.1002/hast.133.
  • Katz, M. L., and C. Shapiro. 1985. Network externalities, competition, and compatibility. The American Economic Review 75 (3):424–40.
  • King, N. M. P., and L. R. Churchill. 2011. Assessing and comparing potential benefits and risks of harm. In The Oxford textbook of clinical research ethics, eds. E. J. Emanuel, C. C. Grady, R. A. Crouch, R. K. Lie, F. G. Miller, and D. D. Wendler, 514–526. Oxford: Oxford University Press.
  • Kirkpatrick, J., R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America 114 (13):3521–6. doi: 10.1073/pnas.1611835114.
  • Largent, E. A., F. G. Miller, and S. Joffe. 2013. A prescription for ethical learning. The Hastings Center Report 43 (s1):S28–S29. doi: 10.1002/hast.135.
  • Li, J., L. Jin, Z. Wang, Q. Peng, Y. Wang, J. Luo, J. Zhou, Y. Cao, Y. Zhang, M. Zhang, et al. 2023. Towards precision medicine based on a continuous deep learning optimization and ensemble approach. NPJ Digital Medicine 6 (1):18. doi: 10.1038/s41746-023-00759-1.
  • Litton, P., and F. Miller. 2005. A normative justification for distinguishing the ethics of clinical research from the ethics of medical care. The Journal of Law, Medicine & Ethics 33 (3):566–74. doi: 10.1111/j.1748-720x.2005.tb00519.x.
  • Liu, X., S. Cruz Rivera, D. Moher, M. J. Calvert, and A. K. Denniston, the SPIRIT-AI and CONSORT-AI Working Group. 2020. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. The Lancet Digital Health 2 (10):e537–e548. doi: 10.1016/S2589-7500(20)30218-1.
  • Lyell, D., E. Coiera, J. Chen, P. Shah, and F. Magrabi. 2021. How machine learning is embedded to support clinician decision making: An analysis of FDA-approved medical devices. BMJ Health & Care Informatics 28 (1):e100301. doi: 10.1136/bmjhci-2020-100301.
  • McCradden, M., J. A. Anderson, E. A. Stephenson, E. Drysdale, L. Erdman, A. Goldenberg, and R. Z. Shaul. 2022. A research ethics framework for the clinical translation of healthcare machine learning. The American Journal of Bioethics 22 (5):8–22. doi: 10.1080/15265161.2021.2013977.
  • McCradden, M. D., E. A. Stephenson, and J. A. Anderson. 2020. Clinical research underlies ethical integration of healthcare artificial intelligence. Nature Medicine 26 (9):1325–6. doi: 10.1038/s41591-020-1035-9.
  • Miller, F. G., and H. Brody. 2007. Clinical equipoise and the incoherence of research ethics. The Journal of Medicine and Philosophy 32 (2):151–65. doi: 10.1080/03605310701255750.
  • National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (NCPHSBBR). 1978. The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Washington, DC: Department of Health, Education, and Welfare.
  • Oakley, J. 2019. Virtues in research ethics: Developing an empirically-informed account of virtues in biomedical research practice. In Beyond autonomy: Limits and alternatives to informed consent in research ethics and law, ed. D. G. Kirchhoffer and B. J. Richards, 133–149. Cambridge: Cambridge University Press.
  • Olsen, L. A., D. Aisner, and J. M. McGinnis. 2007. The learning healthcare system. Washington, DC: National Academies Press.
  • Ong, C. S., E. Reinertsen, H. Sun, P. Moonsamy, N. Mohan, M. Funamoto, T. Kaneko, P. S. Shekar, S. Schena, J. S. Lawton, et al. 2021. Prediction of operative mortality for patients undergoing cardiac surgical procedures without established risk scores. The Journal of Thoracic and Cardiovascular Surgery 165 (4):1449–59.e15. doi: 10.1016/j.jtcvs.2021.09.010.
  • Park, Y., G. Purcell Jackson, M. A. Foreman, D. Gruen, J. Hu, and A. K. Das. 2020. Evaluating artificial intelligence in medicine: Phases of clinical research. JAMIA Open 3 (3):326–31. doi: 10.1093/jamiaopen/ooaa033.
  • Pinto, M. F., A. Leal, F. Lopes, A. Dourado, P. Martins, and C. A. Teixeira. 2021. A personalized and evolutionary algorithm for interpretable EEG epilepsy seizure prediction. Scientific Reports 11 (1):3415. doi: 10.1038/s41598-021-82828-7.
  • Porumb, M., S. Stranges, A. Pescapè, and L. Pecchia. 2020. Precision medicine and artificial intelligence: A pilot study on deep learning for hypoglycemic events detection based on ECG. Scientific Reports 10 (1):170. doi: 10.1038/s41598-019-56927-5.
  • Price, W. N., and I. G. Cohen. 2019. Privacy in the age of medical big data. Nature Medicine 25 (1):37–43. doi: 10.1038/s41591-018-0272-7.
  • Rajczi, A. 2004. Making risk-benefit assessments of medical research protocols. The Journal of Law, Medicine & Ethics 32 (2):338–48, 192. doi: 10.1111/j.1748-720x.2004.tb00480.x.
  • Rajkomar, A., J. Dean, and I. Kohane. 2019. Machine learning in medicine. The New England Journal of Medicine 380 (14):1347–58. doi: 10.1056/NEJMra1814259.
  • Rajpurkar, P., E. Chen, O. Banerjee, and E. J. Topol. 2022. AI in health and medicine. Nature Medicine 28 (1):31–8. doi: 10.1038/s41591-021-01614-0.
  • Rieke, N., J. Hancox, W. Li, F. Milletarì, H. R. Roth, S. Albarqouni, S. Bakas, M. N. Galtier, B. A. Landman, K. Maier-Hein, et al. 2020. The future of digital health with federated learning. NPJ Digital Medicine 3 (1):119. doi: 10.1038/s41746-020-00323-1.
  • Rogers, W., K. Hutchison, and A. McNair. 2019. Ethical issues across the IDEAL stages of surgical innovation. Annals of Surgery 269 (2):229–33. doi: 10.1097/SLA.0000000000003106.
  • Rowell, D., and L. B. Connelly. 2012. A history of the term ‘moral hazard’. Journal of Risk and Insurance 79 (4):1051–75. doi: 10.1111/j.1539-6975.2011.01448.x.
  • Shortliffe, E. H., and M. J. Sepúlveda. 2018. Clinical decision support in the era of artificial intelligence. JAMA 320 (21):2199–200. doi: 10.1001/jama.2018.17163.
  • Sparrow, R., and J. Hatherley. 2019. The promise and perils of AI in medicine. International Journal of Chinese & Comparative Philosophy of Medicine 17 (2):79–109. doi: 10.24112/ijccpm.171678.
  • Sparrow, R., and J. Hatherley. 2020. High hopes for ‘deep medicine’? AI, economics, and the future of care. The Hastings Center Report 50 (1):14–7. doi: 10.1002/hast.1079.
  • Suresh, H., and J. V. Guttag. 2019. A framework for understanding unintended consequences of machine learning. Accessed March 1, 2022. https://arxiv.org/abs/1901.10002.
  • Svensson, A. M., and F. Jotterand. 2022. Doctor ex machina: A critical assessment of the use of artificial intelligence in health care. The Journal of Medicine and Philosophy 47 (1):155–78. doi: 10.1093/jmp/jhab036.
  • Topol, E. J. 2020. Welcoming new guidelines for AI clinical research. Nature Medicine 26 (9):1318–20. doi: 10.1038/s41591-020-1042-x.
  • van de Ven, G. M., and A. S. Tolias. 2019. Three scenarios for continual learning. Accessed March 1, 2022. https://arxiv.org/abs/1904.07734.
  • Vayena, E., A. Blasimme, and I. G. Cohen. 2018. Machine learning in medicine: addressing ethical challenges. PLoS Medicine 15 (11):e1002689. doi: 10.1371/journal.pmed.1002689.
  • Vokinger, K. N., S. Feuerriegel, and A. S. Kesselheim. 2021. Continual learning in medical devices: FDA's action plan and beyond. The Lancet Digital Health 3 (6):e337–e338. doi: 10.1016/S2589-7500(21)00076-5.
  • Yu, S., F. Farooq, A. van Esbroeck, G. Fung, V. Anand, and B. Krishnapuram. 2015. Predicting readmission risk with institution-specific prediction models. Artificial Intelligence in Medicine 65 (2):89–96. doi: 10.1016/j.artmed.2015.08.005.