Autonomy
Autonomy is defined by Veatch et al. as “self-legislating, the ability to make decisions based on personal values, preferences and sense of self. The removal of barriers to informed and voluntary decision making.”15
Perhaps the most obvious example of where the principle of autonomy butts up against AI in medicine is patient privacy: the ownership of health information and consent for its use.
My home institution’s version of a Patient Bill of Rights states, “Your medical information will be kept confidential. In general, we will only share it with others if you give us permission or as otherwise permitted by law.”16 The relevant law is the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and its subsequently added Privacy Rule, which, among other provisions, allows healthcare providers to use identifiable protected health information (PHI) for research purposes, with or without consent, as deemed appropriate by an institutional review board (IRB).17 This provision has governed a vast amount of institutional research over the past 25 years and advanced the field of medicine. But the Privacy Rule explicitly does not apply to de-identified health information, the ownership of which remains unsettled throughout much of the U.S. As of 2015, several states assigned ownership rights to the healthcare facility, only New Hampshire assigned ownership to the patient, and more than half of states had no comparable law.18
Patients in the U.S. may be unaware that their individual health information, once anonymized, can be included in the large datasets used to train, validate and test ML algorithms, precisely because such de-identified data falls outside the Privacy Rule’s protections. Aggarwal et al. surveyed patients at a large teaching hospital in the U.K. and found that most were comfortable sharing health data with the National Health Service and academic institutions, but only 26.4% were comfortable sharing it with commercial organizations.19 Owing to a paucity of research in this specific area, less is known about the attitudes of U.S. patients toward AI in medicine, but we do know that trust in healthcare systems is declining.20
Char et al. describe a tension in U.S. healthcare between improving health outcomes and generating profit, one that only stands to grow with the explosion of AI and big tech in medicine.21 Industry overtook academia in 2014 and has surged ahead since, releasing 32 significant ML systems in 2022 compared with just three from academia that year.22 This gap has likely widened in the years since.
Collaboration between academic institutions and industry is well established and has proved fertile ground for many advances in medicine, including pharmaceuticals, but we must acknowledge that industry operates, by definition, under a for-profit business model.