In the simplest terms, anonymized patient health information, often collected by nonprofit medical centers, is used by for-profit companies to build AI-enabled systems that are then sold back to healthcare providers. This structure has enabled rapid innovation, but it often lacks transparency, leaving little room for patient choice or for consideration of patients' values and preferences. Establishing new research methodologies that protect patient autonomy and define the ethical responsibilities of AI researchers is crucial if we are to prevent the further erosion of trust in our healthcare system.
If we include the removal of barriers to voluntary decision making in our conceptualization of patient autonomy, then we must also reconsider how we approach informed consent.
Breast cancer screening already grapples with balancing the benefits of early detection against the risk of overdiagnosis, and AI-enabled software may amplify both. Freeman et al. conducted a systematic review of AI mammographic systems and found mixed results on accuracy compared with single and double human readers, as well as variable sensitivity and specificity, some of which was ascribed to intrinsic differences among AI systems.7
As these AI-enabled systems enter clinical practice, should their use be explained to the patient as part of the informed consent process? Can we quantify the added risk or benefit of a particular system for a particular patient, and does the patient have a choice in whether to receive this AI-enabled care?
The right to privacy is often linked with autonomy because the violation of privacy threatens an individual’s sense of self. Generative AI poses novel risks to patient privacy not seen in earlier ML algorithms. LLMs demonstrate memorization, and the larger the model, the more data it memorizes.
Carlini et al. showed that an LLM can memorize at least 1% of its training dataset, leaving significant data open to a variety of attacks, including “training data extraction attacks.”23 These malicious attacks have the potential to leak identifiable patient information and may represent a risk that is difficult to quantify, let alone guard against, given the inherent opacity of foundation models and their development in the private sector.
In Sum
As with all technological endeavors, acceleration is a given. Just as the World Wide Web triggered an explosion of information transmission across the globe, AI will quickly permeate our lives, inside and outside medicine. Although work remains to be done, we can already see the potential benefits of AI, for both our patients and our strained healthcare systems, in the use of ML algorithms for colon, breast and lung cancer detection.24