As we enter a new era of AI, regulatory frameworks governing its use must contend with systems that are more powerful and more opaque than ever. The machine learning algorithms currently regulated by the FDA are trained on labeled data to perform a specific task via a supervised learning approach, whereas foundation models are trained via self-supervised learning on unlabeled data.11 In the case of clinical language models (CLaMs) and foundation models for electronic medical records (FEMRs), those unlabeled data are biomedical text and patient medical history from the EHR, respectively.12 A foundation model may contain hundreds of billions of parameters, yet there is no labeled ground truth as there is in traditional machine learning, and no specific task to learn up front.12
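To make the distinction concrete, the sketch below contrasts the two training paradigms in PyTorch; the tensors, dimensions, and variable names are hypothetical stand-ins for illustration, not drawn from any cited model.

```python
import torch
import torch.nn as nn

# Supervised learning: each input is paired with a human-provided label.
# Hypothetical example: classifying a feature vector extracted from imaging.
inputs = torch.randn(32, 128)                 # batch of 32 feature vectors
labels = torch.randint(0, 2, (32,))           # human-annotated ground truth
classifier = nn.Linear(128, 2)
supervised_loss = nn.CrossEntropyLoss()(classifier(inputs), labels)

# Self-supervised learning: the "label" is derived from the data itself.
# Hypothetical example: next-token prediction on unlabeled clinical text.
note_tokens = torch.randint(0, 30000, (32, 64))  # tokenized notes
embed = nn.Embedding(30000, 128)
lm_head = nn.Linear(128, 30000)
logits = lm_head(embed(note_tokens[:, :-1]))     # predict token t+1 from t
self_supervised_loss = nn.CrossEntropyLoss()(
    logits.reshape(-1, 30000),
    note_tokens[:, 1:].reshape(-1),              # targets come from the text
)
```

The key difference is the source of the training signal: the supervised loss depends on annotations someone had to create, while the self-supervised loss is computed entirely from the raw data, which is what allows foundation models to train on unlabeled corpora at scale.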
Training clinical models requires massive datasets of patient information, shared among institutions and across borders, presenting new challenges for data protection and appropriate use.13 In the case of most FEMRs, the model weights (the learned parameters of a neural network that determine how it transforms inputs into outputs) are not widely available to the research community, necessitating retraining of models on new EMR data to validate performance.12
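As a brief illustration of why withheld weights force retraining, the sketch below uses PyTorch's standard save/load API; the model architecture and file name are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained FEMR-style model.
model = nn.Linear(128, 2)

# If the developers release the weights, another institution can load
# them directly and validate performance on its own patient data:
torch.save(model.state_dict(), "femr_weights.pt")    # developer publishes
validator = nn.Linear(128, 2)                        # same architecture
validator.load_state_dict(torch.load("femr_weights.pt"))

# If the weights are withheld, the validating institution must instead
# retrain the model from scratch on its own EMR data, which demands
# compute resources and data access it may not have.
```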
Bommasani et al. point out that accessibility is threatened both by the massive scale of foundation models and the vast resources required to interrogate them, resources rarely available to academic institutions, and by the concentration of foundation model development among a handful of large players in big tech.14
Assessing performance, algorithmic bias, and reliability, and ensuring privacy and safety, along with many other important measures, will become increasingly difficult in the era of foundation models.
As of October 2023, the FDA had not approved any device using generative AI or foundation models, including LLMs. The regulatory landscape is well outside the scope of this article, but it is fair to say that innovation in AI is outpacing oversight and regulation, both nationally and globally.
There are ethical considerations and potential concerns at every juncture in the application of AI in medicine: some already well defined, as exemplified by current ML-enabled devices; others only now emerging; and still more yet to be discovered in the rapidly changing landscape of generative AI.
In this article, we examine just a few of the many ethical considerations surrounding the current and projected use of AI in medicine, focusing on the bioethical principle of autonomy. Further discussion of ethics in medical AI is warranted; an exploration of justice, including bias, fairness, and equity, is of particular importance.