Since the release of ChatGPT in November 2022, large language models (LLMs) have entered the mainstream and taken the world by storm. LLMs appear in the news frequently, with such headlines as “ChatGPT Passed the USMLE [U.S. medical license exam].”1 What does this mean for medicine? How will artificial intelligence be integrated into the field, and will it eventually replace physicians?
The short answer is no: Physicians will not be replaced—at least not for the foreseeable future. However, physicians need to learn how to use LLMs and other technological advances effectively, both to manage an ever-increasing workload and to benefit patients. This article explores the future of LLMs, such as ChatGPT, in medicine, while also addressing some of the associated legal concerns.
What Are LLMs?
Many of us have interacted with some form of artificial intelligence (AI). For example, Siri and Alexa are both forms of AI that many of us interact with daily.2 According to an article from WEKA, generative AI is a type of machine learning that allows users to generate new content. LLMs are a specific type of generative AI focused on generating natural language text. LLMs are trained on large text datasets, including articles, books and websites. They learn the patterns and structures of language and use this knowledge to answer questions and generate written content. The output is called natural language text because it reads like ordinary human writing; in fact, it can be difficult to distinguish generative AI text from text written by a human.3
Recently, LLMs have become almost synonymous with ChatGPT; however, ChatGPT is not alone in the space. Other generative AI tools include Jenni, Bing AI, GitHub Copilot and DALL-E. Jenni advertises itself as being able to help users “write faster and better.” DALL-E generates images rather than text. Microsoft has integrated an LLM into its Bing search engine. GitHub Copilot assists programmers by autocompleting code. For the scope of this article, however, we mainly focus on LLMs that generate natural language text, such as ChatGPT and Bard.
Uses in Medicine
LLMs’ strengths lie in their ability to summarize information. This can be helpful when physicians want to summarize topics for patients. For example, an LLM allows a physician to tailor a handout to a patient’s individual needs. Although physicians need to check and proofread the material LLMs generate, these tools can significantly reduce the time and effort required to develop such handouts.
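To make this concrete, below is a minimal sketch of what a handout request could look like through the OpenAI Python library. The model name, prompt wording and reading-level instruction are illustrative assumptions, not a vetted clinical workflow, and the output would still require physician review before reaching a patient.

```python
# Minimal sketch of drafting a patient handout with the OpenAI Python library.
# The model name and prompt wording are illustrative assumptions; any output
# must be reviewed by a physician, and no patient identifiers belong in prompts.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def draft_handout(topic: str, reading_level: str = "8th grade") -> str:
    """Ask the model for a patient-friendly handout on a general medical topic."""
    response = client.chat.completions.create(
        model="gpt-4",  # hypothetical choice; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You draft patient education handouts for a rheumatology clinic."},
            {"role": "user",
             "content": f"Write a one-page handout on {topic} at a {reading_level} "
                        "reading level, covering what it is, common symptoms and "
                        "when to call the clinic."},
        ],
    )
    return response.choices[0].message.content

print(draft_handout("managing a gout flare at home"))
```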
Another possible application of LLMs is in responding to patient emails or MyChart messages. An LLM can draft a cohesive reply from the key points a physician supplies, decreasing the time spent formulating responses. Of note, patient information and data protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) should not be put into LLMs. More on this later.
LLMs can improve communication between physicians and patients who do not speak English. LLMs can translate handouts, MyChart messages and email correspondence into patients’ native languages. This can lead to more transparent and more effective communication, allowing physicians to better respond to the needs of their patients and reducing errors.
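Translation fits the same pattern. The sketch below is a small variation on the one above, with the same caveats: the prompt and model name are assumptions, and machine-translated medical content should still be verified, ideally by a qualified medical interpreter.

```python
# Variation on the handout sketch: request the text in the patient's language.
# The prompt and model name remain illustrative assumptions; translated medical
# content should be verified by a qualified interpreter before use.
from openai import OpenAI

client = OpenAI()

def draft_translated_handout(topic: str, language: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Write a one-page patient handout on {topic} in "
                              f"{language}, at an 8th grade reading level."}],
    )
    return response.choices[0].message.content

print(draft_translated_handout("methotrexate safety", "Spanish"))
```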
Could LLMs aid clinicians in writing prior authorizations? While LLMs provide excellent summaries and rough drafts for clinicians, rheumatologists and rheumatology professionals must carefully review and refine the content. Prior authorizations often require detailed and specific patient information that LLMs may not provide. The final responsibility and liability still fall on the physician, who must ensure the accuracy and completeness of any prior authorization request.
Doximity recently released DocsGPT.com, which integrates ChatGPT with Doximity to help “cut the scut,” as advertised. When ChatGPT was first released, some touted its ability to cite literature. We warn against using ChatGPT for this purpose because it has been known to fabricate sources, a failure mode known as “hallucination.”4 DocsGPT tries to address this issue by warning the user to edit for accuracy before sending and by including parentheses for relevant information the physician should insert.
Another application for LLMs would be as an add-on to the electronic health record, where they could summarize patient health information to let physicians review a chart more easily. A simple query, such as “What is this patient’s rheumatology history?” would return a list of diagnoses, laboratory values and previously prescribed medications.
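As a purely hypothetical sketch of what such a query could look like, the code below formats a fabricated, de-identified chart extract into a prompt. Nothing here reflects an actual EHR interface, and a real integration would have to run inside a HIPAA-compliant environment under a business associate agreement.

```python
# Hypothetical sketch of an EHR "chart summary" query. The data structure,
# prompt and model name are invented for illustration; a real integration
# would need a HIPAA-compliant deployment and a business associate agreement.
from openai import OpenAI

client = OpenAI()

# Fabricated, de-identified data standing in for a structured chart extract.
chart = {
    "diagnoses": ["rheumatoid arthritis", "osteopenia"],
    "labs": {"CRP": "12 mg/L", "anti-CCP": "positive"},
    "medications": ["methotrexate 15 mg weekly", "folic acid 1 mg daily"],
}

question = "What is this patient's rheumatology history?"
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Summarize structured chart data for a rheumatologist. "
                    "State only facts present in the data."},
        {"role": "user", "content": f"Chart data: {chart}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```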
Additionally, LLMs could assist physicians during patient visits by suggesting additional laboratory tests, imaging and medications associated with the patient’s condition. In the future, LLMs may play a more significant role in diagnosis, analyzing a patient’s profile to suggest appropriate medications. This would augment the physician’s clinical decision making, not replace it.
Nuance recently launched DAX Express, which combines its dictation technology with OpenAI’s GPT-4 and is advertised as being able to listen to audio from a patient encounter and generate a note.5 This is part of an ongoing effort to reduce physicians’ workloads. Moreover, it may restore the human connection in medicine and strengthen the physician-patient relationship. For example, it allows the physician to maintain eye contact with a patient instead of looking at a computer screen, creating a more patient-centered atmosphere.
LLMs offer healthcare team members a wealth of possibilities, including summarizing information, generating patient-specific handouts and, potentially, suggesting diagnostic and treatment options in the future. Nevertheless, it is essential that clinicians thoroughly review and refine the content created by LLMs because its accuracy cannot be guaranteed. Moreover, LLMs can enhance communication and minimize errors, especially when communicating with non-English-speaking patients. As technology advances, the role of LLMs in healthcare is destined to expand.
Limitations
At this time, one of the main limitations of using LLMs in medicine is their inability to generate accurate citations. While using ChatGPT to write a case report, I realized it had generated statistics I could not find in the literature. When I asked for a citation, ChatGPT generated a fake one, complete with a digital object identifier (DOI). At first glance, the citation did not appear fake; only when I searched for the article did I discover it was nowhere to be found. This is concerning.
Physicians must understand what LLMs are designed to do, as well as their limitations.
Although LLMs cannot currently cite sources accurately, I predict that in the near future, LLMs will be able to provide comprehensive literature reviews complete with citations. In fact, LLMs may come to rival such medical resources as UpToDate.
Physician Well-Being
AI is permeating medicine, and physicians need to familiarize themselves with its potential implications. To fully maximize the benefits of LLMs in healthcare, physicians should advocate for solutions to their top patient care concerns, ensuring these tools address their specific needs and challenges. By taking an active role in shaping the development and implementation of LLMs in medicine, physicians can ensure these tools serve their interests and those of their patients.
Numerous products claim to reduce administrative work and alleviate physician burnout, but could LLMs be the solution? Physicians are tired of hearing from employers that they need to exercise more or meditate to reduce burnout. In a time of increasing physician burnout, healthcare employers have a duty to help alleviate the administrative burden placed on physicians. Doing so will allow physicians to devote more time to caring for patients. LLMs offer a promising solution to this problem.
Legal Implications
As LLMs become more prevalent in healthcare, physicians need to be mindful of the legal implications of using them to share patient information. The benefits of LLMs include customizing patient handouts, summarizing medical topics and improving patient care. However, HIPAA compliance is of particular concern. It is important to note that no case law currently discusses this new age of LLMs. However, based on existing case law, it is possible to analyze and understand the legal implications of using LLMs in healthcare.
HIPAA is a federal law that sets national standards for protecting the privacy and security of patients’ health-related information, among other things.6 Physicians must follow HIPAA regulations to avoid potential legal consequences.7 While HIPAA provides a minimum standard for health information privacy, states can enact more stringent rules to better protect their citizens.8 Make sure to learn the patient privacy and security rules in your state prior to using LLMs to aid in patient care.
One of HIPAA’s primary functions is to ensure patients have control over their health-related information.9 This includes, but is not limited to, any information related to a patient’s physical or mental health, the medical care they received or the cost of their medical care.10 If a patient refuses consent, their health-related information cannot be shared with an LLM.11 A physician who violates HIPAA by sharing protected health information without consent may face civil or criminal penalties.12 However, only the Secretary of the U.S. Department of Health and Human Services can enforce these penalties, not the individual affected by the violation.13 In other words, HIPAA provides no private right of action: A patient cannot sue a physician for violating its security and privacy requirements.
Integrating an LLM into an existing electronic health record, such as Epic, may help ensure that all patient information is stored and shared through a secure system. However, physicians should exercise caution when using LLMs and ensure they fully comply with HIPAA regulations.
While physicians need to be mindful of the legal implications of using LLMs to share patient information, they can also benefit from these powerful tools in ways that do not require a patient’s consent. If a physician uses an LLM to generate general health information without including any identifiable patient data, they would likely be safe from HIPAA violations. For example, a physician could use an LLM to summarize current research on rheumatoid arthritis, provide general tips for symptom management or describe common medication side effects. In these cases, the physician would not share any health-related information that could identify a specific patient.
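To illustrate the general idea of keeping identifiers out of prompts, the sketch below screens text for a few obvious patterns before it is sent. This naive check is nowhere near sufficient for HIPAA de-identification, which under the Safe Harbor method requires removing 18 categories of identifiers; it is only meant to show what such a guard might look like in code.

```python
# Naive illustration of a "no identifiers in the prompt" guard. A handful of
# regex patterns cannot de-identify text for HIPAA purposes (the Safe Harbor
# method requires removing 18 categories of identifiers); this only sketches
# the general idea of screening a prompt before it leaves the clinic.
import re

IDENTIFIER_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",        # SSN-like numbers
    r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",  # dates
    r"\bMRN[:\s]*\d+\b",             # medical record numbers (hypothetical format)
    r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",  # phone numbers
]

def looks_identifiable(text: str) -> bool:
    """Return True if the text matches any obvious identifier pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in IDENTIFIER_PATTERNS)

prompt = "Summarize current first-line treatments for rheumatoid arthritis."
if looks_identifiable(prompt):
    raise ValueError("Possible patient identifier detected; do not send.")
# Otherwise the prompt can be sent to the LLM as in the earlier sketches.
```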
Conclusion
LLMs offer significant benefits in the field of medicine. They have the potential to assist physicians in generating patient handouts, responding to patient messages and, in the future, even suggesting diagnostic and treatment options. However, LLMs cannot replace the expertise of physicians in providing humanistic care to patients. Instead, they can help physicians manage an ever-increasing workload and improve communication between patients and clinicians.
Despite the advantages, the legal concerns surrounding LLMs cannot be ignored. Currently, very little law directly addresses this new age of large language models. However, with the recent release of a proposed federal bill, it is clear these concerns are beginning to be addressed. As the use of LLMs in medicine continues to evolve, it will be important to establish clear guidelines and regulations to ensure their responsible use and prevent any potential harm.
Jacqueline Jansz, MD, is a second-year resident at the University of Illinois at Chicago. She plans to apply for rheumatology fellowship this year and has a special interest in the use of artificial intelligence in healthcare.
Peter T. Sadelski, JD, is an Illinois-based attorney who focuses on civil rights and employment law. He has a strong skill set in legal writing and research, which allows him to provide his clients with effective representation and clear explanations of legal concepts.
References
1. Lubell J. ChatGPT passed the USMLE. What does it mean for med ed? American Medical Association. 3 Mar 2023. http://www.ama-assn.org/practice-management/digital/chatgpt-passed-usmle-what-does-it-mean-med-ed.
2. Marr B. Are Alexa and Siri considered AI? Bernard Marr & Co. 15 Jul 2021. https://bernardmarr.com/are-alexa-and-siri-considered-ai/.
3. Generative AI: Understanding the next wave of artificial intelligence. WEKA. 27 Mar 2023. https://www.weka.io/learn/ai/generative-ai-understanding-the-next-wave-of-artificial-intelligence/.
4. Metz C. What makes A.I. chatbots go wrong? The curious case of the hallucinating software. The New York Times. 29 Mar 2023. Updated 4 Apr 2023. https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html.
5. Capoot A. OpenAI-powered app from Microsoft will instantly transcribe patient notes during doctor visits. CNBC. 20 Mar 2023. www.cnbc.com/2023/03/20/microsoft-nuance-announce-clinical-notes-application-powered-by-openai.html.
6. Jackson v. Wexford Health Sources, Inc., et al. No. 3:2020cv00900.
7. Stewart v. Parkview Hosp. No. 19-1747 (7th Cir. 2019).
8. Ella G. Alexander Wade v. Felice A. Vabnick-Wener, MD. No. 2:09-cv-2275-V.
9. Law v. Zuckerman. 307 F. Supp. 2d 705 (D. Md. 2004).
10. Northwestern Memorial Hospital v. Ashcroft. 362 F.3d 923, 933 (7th Cir. 2004).
11. Northwestern Memorial Hospital v. Ashcroft. 362 F.3d 923, 929 (7th Cir. 2004).
12. Jackson v. Powers. No. 22-CV-496-JPS, 2022 WL 4448919, at *2 (E.D. Wis. Sept. 23, 2022).
13. Stewart v. Parkview Hospital. 940 F.3d 1013, 1015 (7th Cir. 2019).