Chatbots are not a new concept, but they have recently gained popularity and traction. Launched in late 2022, ChatGPT (Chat Generative Pre-trained Transformer) is a web-based platform designed to simulate interactive conversations and deliver real-time data. It has quickly become a tool that provides instantaneous information that can be more focused than a Google search.1 We, like many of our peers, quickly became amused and excited at the prospect of using a new digital assistant to optimize our workflow.
A problem familiar to everyone in rheumatology is the constant effort of creating prior authorization requests and, later, tackling the appeal process. In fact, after weeks of fighting with an insurance company over an off-label use of an *insert expensive biologic medication name here* for an *insert rare rheumatologic condition here*, the temptation to have this new robotic assistant draft an appeal letter became hard to resist, even though templates were available through our institution. Before entering the prompt, however, we had second thoughts, chief among them an ethical dilemma.
Ethical Quandary
Suddenly, new questions arose: How specific could we get? Would the data be stored? Would this be considered a breach of patient autonomy and privacy? By feeding this chatbot information about an individual’s rare diagnosis, are we inadvertently compromising confidentiality and disclosing protected health information (PHI)? To what extent can we safely expose pertinent pieces of the puzzle without unintentionally revealing PHI?
Within the scope of the 18 PHI identifiers, the more obvious ones include name, address, birth date, medical record number and Social Security number, as well as demographic data (e.g., age, gender, race) that relate to “an individual’s past, present or future physical or mental health or condition.”2,3 Less intuitive identifiers, however, include geographic information smaller than a state of residence, admission and discharge dates, and any unique identifying characteristics, such as comorbidities and previous treatments tried and failed.3,4 The latter are exactly the details needed to customize such a letter while maximizing efficiency.
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established a Privacy Rule that aims to protect health information and patient confidentiality while still permitting its use to support high-value care.2 Clinicians are required to complete HIPAA training, and that training now needs to address how to avoid the pitfalls of privacy breaches when using artificial intelligence (AI)-enabled tools.
Even with the best of intentions, unintentional HIPAA violations occur on a regular basis. Information fed into ChatGPT is not confidential; it is submitted to, and stored on, the servers of the company that owns the platform, OpenAI, which is not a protected health privacy network.5 Such a violation could subject one to legal trouble, so be forewarned.4
A data breach must typically be reported to an enforcement agency within the U.S. Department of Health and Human Services (HHS), and each affected patient case can lead to an individualized and costly investigation, with fines of up to $50,000 per violation in some cases.4,5 A breach would also have to be disclosed to the affected party and the public.5 One saving grace is that OpenAI does not always use or view the information, and it has procedures to delete accounts and information within 30 days.5
In our humble opinion, it just doesn’t seem worth it. We would probably spend more time choosing verbiage that stays within compliance, then rephrasing and editing the draft, than we would if we simply started from scratch.
Insurance Companies Test the Boundaries
It doesn’t seem that insurance companies are wrestling with their collective conscience. They have been using AI software to cut costs by unapologetically issuing broad denials.
If you feel personally victimized, you’re not alone, and you genuinely may have been. Class action lawsuits were filed against UnitedHealthcare and Cigna in 2023 for automatically denying and overriding certain physician recommendations using flawed AI algorithms, without ever actually opening or reviewing the documents.6 In fact, the reported error rate was in excess of 90%, with a further investigation revealing “that over a period of two months a Cigna doctor can deny 300,000 requests for payment and only spend an average of 1.2 seconds per case.”7,8
After a dozen years of education and training—not to mention the time we put into caring for each individual patient and documenting each cerebral thought—we can then be fraudulently told “no” by a robot (not a peer) in under two seconds? Not only are our time and professional input being ignored and undervalued, but our patients are also experiencing potentially serious delays in receiving appropriate treatment. This is unethical. Insurance companies’ strategy of cutting corners is not new, and the misappropriation of AI may continue unless we shed light on this unethical practice and advocate for our patients and ourselves.
In Sum
We can learn from the insurance companies’ mistakes in using this new platform to improve work efficiency, but we need to be mindful and educated in how to use AI safely and fairly. Could we minimize the input provided while maximizing its utility? Could we double-check that none of the PHI identifiers are being disclosed, and then have at it? Or do we need to wait until the software is integrated into our electronic health record systems and let the worry of committing a data breach float away from our subconscious?
The bottom line: ChatGPT can be your assistant, but not one trustworthy enough to keep a secret. So make sure specific personal information is withheld and HIPAA compliance is maintained.
Biana Modilevsky, DO, is a rheumatology fellow at the University of Arizona Arthritis Center, Tucson.
Kabita Nanda, MD, is an associate professor of pediatrics at Seattle Children’s Hospital and University of Washington School of Medicine.
References
1. Marr B. A short history of ChatGPT: How we got to where we are today. Forbes. 2023 May 19. https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/?sh=311bcfc0674f.
2. Summary of the HIPAA privacy rule. U.S. Department of Health and Human Services. 2022 Oct 19. https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html.
3. The 18 HIPAA identifiers. Loyola University Chicago Information Technology Services. 2024. https://www.luc.edu/its/aboutus/itspoliciesguidelines/hipaainformation/the18hipaaidentifiers/.
4. What are the penalties for HIPAA violations? The HIPAA Journal. https://www.hipaajournal.com/what-are-the-penalties-for-hipaa-violations-7096/#whatconstitutesahipaaviolation.
5. Kanter GP, Packel EA. Health care privacy risks of AI chatbots. JAMA. 2023 Jul 25;330(4):311–312.
6. Lopez I. UnitedHealthcare accused of AI use to wrongfully deny claims (1). Bloomberg Law. 2023 Nov 14. https://news.bloomberglaw.com/health-law-and-business/unitedhealthcareaccused-of-using-ai-to-wrongfully-deny-claims.
7. Case 0:23-cv-03514. United States District Court, District of Minnesota. 2023 Nov 14. https://aboutblaw.com/bbs8.
8. Rucker P. How Cigna saves millions by having its doctors reject claims without reading them. ProPublica. 2023 Mar 25. https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims.