Chatbots are not a new concept, but they have recently gained popularity and traction. Launched in late 2022, ChatGPT (Chat Generative Pre-Trained Transformer) is a web-based platform designed to simulate interactive conversations and deliver real-time responses. It has quickly become a tool that provides instantaneous information that can be more focused than a Google search.1 We, like many of our peers, quickly became amused and excited at the prospect of using a new digital assistant to optimize our workflow.
A challenge familiar to everyone in rheumatology is the constant effort of creating prior authorization requests and, later, tackling the appeal process. In fact, after weeks of fighting with an insurance company over an off-label use of an *insert expensive biologic medication name here* for an *insert rare rheumatologic condition here*, the temptation to use this new robotic assistant to draft an appeal letter became hard to resist, even though institutional templates were available. However, before entering the prompt, we had second thoughts, particularly about a potential ethical dilemma.
Ethical Quandary
Suddenly, new questions arose: How specific could we get? Would the data be stored? Would this be considered a breach of patient autonomy and privacy? By feeding this chatbot information about an individual’s rare diagnosis, are we inadvertently compromising confidentiality and disclosing protected health information (PHI)? To what extent can we safely expose pertinent pieces of the puzzle without unintentionally revealing PHI?
Within the scope of the 18 PHI identifiers, the more obvious ones include name, address, birthdate, medical record number and Social Security number, as well as demographic data (e.g., age, gender, race) that relate to “an individual’s past, present or future physical or mental health or condition.”2,3 Less intuitive identifiers, however, include geographic information smaller than a state of residence, admission/discharge dates and any unique identifying characteristics, such as comorbidities and previous treatments tried and failed.3,4 The latter are exactly the details needed to customize such a letter while also maximizing efficiency.
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) implemented a Privacy Rule that aims to protect health information and patient confidentiality while still permitting its use to uphold high-value care.2 Clinicians are required to undergo HIPAA training, and that training now needs to address how to avoid the pitfalls of privacy breaches when using artificial intelligence (AI)-enabled tools.
Even with the best of intentions, unintentional HIPAA violations occur regularly. The information fed into ChatGPT is not confidential; it is submitted to, and stored on, the servers of the company that owns it, OpenAI, which is not a protected health privacy network.5 Such a violation could expose a clinician to legal liability, so be forewarned.4