A data breach must typically be reported to an enforcement agency within the U.S. Department of Health and Human Services (HHS), with each affected patient case triggering an individualized and costly investigation, with fines in some cases reaching $50,000.4,5 The breach must also be disclosed to the affected party and to the public.5 One saving grace is that OpenAI does not always use or view the information, and it has procedures to delete accounts and their information within 30 days.5
In our humble opinion, it just doesn’t seem worth it. We would probably spend more time crafting verbiage to stay within compliance, and then rephrasing and editing that draft, than we would if we simply started from scratch.
Insurance Companies Test the Boundaries
It doesn’t seem that insurance companies are wrestling with their collective conscience. They have been using AI software to cut costs by unapologetically issuing broad denials.
If you feel personally victimized, you’re not alone, and you genuinely may have been. Class action lawsuits were filed against United Healthcare and Cigna in 2023 for automatically denying and overriding certain physician recommendations using flawed AI algorithms, without ever actually opening or reviewing the documents.6 In fact, their error rate exceeded 90%, with a further investigation revealing “that over a period of two months a Cigna doctor can deny 300,000 requests for payment and only spend an average of 1.2 seconds per case.”7,8
After a dozen years of education and training—not to mention the time we put into caring for each individual patient and documenting each cerebral thought—we can then get fraudulently told “no” by a robot (not a peer) in under two seconds? Not only is our time and professional input being ignored and undervalued, but our patients are also experiencing potentially serious delays to appropriate treatment. This is unethical. The current strategy of cutting corners by insurance companies is not new, and the misappropriation of AI may continue unless we shed light on this unethical practice and advocate for our patients and ourselves.
In Sum
We can learn from the insurance companies’ mistakes in using this new platform to improve work efficiency, but we need to be mindful and educated in how to use AI safely and fairly. Could we minimize the input provided while maximizing its utility? Could we double-check that no protected health information is being disclosed, and then have at it? Or do we need to wait until the software is integrated into our electronic health record systems and let the worry of committing a data breach float away from our subconscious?