Opioid prescribing began to increase appreciably in the 1990s. From 1990 to 1996, opioid prescriptions rose from 2 million to 8 million per year. Many states passed intractable pain acts intended to protect physicians who, in good faith, prescribed chronic opioid therapy for chronic pain.3
Pain became the fifth vital sign, advocated by the American Pain Society Quality of Care Committee in 1995.4 The Centers for Medicare and Medicaid Services employed a patient satisfaction survey in determining reimbursement for hospital services that asked: “How often did the hospital or provider do everything in their power to control your pain?” Obviously, if you measure a vital sign and it is abnormal, something must be done to address it. The quality of pain relief became a common topic of patient satisfaction surveys in general, and physicians were criticized for poor patient satisfaction scores when they refused to prescribe medications for pain.
In 1995, sustained-release oxycodone (OxyContin) was approved, and the FDA-approved labeling stated that iatrogenic addiction was “very rare.” The OxyContin tablet was purported to be abuse-resistant. In 2001, the Joint Commission on Accreditation of Healthcare Organizations introduced standards “as part of a national effort to address the widespread problem of underassessment and undertreatment of pain.”5,6
In 2010, propoxyphene was withdrawn from the market at the FDA’s request because of arrhythmia concerns, with the recommendation that it be replaced with codeine, morphine, or oxycodone.
Key Hypothesis Never Tested
The first big mistake was to assume that treating chronic non-malignant pain would be like treating the chronic pain of cancer patients in the last months of their lives.
This hypothesis was never tested; instead, a huge leap was taken in assuming that the treatment of chronic non-malignant pain would follow the model of cancer pain treatment. This was a logical and intuitive assumption, but our scientific, evidence-based approach to medical practice has taught us that a common-sense assumption often does not pass muster when subjected to a prospective, randomized, double-blind clinical trial.
So why was this hypothesis never tested? I suspect there are two important reasons. First, such trials cost millions of dollars and are financially feasible only for companies seeking FDA approval of an investigational medication. Second, trials conducted to gain FDA approval, especially for pain relief, typically run over a relatively short term, not the many years appropriate for a chronic condition. Who would conduct such a study over many years, and how would it be funded?