CHICAGO—Every minute, it seems, a new digital tool is introduced in medicine. Whether it’s a new digital measuring stick, a new data-crunching system or a new app, the tech tools form an endless convoy of options. But are they worth it? Will they really help you do your job better? Will they help patients feel better? Will they make things faster and easier? Does research back them up? A panel of experts gathered at the 2018 ACR/ARHP Annual Meeting to talk about just that.
The experts noted encouraging digital advances that appear poised to help in the realm of patient-reported outcomes, as well as to address challenges in collecting data, tracking flares, assessing medication adherence and patient education.
Patient-Reported Outcome Info
Clifton “Bing” Bingham, MD, director of the Johns Hopkins Arthritis Center, Baltimore, said PROMIS, or the Patient-Reported Outcomes Measurement Information System, is showing benefits in rheumatology. This 10-year project by the National Institutes of Health, now seeing more use in clinical practice, blends advanced information technology with psychometrics and qualitative, cognitive and health survey research to create a nimbler and more usable source of patient information. It provides a more accurate picture of how patients experience their disease than such commonly used tools as the Simplified Disease Activity Index, Clinical Disease Activity Index and Disease Activity Score, which tend to miss “things that patients are telling us are important,” Dr. Bingham said.
Many patient-reported outcome tools developed and used primarily in clinical trials may have limited relevance to the patients seen in clinical practice and may not be actionable by the provider, he said. In contrast, PROMIS measures such areas as pain, fatigue, physical functioning and emotional distress, making it relevant to many chronic and rheumatic diseases. Scores are reported as T-scores calibrated against the U.S. population: the population mean is 50 with a standard deviation of 10, so anything outside the 45–55 range stands out from the norm.
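To make that calibration concrete, here is a minimal sketch, in Python, of how a T-score might be read against a mean of 50 and a standard deviation of 10; the function names and the 45–55 check are illustrative conventions for this example only, not part of any PROMIS software.

```python
# Minimal sketch of the T-score convention described above: PROMIS scores are
# calibrated so the U.S. population mean is 50 and the standard deviation is 10.
# The function names and the 45-55 "typical" band are illustrative, not PROMIS software.

POPULATION_MEAN = 50.0
POPULATION_SD = 10.0


def standard_deviations_from_mean(t_score: float) -> float:
    """Distance of a T-score from the population mean, in SD units."""
    return (t_score - POPULATION_MEAN) / POPULATION_SD


def classify_t_score(t_score: float) -> str:
    """Flag scores outside the 45-55 band mentioned in the article.

    Whether a high score is good or bad depends on the domain: higher
    physical-function scores are better, while higher pain or fatigue
    scores are worse.
    """
    if 45.0 <= t_score <= 55.0:
        return "within the typical range"
    direction = "above" if t_score > POPULATION_MEAN else "below"
    distance = abs(standard_deviations_from_mean(t_score))
    return f"{distance:.1f} SD {direction} the population mean"


# A physical-function T-score of 38 sits 1.2 SD below the mean, i.e.,
# meaningfully worse function than the average U.S. adult.
print(classify_t_score(38))  # -> 1.2 SD below the population mean
print(classify_t_score(51))  # -> within the typical range
```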
Measures may be administered with computer adaptive testing: the first question sits in the middle of the range, and each subsequent question is positioned halfway between the latest response and the extreme on that side. Questions assessing physical function could include, “Are you able to get in and out of bed?” or “Are you able to stand without losing your balance for one minute?” This computer-adaptive questioning allows you to obtain a precise score quickly, Dr. Bingham said. Fixed-length short forms are also available.
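The question-selection logic Dr. Bingham describes works much like a bisection search. The short Python sketch below mirrors that intuition under stated assumptions: actual PROMIS computer adaptive testing selects items using item response theory, and the 0–100 difficulty scale and the can_do callback here are hypothetical.

```python
# Hypothetical illustration of the adaptive-questioning idea described above:
# start in the middle of the range and move halfway toward the extreme indicated
# by each answer. Real PROMIS computer adaptive testing chooses items using item
# response theory; this bisection-style sketch only mirrors the intuition, and
# the 0-100 difficulty scale and `can_do` callback are invented for illustration.

def adaptive_estimate(can_do, low=0.0, high=100.0, max_questions=7):
    """Estimate where a respondent falls on a 0-100 scale.

    `can_do(difficulty)` returns True if the respondent reports being able to
    perform a task of that difficulty (e.g., getting in and out of bed vs.
    standing without losing balance for a minute).
    """
    for _ in range(max_questions):
        midpoint = (low + high) / 2.0  # ask a question "in the middle"
        if can_do(midpoint):
            low = midpoint   # respondent can do it: probe harder tasks
        else:
            high = midpoint  # respondent cannot: probe easier tasks
    return (low + high) / 2.0


# A respondent who can manage tasks up to difficulty 62 is located quickly.
estimate = adaptive_estimate(lambda difficulty: difficulty <= 62)
print(round(estimate, 1))  # -> 62.1 after only seven questions
```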
The scoring system places patients along a wide spectrum that doesn’t have the floor or ceiling effects that can occur with other measures, he said. For example, a paper assessing PROMIS scores found rheumatoid arthritis (RA) patients scored anywhere from 2 standard deviations worse than the population norm to 2 standard deviations better across many domains. “For each patient, the PROMIS instrument was able to identify those patients and put them at a point along the continuum,” he said.
Over time, you can look at what is normal for a given patient and then assess the change, Dr. Bingham said. “It means that [for] someone who’s a marathon runner, you can actually evaluate when they can run only five miles or when they can only walk two miles.”
A comparison of patient scores on the Modified Health Assessment Questionnaire (mHAQ) and PROMIS found that although 40% of patients scored zero on the mHAQ, indicating no functional impairment, their physical function scores on PROMIS tilted toward worse function, providing a truer picture of what was going on.1 PROMIS scores also have been shown to approach or even exceed normative values when patients are in remission, demonstrating the potential rewards when a remission goal is reached.
Dr. Bingham said his group has been using PROMIS measures in clinical practice and studying their impact on patient-physician communication and decision making. “Patient-reported outcome information,” he said, “gave new information that clinicians, at the time of the visit, thought was actionable.”