A systematic assessment has found that clinical practice guidelines for osteoporosis screening vary in quality and that their recommendations often differ. Lamia H. Hayawi, a research assistant at Pallium Canada, Ottawa, and colleagues used the Appraisal of Guidelines for Research and Evaluation (AGREE II) instrument and the Institute of Medicine standards for trustworthy guidelines to measure guideline quality. The researchers found that guideline quality has not improved over the 14-year period studied. Their findings were published online in PLoS One.1
Clinical practice guidelines consist of recommendations for the assessment and/or management of a specific disease. In 2010, an international team of researchers developed the AGREE II instrument to define the essential components of a good guideline. The tool is comprehensive and covers implementation and dissemination issues related to guidelines; however, it does not assess the content of the guidelines themselves. In 2011, the Institute of Medicine standards were created to aid in the development of high-quality, evidence-based guidelines. Among other things, these standards evaluate the evidence underpinning a guideline. Both tools evaluate the influence of funding bodies and conflicts of interest.
The researchers identified and assessed 33 guidelines for osteoporosis screening that were published in English between 2002 and 2016 across 13 countries. Although the guidelines were based on country-specific data and cost-effectiveness analyses and would naturally vary by country, the authors found that recommendations differed even within the same country. Recommendations varied most for screening of individuals without previous fractures and were most consistent on the sites for bone mineral density testing.
When the authors analyzed the guidelines using the AGREE II instrument, they found that the highest mean domain scores were for clarity of presentation and for scope and purpose, and the lowest were for applicability and editorial independence. Moreover, most guideline developers did not seek the views and preferences of patients when developing their guidelines.
“By assessing the compliance of guidelines to the criteria of the [Institute of Medicine] standards, we found that 64–67% of guidelines fulfilled the standards for establishing evidence, strength of recommendations and systematic review standards,” write the authors. “However, most guidelines fell short in involving patients and public representatives in their guideline development and didn’t adequately describe the method for external review. Though, the [Institute of Medicine] standards were developed in 2011, we found few studies that assessed the quality of [clinical practice guidelines] using these standards.”