In a randomized, controlled trial, the risk difference between groups is interpreted as a causal effect of the treatment, according to Seoyoung C. Kim, MD, ScD, MSCE, an associate professor of medicine in the Division of Pharmacoepidemiology and Pharmacoeconomics and the Division of Rheumatology, Inflammation and Immunity at Brigham and Women’s Hospital and Harvard Medical School, and an instructor in epidemiology at the Harvard T.H. Chan School of Public Health, Boston.
But when a randomized, controlled trial can’t be conducted, well-designed and well-executed observational analyses can be useful for causal inference. Dr. Kim says estimation of causal effects in such studies is challenging, but doable with careful methodological consideration.
Dr. Kim presented this and other information on the key concepts of causal inference and mediation analysis in a virtual course sponsored by the VERITY grant (Value and Evidence in Rheumatology Using Bioinformatics and Advanced Analytics) on March 4. Through this and other offerings, VERITY is helping promote highly rigorous research in clinical topics in rheumatology.
Causal Inference
In her presentation, Dr. Kim focused on topics related to causal inference, the process of determining the independent, actual effect of a particular factor within a larger system. These effects can be visualized with the help of directed acyclic graphs (DAGs), which can be used as tools to think through the possible causal pathways by which a variety of factors might interact.
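To make the idea concrete, a directed acyclic graph can be written down as a simple adjacency structure and checked for the "acyclic" property. The sketch below is illustrative only; the three node names (severity, treatment, outcome) are hypothetical and not taken from Dr. Kim's presentation.

```python
# A hypothetical three-node DAG: disease severity influences both whether a
# patient is treated and the outcome, and treatment influences the outcome.
dag = {
    "severity": ["treatment", "outcome"],
    "treatment": ["outcome"],
    "outcome": [],
}

def is_acyclic(graph):
    """Depth-first search for a back edge; a directed graph with none is a DAG."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY:   # back edge found: the graph contains a cycle
                return False
            if color[w] == WHITE and not visit(w):
                return False
        color[v] = BLACK
        return True

    return all(visit(v) for v in graph if color[v] == WHITE)

print(is_acyclic(dag))  # True: arrows point one way in time, with no loops
```

The acyclicity requirement mirrors the temporal logic of causal inference: causes must precede their effects, so no arrow can loop back on itself.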
Dr. Kim discussed multiple common mistakes researchers make in constructing their studies and repeatedly emphasized the importance of correct initial study design. Where appropriate, statistical methods such as multivariable adjustment, stratification and propensity score methods are also important to help minimize confounding. However, Dr. Kim added, “Even if you have all kinds of fancy statistical methods, if your design is wrong, it will not save your study.”
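One of those methods, stratification, can be demonstrated with a small simulation (not drawn from the presentation; the confounder "severity," the effect sizes and the probabilities are all invented for illustration). A confounder that drives both treatment and outcome inflates the crude comparison, while stratifying on it recovers the true effect.

```python
import random

random.seed(1)
n = 100_000
TRUE_EFFECT = 1.0  # the treatment effect we hope to recover

rows = []
for _ in range(n):
    severe = random.random() < 0.5                        # hypothetical confounder
    treated = random.random() < (0.8 if severe else 0.2)  # severity drives treatment
    # Severity also worsens the outcome, independent of treatment
    outcome = TRUE_EFFECT * treated + 2.0 * severe + random.gauss(0, 1)
    rows.append((severe, treated, outcome))

def mean_diff(rs):
    """Mean outcome among treated minus mean outcome among untreated."""
    t = [y for _, tr, y in rs if tr]
    c = [y for _, tr, y in rs if not tr]
    return sum(t) / len(t) - sum(c) / len(c)

crude = mean_diff(rows)  # confounded: substantially overstates the effect
# Stratify on the confounder, then pool the stratum-specific differences,
# weighting each stratum by its share of the sample
adjusted = sum(
    mean_diff([r for r in rows if r[0] == s]) * sum(1 for r in rows if r[0] == s) / n
    for s in (False, True)
)
```

Here the crude difference lands near 2.2 rather than the true 1.0, because severe patients are both more likely to be treated and more likely to have worse outcomes; the stratified estimate removes that distortion.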
Although several different kinds of observational studies are available to researchers, Dr. Kim emphasized that to infer a causal effect, the treatment exposure must occur before the outcomes are assessed. Thus, cross-sectional, case series or case-control studies are not well suited to causal inference.
Dr. Kim made an important distinction between common causes in a network of events (i.e., confounders) and common effects (i.e., colliders, in the language of causal inference). Confounders are variables that causally influence both the exposure and the outcome being studied, whereas colliders are factors that may be causally influenced by both the exposure and the studied outcome.
Although it is critical to make statistical adjustments for common causes to remove confounding, adjusting for common effects will introduce selection bias into the results. “The difficult part is that it is not always clear which is a confounder [and which is a collider] unless you set your timeline correctly,” she explained. “Also, you need to have expert knowledge to determine these factors. Not all statistical methods can tell you which is a confounder and which is a collider.”
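The selection bias that comes from adjusting for a collider can be seen in a brief simulation (again, an illustrative sketch rather than material from the talk). Two variables are generated with no causal connection at all; restricting the sample to one level of their common effect manufactures a spurious association between them.

```python
import random

random.seed(0)
n = 50_000

# x (exposure) and y (outcome) are generated independently: no causal effect
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]
# c is a common effect of both -- a collider
c = [xi + yi + random.gauss(0, 1) for xi, yi in zip(x, y)]

def corr(a, b):
    """Pearson correlation, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

marginal = corr(x, y)  # near zero, as designed
# "Adjusting" for the collider by restricting to the subgroup with c > 1
selected = [(xi, yi) for xi, yi, ci in zip(x, y, c) if ci > 1]
conditional = corr([s[0] for s in selected], [s[1] for s in selected])
# conditional is clearly negative: selection on a common effect created
# an association where none exists
```

Intuitively, among subjects selected for a high value of the collider, a low value of x must be "explained" by a high value of y, and vice versa, which is why the induced association is negative.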
Dr. Kim also warned against a common study design in which nonusers of a treatment are compared with prevalent users (e.g., current users or ever users). In other words, patients using a drug of interest are compared to those not taking any treatment at all. But in clinical practice, there may be important confounding reasons why a patient might not be prescribed a treatment, such as increased frailty or less severe symptoms.
“If you happen to have a similar drug to use as a reference to the drug of interest, that is, an active comparator design, the unmeasured confounder will be much less,” Dr. Kim explained.
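Why an active comparator helps can also be shown numerically. In the hypothetical simulation below (the variable "frailty," the probabilities and the effect sizes are all invented for illustration), an unmeasured factor makes patients both less likely to be treated and more likely to do poorly. Comparing users of drug A with nonusers then makes an inert drug look protective, while comparing drug A with a similar drug B does not.

```python
import random

random.seed(2)
n = 100_000

records = []
for _ in range(n):
    frail = random.random() < 0.3                        # unmeasured confounder
    treated = random.random() < (0.2 if frail else 0.6)  # frail patients treated less
    # Among the treated, the choice between drug A and drug B is unrelated to frailty
    drug = ("A" if random.random() < 0.5 else "B") if treated else None
    # Neither drug has any real effect; frailty alone worsens the outcome
    outcome = 2.0 * frail + random.gauss(0, 1)
    records.append((treated, drug, outcome))

def mean(v):
    return sum(v) / len(v)

a = [y for t, d, y in records if d == "A"]
b = [y for t, d, y in records if d == "B"]
nonusers = [y for t, d, y in records if not t]

naive = mean(a) - mean(nonusers)  # spuriously "protective" (clearly negative)
active = mean(a) - mean(b)        # near zero: the truth, under these assumptions
```

The active-comparator contrast is unbiased here only because frailty does not influence the choice between the two drugs, which is exactly the clinical similarity Dr. Kim's recommendation relies on.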