CER Methods Webinars


The following webinars provide educational sessions on methodological issues pertinent to ARRA-funded CER projects, as well as new methodological developments of general interest to scientists working in CER. These one-hour monthly meetings consist of a presentation followed by discussion and Q&A.

The links below lead to recordings of these CER Methods Webinars.


Sensitivity Analyses in CER
Sebastian Schneeweiss, MD, ScD
Brigham & Women’s Hospital DECIDE Methods Center
Presentation Date: December 13, 2012

Sensitivity analyses are critical tools in comparative effectiveness research for examining the assumptions behind a study and the robustness of its findings in support of causal inference. This presentation demonstrates the principles of sensitivity analysis for residual confounding and illustrates how a simple spreadsheet can be used to conduct your own analysis. The publicly available software referenced in the presentation can be found at www.drugepi.org.

You may learn more about this topic from the following article:
Schneeweiss S. Sensitivity analysis and external adjustment for unmeasured confounders in epidemiologic database studies of therapeutics. Pharmacoepidemiology & Drug Safety 2006 May. 15(5):291-303.
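As a rough illustration of the kind of calculation such a spreadsheet performs, the external-adjustment formula described in the Schneeweiss (2006) article above can be sketched in a few lines. The function name and the example numbers here are purely illustrative:

```python
def externally_adjusted_rr(rr_observed, p_c1, p_c0, rr_cd):
    """Adjust an observed relative risk for a single binary unmeasured
    confounder (external adjustment in the spirit of Schneeweiss 2006).

    rr_observed -- confounded relative risk from the main study
    p_c1, p_c0  -- prevalence of the confounder among exposed / unexposed
    rr_cd       -- relative risk linking the confounder to the outcome
    """
    # Bias factor induced by the unmeasured confounder
    bias = (p_c1 * (rr_cd - 1) + 1) / (p_c0 * (rr_cd - 1) + 1)
    return rr_observed / bias

# Illustrative numbers: an observed RR of 1.5 when the confounder is twice
# as prevalent among the exposed (40% vs 20%) and doubles outcome risk.
adjusted = externally_adjusted_rr(1.5, p_c1=0.4, p_c0=0.2, rr_cd=2.0)
print(round(adjusted, 3))
```

Varying the three confounder parameters over plausible ranges shows how strong an unmeasured confounder would have to be to explain away an observed association.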


Challenges in Predictive Modeling
Eric Johnson, PhD, MPH
Kaiser Permanente Northwest
Presentation Date: November 15, 2012

Most prognostic risk scores are designed to predict clinical events by synthesizing patient characteristics that physicians enter manually into a risk calculator. During the seminar, we’ll discuss how the design differs for risk scores that collect characteristics automatically from electronic health records such as Kaiser Permanente’s. We’ll also discuss how the design differs for pragmatic risk scores, which support physicians’ decision-making in usual care settings, compared with explanatory risk scores. Attention to these differences in design improves the translation of risk scores into routine practice. We’ll discuss those design differences using risk scores developed for Kaiser Permanente.

You may learn more about this topic from the following article:
Royston P, Moons KG, Altman DG, Vergouwe Y. Prognosis and prognostic research: Developing a prognostic model. BMJ. 2009 Mar 31;338:b604. doi: 10.1136/bmj.b604. PMID: 19336487.


Targeted Maximum Likelihood
Mark van der Laan, PhD
UC Berkeley
Presentation Date: October 18, 2012

Targeted maximum likelihood estimation (TMLE) has several attractive properties for CER. Dr. van der Laan conceived the approach and has published extensively in the area. In this presentation, he provides an overview of TMLE, explains how it differs from traditional approaches, and describes what it can add to CER analyses.

You may learn more about this topic from the following articles:
1. Book: van der Laan MJ, Rose S. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer, New York, 2012.
2. van der Laan MJ, Rubin D. Targeted maximum likelihood learning. The International Journal of Biostatistics 2006;2(1):Article 11. http://www.bepress.com/ijb/vol2/iss1/11


Healthcare Delivery Systems Evaluations
Stephen Soumerai, ScD
Harvard Medical School/Harvard Pilgrim Health Care
Presentation Date: May 17, 2012

The need for more comparative effectiveness research on health care delivery systems was highlighted in the 2009 “Top 100 List of Comparative Effectiveness Research Questions” issued by the IOM Committee. Nevertheless, many methodological challenges remain in producing estimates of the effectiveness of health care delivery systems that are both generalizable and valid. This presentation outlines some of these challenges.


Selecting Between Propensity Scores & Disease Risk Scores
Patrick Arbogast, PhD
Kaiser Permanente Northwest
Presentation Date: April 19, 2012

Propensity scores and disease risk scores are summary-variable techniques that offer advantages in certain circumstances CER researchers may encounter. This presentation provides insight into the circumstances that favor each technique and offers guidance on the relative advantages of the two approaches.

You may learn more about this topic from the following article:
Arbogast PG, Ray WA. Performance of disease risk scores, propensity scores, and traditional multivariable regression in the presence of multiple confounders. American Journal of Epidemiology 2011; 174(5): 621-629.
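To make the contrast concrete: a disease risk score collapses covariates into a single predicted outcome probability, just as a propensity score collapses them into a predicted treatment probability. A minimal sketch follows, with made-up coefficients standing in for a logistic outcome model that would in practice be fit to data (for example, among the unexposed):

```python
import math

def disease_risk_score(covariates, coefficients, intercept):
    """Predicted outcome probability from a logistic outcome model.
    The coefficient values used below are purely illustrative."""
    linpred = intercept + sum(coefficients[k] * covariates[k] for k in coefficients)
    return 1.0 / (1.0 + math.exp(-linpred))

# Hypothetical coefficients for three risk factors.
coefs = {"age_decade": 0.30, "diabetes": 0.80, "prior_mi": 1.10}

patients = [
    {"age_decade": 6, "diabetes": 1, "prior_mi": 0},
    {"age_decade": 7, "diabetes": 0, "prior_mi": 1},
]
scores = [disease_risk_score(p, coefs, intercept=-5.0) for p in patients]
# Subjects can now be matched or stratified on this single summary value,
# exactly as one would with a propensity score.
```

The practical difference is what the score summarizes: outcome risk (useful when exposure is rare or newly introduced) versus treatment probability (useful when the outcome is rare).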


Adjusting for Non-adherence
S. Darren Toh, ScD
Harvard Medical School
Presentation Date: March 22, 2012

Incomplete adherence to health interventions is frequently observed, and this lack of adherence represents a challenge to CER researchers seeking to quantify the effect of an intervention. A range of analytic and study design options are available to researchers, but there is no clearly superior approach and each of the available options involves tradeoffs whose effect on the research may not be immediately apparent.

You may learn more about this topic from the following article:
Toh S, Hernán MA. Causal inference from longitudinal studies with baseline randomization. Int J Biostat. 2008 Oct 19;4(1):Article22.


Propensity Score Trimming
Til Stürmer, PhD
University of North Carolina, Chapel Hill
Presentation Date: February 16, 2012

Unmeasured frailty may lead to non-uniform treatment effects across the propensity score (PS) distribution. A unique advantage of trimming patients treated contrary to prediction is that it can reduce unmeasured confounding by frailty. In this presentation, Dr. Stürmer compares bias and mean squared error for various PS implementations under PS trimming, which excludes persons treated contrary to prediction.

You may learn more about this topic from the following article:
Stürmer T, Rothman KJ, Avorn J, Glynn RJ. Treatment effects in the presence of unmeasured confounding: dealing with observations in the tails of the propensity score distribution–a simulation study. Am J Epidemiol. 2010 Oct 1;172(7):843-54. Epub 2010 Aug 17. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3025652/pdf/kwq198.pdf
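A minimal sketch of asymmetric trimming in this spirit: drop treated patients whose PS falls below a low percentile of the PS distribution among the treated, and untreated patients above a high percentile of the distribution among the untreated. The cutoff percentiles and record layout here are illustrative, not those used in the simulation study:

```python
def trim_by_propensity(records, lower_pct=5, upper_pct=95):
    """Asymmetric propensity score trimming: exclude subjects treated
    contrary to prediction. records: list of (ps, treated) tuples."""

    def percentile(values, pct):
        # Linear-interpolation percentile, to keep the sketch dependency-free.
        values = sorted(values)
        idx = (len(values) - 1) * pct / 100.0
        lo, hi = int(idx), min(int(idx) + 1, len(values) - 1)
        frac = idx - int(idx)
        return values[lo] * (1 - frac) + values[hi] * frac

    treated_ps = [ps for ps, t in records if t]
    untreated_ps = [ps for ps, t in records if not t]
    low_cut = percentile(treated_ps, lower_pct)     # treated with very low PS
    high_cut = percentile(untreated_ps, upper_pct)  # untreated with very high PS
    return [(ps, t) for ps, t in records
            if (t and ps >= low_cut) or (not t and ps <= high_cut)]
```

After trimming, the PS model is typically re-estimated in the restricted population before effect estimation.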


Squeezing the Balloon: Propensity Scores & Unmeasured Covariate Balancing
John Brooks, PhD
University of Iowa
Presentation Date: January 19, 2012

It has been said that propensity score algorithms “can be used to design observational studies in a way analogous to the way randomized experiments are designed” by assembling “groups of treated and control units such that within each group the distributions of covariates is balanced”. Using a simple simulation model of treatment choice, we find that propensity score algorithms require variation in the portion of the unmeasured covariates that is independent of the measured covariates, and that these algorithms can exacerbate imbalance in this variation relative to the full, unweighted sample. These results are problematic for researchers hoping to make treatment effect inferences while relying on the expectation that balancing measured covariates implies improved balance in unmeasured covariates.


Pragmatic Clinical Trials
Sean Tunis, MD, MSc, & Penny Mohr, MA
Center for Medical Technology Policy
Presentation Date: December 15, 2011

Baseline randomization is often considered essential for valid assessment of the intended effects of health care interventions and products. Traditional RCTs typically apply stringent inclusion and exclusion criteria in order to maximize the effect size of the intervention, which limits generalizability and the ability to inform treatment choices in clinical practice.

Pragmatic clinical trials (PCTs) are intended to improve the generalizability of research by relaxing study inclusion criteria and other aspects of protocol-driven care while maintaining the randomization necessary to minimize selection bias and other sources of confounding. This presentation demonstrates the gap between the available evidence and the evidence required to inform clinical practice decisions, and highlights the need for more effectiveness research, such as PCTs directed toward the needs of decision makers, to bridge that gap.


Linking and Using Claims Data & Registry Information in CER
Soko Setoguchi-Iwata, MD, DrPH
Duke Clinical Research Institute
Presentation Date: November 17, 2011

Different data sources capture different aspects of an individual's health history. Combining data from multiple sources enriches the data available and strengthens the inferences that can be drawn in CER. However, linking data elements across sources is difficult without unique person identifiers, precisely the types of identifiers whose use can lead to loss of privacy. A number of approaches are available to conduct the linkage, each with its own strengths and weaknesses.

You may learn more about this topic from the following articles:
1. Bohensky MA, Jolley D, Sundararajan V, Evans S, Pilcher DV, Scott I, Brand CA. Data Linkage: A powerful research tool with potential problems. BMC Health Services Research 2010, 10:346. http://www.biomedcentral.com/1472-6963/10/346
2. Hammill BG, Hernandez AF, Peterson ED, Fonarow GC, Schulman KA, Curtis LH. Linking inpatient clinical registry data to Medicare claims data using indirect identifiers. Am Heart J 2009;157:995-1000


Self-controlled/Case-Only Design
Malcolm Maclure, ScD
University of Victoria
Presentation Date: October 20, 2011

Self-controlled (case-only) designs are a strategy for controlling unmeasured confounding and selection bias. In pharmacoepidemiology, illness often influences future use of medications, making a bidirectional design problematic. This presentation outlines the use of self-controlled design in CER and demonstrates how the approach can adjust for exposure trends observed across time axes.

You may learn more about this topic from the following article:
Wang S, Linkletter C, Maclure M, Dore D, Mor V, Buka S, Wellenius GA. Future cases as present controls to adjust for exposure trend bias in case-only studies. Epidemiology. 2011 Jul;22(4):568-74. doi: 10.1097/EDE.0b013e31821d09cd. PubMed PMID: 21577117; PubMed Central PMCID: PMC3110688. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3110688/
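The core computation in the simplest self-controlled (case-crossover) analysis can be sketched as follows: each case serves as its own control, and only persons whose exposure differs between the hazard window and the control window are informative. The window definitions and data layout here are illustrative:

```python
def case_crossover_or(cases):
    """Case-crossover odds ratio from within-person exposure contrasts.
    cases: list of (exposed_in_hazard_window, exposed_in_control_window)
    booleans, one pair per case. Concordant cases drop out of the estimate.
    """
    exposed_hazard_only = sum(1 for h, c in cases if h and not c)
    exposed_control_only = sum(1 for h, c in cases if c and not h)
    if exposed_control_only == 0:
        raise ValueError("no cases exposed only in the control window")
    # Ratio of discordant counts, as in a matched-pair analysis
    return exposed_hazard_only / exposed_control_only
```

Because each person is compared with themselves, stable characteristics (genetics, chronic disease severity, health behaviors) cannot confound the estimate; time-varying exposure trends remain a threat, which is what the article above addresses.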


Missing Data: Definitions, Methods, and Caveats
Jason Roy, PhD
University of Pennsylvania
Presentation Date: June 23, 2011

Missing data are a common occurrence in large datasets. Data are commonly assumed to be missing completely at random (MCAR), missing at random (MAR), or missing not at random (MNAR). The missing values may be outcomes or covariates, and either can bias results. This presentation outlines how to deal with missing data in CER and provides examples and guidance for analyses in SAS or R.
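To illustrate why the distinction matters, a toy simulation (all variable names and parameters invented for illustration) shows MAR missingness in action: the outcome is missing more often at high values of a fully observed covariate, so the complete-case mean is biased, while averaging within levels of the covariate and re-weighting recovers the truth:

```python
import random

def simulate_mar_bias(n=100_000, seed=7):
    """Toy MAR example: outcome y depends on observed binary covariate x,
    and y is missing more often when x = 1. True mean of y is 1.5."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.random() < 0.5          # binary covariate, P(x=1) = 0.5
        y = (2.0 if x else 1.0) + rng.gauss(0, 1)
        p_missing = 0.6 if x else 0.1   # missingness depends only on observed x
        data.append((x, None if rng.random() < p_missing else y))

    observed = [(x, y) for x, y in data if y is not None]
    complete_case_mean = sum(y for _, y in observed) / len(observed)

    # MAR-valid estimate: average y within each level of x, then combine
    # using the full-sample distribution of x.
    strata = {}
    for x, y in observed:
        strata.setdefault(x, []).append(y)
    p_x1 = sum(1 for x, _ in data if x) / n
    adjusted_mean = ((1 - p_x1) * sum(strata[False]) / len(strata[False])
                     + p_x1 * sum(strata[True]) / len(strata[True]))
    return complete_case_mean, adjusted_mean
```

Under MNAR, where missingness depends on the unobserved value itself, no adjustment based on observed data alone can remove the bias; sensitivity analyses are then required.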


Propensity Score Analyses
John Seeger, PharmD, DrPH
Brigham & Women’s Hospital
Presentation Date: May 19, 2011

Propensity scoring is a multivariable method, typically logistic regression, that collapses predictors of treatment into a single value: the probability that a subject with given characteristics will receive therapy. It can be used to mitigate confounding resulting from patient characteristics that influence the selection of one therapy over another. It is particularly advantageous when the number of people with the outcome is small relative to the number of exposed persons and the number of potential confounders is large. This presentation outlines some of its uses in CER.
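A bare-bones sketch of the estimation step follows. A hand-rolled gradient-descent logistic regression stands in for the standard statistical software one would actually use, and the data and settings are illustrative:

```python
import math

def fit_propensity_model(X, treated, lr=1.0, n_iter=5000):
    """Fit P(treatment | covariates) by logistic regression via gradient
    descent and return a propensity score for each subject.
    X: list of covariate lists; treated: list of 0/1 treatment flags."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)                  # intercept followed by coefficients
    rows = [[1.0] + list(x) for x in X]  # prepend intercept term
    for _ in range(n_iter):
        grad = [0.0] * (p + 1)
        for row, t in zip(rows, treated):
            pred = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, row))))
            for j in range(p + 1):
                grad[j] += (pred - t) * row[j]   # logistic-loss gradient
        w = [wi - lr * gj / n for wi, gj in zip(w, grad)]
    return [1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, row))))
            for row in rows]

# Tiny illustrative cohort: one binary covariate that predicts treatment.
X = [[0.0]] * 4 + [[1.0]] * 4
treated = [0, 0, 0, 1, 0, 1, 1, 1]
ps = fit_propensity_model(X, treated)
```

The resulting scores can then be used for matching, stratification, or weighting; with one binary covariate the fitted scores converge to the observed treatment proportions within each covariate level.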


Marginal Structural Models
Miguel Hernan, MD, DrPH
Harvard School of Public Health
Presentation Date: April 14, 2011

Many interesting questions in CER involve time-varying treatments. Standard methods for estimating the causal effects of time-varying treatments on mortality are biased when a time-dependent risk factor for mortality also predicts subsequent treatment, or when past treatment history predicts the subsequent level of the risk factor. Inverse probability (IP) weighting is a feasible alternative to conventional confounding adjustment that makes it possible to adjust appropriately for time-varying confounders and to present appropriately adjusted absolute risks and survival curves, not only hazard ratios.

You may learn more about IP weighting and marginal structural models from the following articles:
1. Hernán and Robins. Causal Inference. Chapter 2. http://www.tc.umn.edu/~alonso/hernanrobins_v1.10.11.pdf
2. Hernán MA, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology. 2000 Sep;11(5):561-70. PubMed PMID: 10955409.
3. Robins JM, Hernán MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000 Sep;11(5):550-60. PubMed PMID: 10955408.
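For a single time point, the weight computation at the heart of the method can be sketched as follows, with one binary confounder L and one binary treatment A (the data layout is illustrative). With time-varying treatments, as in marginal structural models, the corresponding weight is a product of such factors over time points:

```python
def stabilized_ip_weights(records):
    """Stabilized inverse-probability-of-treatment weights,
    sw = P(A=a) / P(A=a | L=l), for records given as (l, a) pairs
    with l, a in {0, 1}."""
    n = len(records)
    p_a1 = sum(a for _, a in records) / n                # marginal P(A=1)
    p_a1_given_l = {}
    for l_val in (0, 1):
        subset = [a for l, a in records if l == l_val]
        p_a1_given_l[l_val] = sum(subset) / len(subset)  # P(A=1 | L=l)
    weights = []
    for l, a in records:
        num = p_a1 if a == 1 else 1 - p_a1
        den = p_a1_given_l[l] if a == 1 else 1 - p_a1_given_l[l]
        weights.append(num / den)
    return weights
```

Weighting each subject this way creates a pseudo-population in which L no longer predicts treatment, so an unadjusted comparison in the weighted data estimates the marginal treatment effect; stabilization keeps the weights centered near one.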


Adjustment of main study results with supplementary data gathering
Sebastian Schneeweiss, MD, ScD
Brigham & Women’s Hospital
Presentation Date:  March 17, 2011

Large health care utilization databases are frequently used to analyze unintended effects of prescription drugs and biologics. Confounders that require detailed information on clinical parameters, lifestyle, or over-the-counter medications are often not measured in such datasets, causing residual confounding bias. This presentation describes how estimates of drug effects can be adjusted for confounders that are not available in the main study but can be measured in a validation study.

You may learn more about this topic from the following articles:
1. Schneeweiss S. Sensitivity analysis and external adjustment for unmeasured confounders in epidemiologic database studies of therapeutics. Pharmacoepidemiol Drug Safety, 2006;15:291-303.
2. Sturmer T, Glynn RJ, Avorn J, Rothman K, Schneeweiss S. Adjustments for unmeasured confounders in pharmacoepidemiology database studies using external information. Medical Care 2007;45:S158-65.


Immortal and immeasurable person-time
Sebastian Schneeweiss, MD, ScD
Brigham & Women’s Hospital
Presentation Date: November 18, 2010

Immortal time is person-time that is, by definition, event-free: during this span patients cannot experience the outcome (“patients are not at risk for death”), yet it is falsely included in the denominator of the exposed group. The bias usually arises when cohort entry and exposure status are defined by looking into the future; to minimize it, cohort entry and exposure status must always be defined before follow-up time starts. This presentation describes how best to account for immortal time bias in CER.

You may learn more about this topic from the following articles:
1. Suissa S. Immortal time bias in pharmaco-epidemiology. Am J Epidemiol. 2008 Feb 15;167(4):492-9. Epub 2007 Dec 3. Review. PubMed PMID: 18056625. http://aje.oxfordjournals.org/content/167/4/492.long
2. Suissa S. Immortal time bias in observational studies of drug effects. Pharmacoepidemiol Drug Saf. 2007 Mar;16(3):241-9. PubMed PMID: 17252614.
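A small numeric sketch of the bias (data layout and numbers invented for illustration): suppose subjects enter a cohort at t=0 and become "exposed" only when they later fill a prescription at time t_rx. Counting the pre-prescription span as exposed person-time dilutes the exposed event rate, because no one can die in that span and remain eligible for the exposed group:

```python
def exposed_event_rates(subjects):
    """Return (naive_rate, correct_rate) among ever-exposed subjects.
    subjects: list of (t_rx, follow_up, died) tuples, t_rx=None if never
    exposed. The naive analysis counts all follow-up of ever-exposed
    subjects as exposed time, including the immortal span [0, t_rx);
    the correct analysis starts exposed time at t_rx."""
    naive_time = correct_time = events = 0.0
    for t_rx, follow_up, died in subjects:
        if t_rx is None:
            continue                      # never exposed; not shown here
        naive_time += follow_up           # immortal time wrongly included
        correct_time += follow_up - t_rx  # exposed time starts at t_rx
        events += died
    return events / naive_time, events / correct_time
```

The naive rate is always lower than the correct one, which makes the exposure look spuriously protective; a full analysis would also credit the span [0, t_rx) to the unexposed group.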


Matrix Design: Multiple exposures and multiple outcomes
Jeremy Rassen, ScD
Brigham & Women’s Hospital
Presentation Date: October 10, 2010
Slides: http://www.drugepi.org/wp-content/uploads/2013/02/10-12-2010_Methods-Matrix-Design_Rassen_Webinar1.pdf

The matrix design is a relevant tool for studies of comparative safety and effectiveness when a study includes multiple exposures and multiple outcomes. Propensity scores are useful in some of these situations, but there are alternatives to matching. This presentation describes the use of the matrix design in CER and how it can be used effectively in conjunction with other methods, such as the high-dimensional propensity score (hd-PS).