The DEcIDE Methods Center publishes a monthly literature scan of current articles of interest to the field of comparative effectiveness research.
You can find them all here.
CER Scan [Epub ahead of print]
- Stat Med. 2012 Feb 17. doi: 10.1002/sim.4510. [Epub ahead of print]
- Biometrics. 2012 Feb 2. doi: 10.1111/j.1541-0420.2011.01722.x. [Epub ahead of print]
Longitudinal structural mixed models for the analysis of surgical trials with noncompliance. Sitlani CM, Heagerty PJ, Blood EA, Tosteson TD. Department of Biostatistics, University of Washington, F-600 Health Sciences Building, Box 357232, Seattle, WA 98195, USA; Cardiovascular Health Research Unit, University of Washington, 1730 Minor Ave, Suite 1360, Box 358085, Seattle, WA. firstname.lastname@example.org.
Patient noncompliance complicates the analysis of many randomized trials seeking to evaluate the effect of surgical intervention as compared with a nonsurgical treatment. If selection for treatment depends on intermediate patient characteristics or outcomes, then ‘as-treated’ analyses may be biased for the estimation of causal effects. Therefore, the selection mechanism for treatment and/or compliance should be carefully considered when conducting analysis of surgical trials. We compare the performance of alternative methods when endogenous processes lead to patient crossover. We adopt an underlying longitudinal structural mixed model that is a natural example of a structural nested model. Likelihood-based methods are not typically used in this context; however, we show that standard linear mixed models will be valid under selection mechanisms that depend only on past covariate and outcome history. If there are underlying patient characteristics that influence selection, then likelihood methods can be extended via maximization of the joint likelihood of exposure and outcomes. Semi-parametric causal estimation methods such as marginal structural models, g-estimation, and instrumental variable approaches can also be valid, and we both review and evaluate their implementation in this setting. The assumptions required for valid estimation vary across approaches; thus, the choice of methods for analysis should be driven by which outcome and selection assumptions are plausible. Copyright © 2012 John Wiley & Sons, Ltd.
PMID: 22344923 [PubMed – as supplied by publisher]
Assessing Treatment-Selection Markers using a Potential Outcomes Framework.
Huang Y, Gilbert PB, Janes H. Fred Hutchinson Cancer Research Center, Seattle, Washington, 98109, U.S.A. Department of Biostatistics, University of Washington, Seattle, WA
Treatment-selection markers are biological molecules or patient characteristics associated with one’s response to treatment. They can be used to predict treatment effects for individual subjects and subsequently help deliver treatment to those most likely to benefit from it. Statistical tools are needed to evaluate a marker’s capacity to help with treatment selection. The commonly adopted criterion for a good treatment-selection marker has been the interaction between marker and treatment. While a strong interaction is important, it is, however, not sufficient for good marker performance. In this article, we develop novel measures for assessing a continuous treatment-selection marker, based on a potential outcomes framework. Under a set of assumptions, we derive the optimal decision rule based on the marker to classify individuals according to treatment benefit, and characterize the marker’s performance using the corresponding classification accuracy as well as the overall distribution of the classifier. We develop a constrained maximum-likelihood method for estimation and testing in a randomized trial setting. Simulation studies are conducted to demonstrate the performance of our methods. Finally, we illustrate the methods using an HIV vaccine trial where we explore the value of the level of preexisting immunity to adenovirus serotype 5 for predicting a vaccine-induced increase in the risk of HIV acquisition. © 2012, The International Biometric Society.
PMID: 22299708 [PubMed – as supplied by publisher]
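The optimal decision rule described in this abstract classifies each individual by predicted treatment benefit. A minimal sketch of that idea, using invented logistic risk models for a binary adverse outcome (the coefficients and functions below are hypothetical illustrations, not the paper's constrained maximum-likelihood estimates):

```python
import math

# Hypothetical logistic risk models for an adverse outcome, fitted
# separately under treatment and control (coefficients invented for
# illustration only).
def risk_treated(marker):
    return 1 / (1 + math.exp(-(-2.0 + 1.5 * marker)))

def risk_control(marker):
    return 1 / (1 + math.exp(-(-1.0 + 0.2 * marker)))

def recommend_treatment(marker):
    """Rule in the spirit of the paper: treat when predicted risk
    is lower under treatment than under control."""
    return risk_treated(marker) < risk_control(marker)

# Under these made-up models, low marker values favour treatment
# and high values favour control.
print(recommend_treatment(0.0))  # True
print(recommend_treatment(2.0))  # False
```

The marker's performance would then be summarized by how accurately this classifier separates those who benefit from those who do not.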
CER Scan [published within the last 30 days]
- Am J Epidemiol. 2012 Feb 1;175(3):210-7. Epub 2011 Dec 23.
- Stat Med. 2012 Feb 20;31(4):383-96. doi: 10.1002/sim.4453.
- Am J Epidemiol. 2012 Mar 1;175(5):368-75. Epub 2012 Feb 3.
- Med Care. 2012 Feb;50(2):109-16.
Dealing with missing outcome data in randomized trials and observational studies.
Groenwold RH, Donders AR, Roes KC, Harrell FE Jr, Moons KG. Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, the Netherlands. email@example.com
Although missing outcome data are an important problem in randomized trials and observational studies, methods to address this issue can be difficult to apply. Using simulated data, the authors compared 3 methods to handle missing outcome data: 1) complete case analysis; 2) single imputation; and 3) multiple imputation (all 3 with and without covariate adjustment). Simulated scenarios focused on continuous or dichotomous missing outcome data from randomized trials or observational studies. When outcomes were missing at random, single and multiple imputations yielded unbiased estimates after covariate adjustment. Estimates obtained by complete case analysis with covariate adjustment were unbiased as well, with coverage close to 95%. When outcome data were missing not at random, all methods gave biased estimates, but handling missing outcome data by means of 1 of the 3 methods reduced bias compared with a complete case analysis without covariate adjustment. Complete case analysis with covariate adjustment and multiple imputation yield similar estimates in the event of missing outcome data, as long as the same predictors of missingness are included. Hence, complete case analysis with covariate adjustment can and should be used as the analysis of choice more often. Multiple imputation, in addition, can accommodate the missing-not-at-random scenario more flexibly, making it especially suited for sensitivity analyses.
PMID: 22262640 [PubMed – in process]
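The finding that complete case analysis with covariate adjustment is unbiased when missingness depends only on observed covariates can be checked in a few lines. A minimal simulation sketch under assumed data-generating values (true treatment effect 1.0; all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Simulated trial-like data: outcome depends on covariate x and
# treatment z, with a true treatment effect of 1.0.
x = rng.normal(size=n)
z = rng.integers(0, 2, size=n)
y = 2.0 * x + 1.0 * z + rng.normal(size=n)

# Outcomes missing at random: missingness depends only on observed x.
p_miss = 1 / (1 + np.exp(-x))      # higher x -> more likely missing
observed = rng.random(n) > p_miss

# Complete case analysis WITH covariate adjustment: regress y on
# (intercept, x, z) among the observed rows only.
X = np.column_stack([np.ones(observed.sum()), x[observed], z[observed]])
beta = np.linalg.lstsq(X, y[observed], rcond=None)[0]
print(round(beta[2], 1))  # treatment effect estimate, close to 1.0
```

Dropping x from the regression, or making missingness depend on y itself (missing not at random), would reintroduce bias, matching the authors' conclusions.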
Hierarchical priors for bias parameters in Bayesian sensitivity analysis for unmeasured confounding. McCandless LC, Gustafson P, Levy AR, Richardson S.
Faculty of Health Sciences, Simon Fraser University, Burnaby, BC V5A 1S6, Canada. firstname.lastname@example.org
Recent years have witnessed new innovation in Bayesian techniques to adjust for unmeasured confounding. A challenge with existing methods is that the user is often required to elicit prior distributions for high-dimensional parameters that model competing bias scenarios. This can render the methods unwieldy. In this paper, we propose a novel methodology to adjust for unmeasured confounding that derives default priors for bias parameters for observational studies with binary covariates. The confounding effects of measured and unmeasured variables are treated as exchangeable within a Bayesian framework. We model the joint distribution of covariates by using a log-linear model with pairwise interaction terms. Hierarchical priors constrain the magnitude and direction of bias parameters. An appealing property of the method is that the conditional distribution of the unmeasured confounder follows a logistic model, giving a simple equivalence with previously proposed methods. We apply the method in a data example from pharmacoepidemiology and explore the impact of different priors for bias parameters on the analysis results. Copyright © 2011 John Wiley & Sons, Ltd.
PMID: 22253142 [PubMed – in process]
Bayesian posterior distributions without Markov chains. Cole SR, Chu H, Greenland S, Hamra G, Richardson DB.
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976-1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984-1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
PMCID: PMC3282880 [Available on 2013/3/1] PMID: 22306565 [PubMed – in process]
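The rejection sampling idea the authors advocate is simple enough to show in full. A toy sketch for the posterior of a binomial proportion under a Uniform(0, 1) prior (the counts are invented, not the paper's data):

```python
import random

random.seed(1)

# Rejection sampling for the posterior of a binomial proportion p
# with a Uniform(0, 1) prior; toy data: 7 successes in 20 trials.
successes, trials = 7, 20

def likelihood(p):
    return p ** successes * (1 - p) ** (trials - successes)

# Envelope: the likelihood is maximised at the MLE p_hat = 7/20.
p_hat = successes / trials
max_lik = likelihood(p_hat)

samples = []
while len(samples) < 5000:
    p = random.random()                       # draw from the prior
    if random.random() < likelihood(p) / max_lik:
        samples.append(p)                     # accept in proportion to likelihood

posterior_mean = sum(samples) / len(samples)
print(round(posterior_mean, 2))  # near the Beta(8, 14) mean, 8/22
```

Every accepted draw is an exact sample from the posterior, with no burn-in or convergence diagnostics, which is the transparency the authors emphasize; the cost is that the acceptance rate collapses as models grow, which is their caveat about broader applicability.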
A longitudinal examination of a pay-for-performance program for diabetes care: evidence from a natural experiment. Cheng SH, Lee TT, Chen CC. Institute of Health Policy and Management, College of Public Health, National Taiwan University, Taiwan. email@example.com
BACKGROUND: Numerous studies have examined the impacts of pay-for-performance programs, yet little is known about their long-term effects on health care expenses.
OBJECTIVES: This study aimed to examine the long-term effects of a pay-for-performance program for diabetes care on health care utilization and expenses.
METHODS: This study represents a nationwide population-based natural experiment with a 4-year follow-up period under a compulsory universal health insurance program in Taiwan. The intervention groups consisted of 20,934 patients enrolled in the program in 2005, and 9694 patients continuously participated in the program for 4 years. Two comparison groups were selected by propensity score matching from patients seen by the same group of physicians. Generalized estimating equations were used to estimate differences-in-differences models to examine the effects of the pay-for-performance program.
RESULTS: Patients enrolled in the pay-for-performance program underwent significantly more diabetes-specific examinations and tests after enrollment; the differences between the intervention and comparison groups declined gradually over time but remained significant. Patients in the intervention groups had a significantly higher number of diabetes-related physician visits in only the first year after enrollment and had fewer diabetes-related hospitalizations in the follow-up period. Concerning overall health care expenses, patients in the intervention groups spent more than the comparison group in the first year; however, the continual enrollees spent significantly less than their counterparts in the subsequent years.
CONCLUSIONS: The program seemed to achieve its primary goal in improving health care and providing long-term cost benefits.
PMID: 22249920 [PubMed – in process]
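The differences-in-differences contrast at the heart of this design reduces to simple arithmetic on group means. A sketch with made-up numbers (the study itself estimates these models with GEE on patient-level data):

```python
# Difference-in-differences on invented group means (illustration only).
# Outcome: mean diabetes-specific exams per patient per year.
pre_treated, post_treated = 4.0, 7.5    # program enrollees
pre_control, post_control = 4.1, 5.0    # propensity-matched comparison

# The change in the comparison group proxies what would have happened
# to enrollees without the program; the excess change is the effect.
did = (post_treated - pre_treated) - (post_control - pre_control)
print(round(did, 2))  # 2.6 extra exams attributable to the program
```

The matched comparison group drawn from the same physicians' patients is what makes the parallel-trends assumption behind this subtraction plausible.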
CER Scan [articles of interest published within the last 4 months]
- Value in Health [Available online 8 November 2011] DOI: 10.1016/j.jval.2011.08.1740
- Health Serv Outcomes Res Method. 2011; 11:95-114
Conducting Comparative Effectiveness Research on Medications: The Views of a Practicing Epidemiologist from the Other Washington. Bruce M. Psaty
Extending iterative matching methods: an approach to improving covariate balance that allows prioritisation. Ramsahai RR, Grieve R, Sekhon JS.
Comparative effectiveness studies can identify the causal effect of treatment if treatment is unconfounded with outcome conditional on a set of measured covariates. Matching aims to ensure that the covariate distributions are similar between treatment and control groups in the matched samples, and this should be done iteratively by checking and improving balance. However, an outstanding concern facing matching methods is how to prioritise competing improvements in balance across different covariates. We address this concern by developing a ‘loss function’ that an iterative matching method can minimise. Our ‘loss function’ is a transparent summary of covariate imbalance in a matched sample and follows general recommendations in prioritising balance amongst covariates. We illustrate this approach by extending Genetic Matching (GM), an automated approach to balance checking. We use the method to reanalyse a high profile comparative effectiveness study of right heart catheterisation. We find that our loss function improves covariate balance compared to a standard GM approach, and to matching on the published propensity score.
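A loss function of the kind the authors describe can be sketched concretely. The summary below (absolute standardized mean differences, sorted worst-first so that minimization fixes the most imbalanced covariate before the rest) is one plausible reading of "prioritising balance amongst covariates," not the paper's exact definition:

```python
import statistics

def smd(treated, control):
    """Absolute standardized mean difference for one covariate."""
    pooled_sd = statistics.pstdev(treated + control)
    return abs(statistics.mean(treated) - statistics.mean(control)) / pooled_sd

def balance_loss(treated_cov, control_cov):
    """treated_cov / control_cov map covariate name -> list of values.
    Returns SMDs sorted largest-first, for lexicographic comparison:
    of two candidate matched samples, the one with the smaller sorted
    vector is the better balanced."""
    return sorted((smd(treated_cov[k], control_cov[k]) for k in treated_cov),
                  reverse=True)

# Tiny invented matched sample with two covariates.
t = {"age": [60, 65, 70], "bp": [120, 130, 125]}
c = {"age": [61, 64, 71], "bp": [118, 135, 128]}
loss = balance_loss(t, c)
print(loss[0] >= loss[1])  # worst imbalance listed first: True
```

An iterative matcher such as Genetic Matching could then search over candidate matched samples, keeping whichever yields the lexicographically smaller loss vector.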