The DEcIDE Methods Center publishes a monthly literature scan of current articles of interest to the field of comparative effectiveness research.

You can find them all here.

November 2011

CER Scan [Epub ahead of print]

    1. Clin Pharmacol Ther. 2011 Nov 2. doi: 10.1038/clpt.2011.235. [Epub ahead of print]

    Assessing the Comparative Effectiveness of Newly Marketed Medications: Methodological Challenges and Implications for Drug Development. Schneeweiss S, Gagne JJ, Glynn RJ, Ruhl M, Rassen JA. Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA

    Comparative-effectiveness research (CER) aims to produce actionable evidence regarding the effectiveness and safety of medical products and interventions as they are used outside of controlled research settings. Although CER evidence regarding medications is particularly needed shortly after market approval, key methodological challenges include (i) potential bias due to channeling of patients to the newly marketed medication because of various patient-, physician-, and system-related factors; (ii) rapid changes in the characteristics of the user population during the early phase of marketing; and (iii) lack of timely data and the often small number of users in the first few months of marketing. We propose a mix of approaches to generate comparative-effectiveness data in the early marketing period, including sequential cohort monitoring with secondary health-care data and propensity score (PS) balancing, as well as extended follow-up of phase III and phase IV trials, indirect comparisons of placebo-controlled trials, and modeling and simulation of virtual trials.
    PMID: 22048230 [PubMed – as supplied by publisher]
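
    As a minimal sketch (not the authors' implementation) of the sequential, propensity-score-balanced cohort monitoring described above, the Python fragment below re-estimates the propensity score within each accrual period and computes an inverse-probability-of-treatment-weighted risk difference; all column names (period, new_drug, outcome, and the covariates) are hypothetical placeholders.

```python
# Sketch only: sequential, PS-balanced monitoring of a newly marketed drug
# versus an active comparator. Column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def weighted_risk_difference(block: pd.DataFrame, covariates: list[str]) -> float:
    """IPT-weighted risk difference, new drug (1) vs. comparator (0)."""
    X = sm.add_constant(block[covariates])
    ps = sm.Logit(block["new_drug"], X).fit(disp=0).predict(X)
    p_treat = block["new_drug"].mean()
    # Stabilized inverse-probability-of-treatment weights
    w = np.where(block["new_drug"] == 1, p_treat / ps, (1 - p_treat) / (1 - ps))
    treated = block["new_drug"] == 1
    return (np.average(block.loc[treated, "outcome"], weights=w[treated])
            - np.average(block.loc[~treated, "outcome"], weights=w[~treated]))

def sequential_monitor(cohort: pd.DataFrame, covariates: list[str]) -> dict:
    """Re-fit the propensity score in every accrual period, because the user
    population of a newly marketed drug changes rapidly after launch."""
    return {period: weighted_risk_difference(block, covariates)
            for period, block in cohort.groupby("period")}
```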

    2. Ann Epidemiol. 2011 Oct 28. [Epub ahead of print]

    Antidepressant Use and Cognitive Deficits in Older Men: Addressing Confounding by Indications with Different Methods. Han L, Kim N, Brandt C, Allore HG. Yale University Internal Medicine Program on Aging, New Haven, CT.

    PURPOSE: Antidepressant use has been associated with cognitive impairment in older persons. We sought to examine whether this association might reflect an indication bias.

    METHODS: A total of 544 community-dwelling hypertensive men aged ≥65 years completed the Hopkins Verbal Learning Test (HVLT) at baseline and 1 year. Antidepressant medications were ascertained from medical records. Potential confounding by indications was examined by adjusting for depression-related diagnoses and severity of depression symptoms using multiple linear regression, a propensity score, and a structural equation model (SEM).

    RESULTS: Before adjusting for the indications, a one-unit cumulative exposure to antidepressants was associated with -1.00 (95% confidence interval [CI], -1.94, -0.06) point lower HVLT score. After adjusting for the indications using multiple linear regression or a propensity score, the association diminished to -0.48 (95% CI, -0.62, 1.58) and -0.58 (95% CI, -0.60, 1.58), respectively. The most clinically interpretable empirical SEM with adequate fit involves both direct and indirect paths of the two indications. Depression-related diagnoses and depression symptoms significantly predict antidepressant use (p < .05). Their total standardized path coefficients on Hopkins Verbal Learning Test score were twice as large (0.073) or as large (0.034) as that of antidepressant use (0.035).

    CONCLUSION: The apparent association between antidepressant use and memory deficit in older persons may be confounded by indications. SEM offers a heuristic empirical method for examining confounding by indications but not quantitatively superior bias reduction compared with conventional methods.
    PMID: 22037381 [PubMed – as supplied by publisher]
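
    To make the confounding-by-indication mechanism concrete, here is a small simulated illustration (not the study's data or models): the indication drives both treatment and the cognitive score, so a crude regression shows an apparent deficit that shrinks once the indication is adjusted for. All variable names are invented for the example.

```python
# Toy simulation of confounding by indication: the indication (depression
# severity) drives both antidepressant exposure and the cognitive score,
# while the true effect of exposure is null.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
depression = rng.normal(size=n)                      # indication severity
exposure = 0.8 * depression + rng.normal(size=n)     # use driven by indication
score = -0.5 * depression + rng.normal(size=n)       # cognition harmed by indication only
df = pd.DataFrame({"score": score, "exposure": exposure, "depression": depression})

crude = smf.ols("score ~ exposure", data=df).fit()
adjusted = smf.ols("score ~ exposure + depression", data=df).fit()
print(f"crude: {crude.params['exposure']:.2f}, adjusted: {adjusted.params['exposure']:.2f}")
# The crude coefficient suggests a deficit; adjusting for the indication
# moves the estimate toward the simulated null.
```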

    3. Stat Methods Med Res. 2011 Oct 19. [Epub ahead of print]

    Observational data for comparative effectiveness research: An emulation of randomised trials of statins and primary prevention of coronary heart disease. Danaei G, García Rodríguez LA, Cantero OF, Logan R, Hernán MA. Department of Epidemiology, Harvard School of Public Health, Boston, MA.

    This article reviews methods for comparative effectiveness research using observational data. The basic idea is to use an observational study to emulate a hypothetical randomised trial by comparing initiators versus non-initiators of treatment. After adjustment for measured baseline confounders, one can then conduct the observational analogue of an intention-to-treat analysis. We also explain two approaches to conduct the analogues of per-protocol and as-treated analyses after further adjusting for measured time-varying confounding and selection bias using inverse-probability weighting. As an example, we implemented these methods to estimate the effect of statins for primary prevention of coronary heart disease (CHD) using data from electronic medical records in the UK. Despite strong confounding by indication, our approach detected a potential benefit of statin therapy. The analogue of the intention-to-treat hazard ratio (HR) of CHD was 0.89 (0.73, 1.09) for statin initiators versus non-initiators. The HR of CHD was 0.84 (0.54, 1.30) in the per-protocol analysis and 0.79 (0.41, 1.41) in the as-treated analysis for 2 years of use versus no use. In contrast, a conventional comparison of current users versus never users of statin therapy resulted in an HR of 1.31 (1.04, 1.66). We provide a flexible and annotated SAS program to implement the proposed analyses.
    PMID: 22016461 [PubMed – as supplied by publisher]
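
    The authors supply an annotated SAS program; purely as an unofficial illustration of the intention-to-treat analogue they describe, the sketch below fits a Cox model comparing initiators with non-initiators after adjustment for baseline confounders. Column names are hypothetical, and the per-protocol and as-treated analogues would additionally censor follow-up at protocol deviation and apply inverse-probability weights.

```python
# Unofficial sketch (not the authors' SAS program): observational analogue of
# an intention-to-treat comparison of statin initiators vs. non-initiators,
# adjusting for baseline confounders. Column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

def itt_analogue_hr(cohort: pd.DataFrame) -> float:
    """cohort: one row per eligible person at baseline, with time to CHD or
    censoring, an event indicator, the initiator flag, and baseline covariates."""
    cph = CoxPHFitter()
    cph.fit(cohort[["time", "chd_event", "initiator", "age", "sex", "ldl", "smoking"]],
            duration_col="time", event_col="chd_event")
    return float(cph.hazard_ratios_["initiator"])
```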

    4. Clin Trials. 2011 Oct 12. [Epub ahead of print]

    Challenges in the design and implementation of the Multicenter Uveitis Steroid Treatment (MUST) Trial – lessons for comparative effectiveness trials. Holbrook JT, Kempen JH, Prusakowski NA, Altaweel MM, Jabs DA. Center for Clinical Trials, Department of Epidemiology, Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD, USA.

    BACKGROUND: Randomized clinical trials (RCTs) are an important component of comparative effectiveness (CE) research because they are the optimal design for head-to-head comparisons of different treatment options.

    PURPOSE: To describe decisions made in the design of the Multicenter Uveitis Steroid Treatment (MUST) Trial to ensure that the results would be widely generalizable.

    METHODS: Review of design and implementation decisions and their rationale for the trial.

    RESULTS: The MUST Trial is a multicenter randomized controlled CE trial evaluating a novel local therapy (intraocular fluocinolone acetonide implant) versus the systemic therapy standard of care for noninfectious uveitis. Decisions made in protocol design to broaden enrollment included allowing patients with very poor vision and media opacity to enroll and including clinical sites outside the United States. The treatment protocol was designed to follow standard care. The primary outcome, visual acuity, is important to patients and can be evaluated in all eyes with uveitis. Other outcomes include patient-reported visual function, quality of life, and disease- and treatment-related complications.

    LIMITATIONS: The trial population is too small for subgroup analyses that are of interest and the trial is being conducted at tertiary medical centers.

    CONCLUSION: CE trials require greater emphasis on generalizability than many RCTs but otherwise face similar challenges for design choices as any RCT. The increase in heterogeneity in patients and treatment required to ensure generalizability can be balanced with a rigorous approach to implementation, outcome assessment, and statistical design. This approach requires significant resources that may limit implementation in many RCTs, especially in clinical practice settings.
    PMID: 21994128 [PubMed – as supplied by publisher]

    5. Stat Methods Med Res. 2011 Oct 3. [Epub ahead of print]

    Assessing the sensitivity of methods for estimating principal causal effects. Stuart EA, Jo B. Departments of Mental Health and Biostatistics, Johns Hopkins Bloomberg School of Public Health, 624 N Broadway, 8th Floor, Baltimore, MD, USA.

    The framework of principal stratification provides a way to think about treatment effects conditional on post-randomization variables, such as level of compliance. In particular, the complier average causal effect (CACE) – the effect of the treatment for those individuals who would comply with their treatment assignment under either treatment condition – is often of substantive interest. However, estimation of the CACE is not always straightforward, with a variety of estimation procedures and underlying assumptions, but little advice to help researchers select between methods. In this article, we discuss and examine two methods that rely on very different assumptions to estimate the CACE: a maximum likelihood (‘joint’) method that assumes the ‘exclusion restriction’ (ER), and a propensity score-based method that relies on ‘principal ignorability.’ We detail the assumptions underlying each approach, and assess each method’s sensitivity to both its own assumptions and those of the other method using both simulated data and a motivating example. We find that the ER-based joint approach appears somewhat less sensitive to its assumptions, and that the performance of both methods is significantly improved when there are strong predictors of compliance. Interestingly, we also find that each method performs particularly well when the assumptions of the other approach are violated. These results highlight the importance of carefully selecting an estimation procedure whose assumptions are likely to be satisfied in practice and of having strong predictors of principal stratum membership.
    PMID: 21971481 [PubMed – as supplied by publisher]
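
    For readers unfamiliar with the estimand, the snippet below illustrates the simplest moment-based (instrumental-variable) estimate of the CACE, which divides the intention-to-treat effect on the outcome by the effect of assignment on treatment receipt; it relies on the exclusion restriction and monotonicity and is not the maximum-likelihood or principal-ignorability estimators compared in the article. The data are simulated toy data.

```python
# Wald-type illustration of the CACE: ITT effect on the outcome divided by the
# ITT effect on treatment receipt (valid under the exclusion restriction and
# monotonicity). Not the estimators studied in the paper; toy data only.
import numpy as np

def cace_wald(assigned, received, outcome):
    itt_outcome = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
    itt_receipt = received[assigned == 1].mean() - received[assigned == 0].mean()
    return itt_outcome / itt_receipt

rng = np.random.default_rng(1)
z = rng.integers(0, 2, 2000)            # randomized assignment
complier = rng.random(2000) < 0.6       # latent compliance status
d = z * complier                        # treatment received (no always-takers)
y = 1.0 * d + rng.normal(size=2000)     # true effect of 1.0 among compliers
print(round(cace_wald(z, d, y), 2))     # close to 1.0
```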

    CER Scan [published within the last 30 days]


    1. Am J Epidemiol. 2011 Nov 15;174(10):1204-10. Epub 2011 Oct 7.

    Comparing different strategies for timing of dialysis initiation through inverse probability weighting. Sjölander A, Nyrén O, Bellocco R, Evans M.

    Dialysis has been used in the treatment of patients with end-stage renal disease since the 1960s. Recently, several large observational studies have been conducted to assess whether early initiation of dialysis prolongs survival, as compared with late initiation. However, these studies have used analytic approaches which are likely to suffer from either lead-time bias or immortal-time bias. In this paper, the authors demonstrate that recently developed methods in the causal inference literature can be used to avoid both types of bias and accurately estimate the ideal time for dialysis initiation from observational data. This is illustrated using data from a nationwide population-based cohort of patients with chronic kidney disease in Sweden (1996-2003).
    PMID: 21984655 [PubMed – in process]
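
    As an unofficial sketch of the general idea (not the authors' implementation), the fragment below emulates a dynamic strategy such as "initiate dialysis when eGFR first falls below a threshold" by artificially censoring person-periods at the first deviation from the strategy and re-weighting the remaining follow-up by the inverse probability of staying uncensored, which is what avoids lead-time and immortal-time bias. All column names are hypothetical.

```python
# Sketch of artificial censoring plus inverse-probability-of-censoring weights
# for emulating the strategy "initiate dialysis once eGFR < threshold".
# Person-period columns (id, period, egfr, on_dialysis, age, diabetes) are
# hypothetical; this is an illustration, not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

def emulate_strategy(pp: pd.DataFrame, threshold: float) -> pd.DataFrame:
    pp = pp.sort_values(["id", "period"]).copy()
    should_be_on = pp["egfr"] < threshold
    deviates = should_be_on != pp["on_dialysis"].astype(bool)
    prior_dev = (deviates.astype(int)
                 .groupby(pp["id"]).shift(fill_value=0)
                 .groupby(pp["id"]).cummax()
                 .astype(bool))
    rows = pp[~prior_dev].assign(art_cens=deviates[~prior_dev].astype(int))

    # Model the period-specific probability of artificial censoring to build
    # inverse-probability-of-censoring weights for the remaining follow-up.
    cens = smf.logit("art_cens ~ egfr + age + diabetes", data=rows).fit(disp=0)
    p_stay = 1.0 - cens.predict(rows)
    followed = rows[rows["art_cens"] == 0].copy()
    followed["ipcw"] = 1.0 / p_stay[rows["art_cens"] == 0].groupby(followed["id"]).cumprod()
    return followed   # analyze these weighted person-periods (e.g. pooled logistic model)
```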

    2. BMJ. 2011 Oct 3;343:d5888. doi: 10.1136/bmj.d5888.

    Estimating treatment effects for individual patients based on the results of randomised clinical trials. Dorresteijn JA, Visseren FL, Ridker PM, Wassink AM, Paynter NP, Steyerberg EW, van der Graaf Y, Cook NR. Department of Vascular Medicine, University Medical Center Utrecht, PO Box 85500, 3508 GA Utrecht, Netherlands.

    OBJECTIVES: To predict treatment effects for individual patients based on data from randomised trials, taking rosuvastatin treatment in the primary prevention of cardiovascular disease as an example, and to evaluate the net benefit of making treatment decisions for individual patients based on a predicted absolute treatment effect.

    SETTING: As an example, data were used from the Justification for the Use of Statins in Prevention (JUPITER) trial, a randomised controlled trial evaluating the effect of rosuvastatin 20 mg daily versus placebo on the occurrence of cardiovascular events (myocardial infarction, stroke, arterial revascularisation, admission to hospital for unstable angina, or death from cardiovascular causes).

    POPULATION: 17,802 healthy men and women who had low density lipoprotein cholesterol levels of less than 3.4 mmol/L and high sensitivity C reactive protein levels of 2.0 mg/L or more.

    METHODS: Data from the Justification for the Use of Statins in Prevention trial were used to predict rosuvastatin treatment effect for individual patients based on existing risk scores (Framingham and Reynolds) and on a newly developed prediction model. We compared the net benefit of prediction based rosuvastatin treatment (selective treatment of patients whose predicted treatment effect exceeds a decision threshold) with the net benefit of treating either everyone or no one.

    RESULTS: The median predicted 10 year absolute risk reduction for cardiovascular events was 4.4% (interquartile range 2.6-7.0%) based on the Framingham risk score, 4.2% (2.5-7.1%) based on the Reynolds score, and 3.9% (2.5-6.1%) based on the newly developed model (optimal fit model). Prediction based treatment was associated with more net benefit than treating everyone or no one, provided that the decision threshold was between 2% and 7%, and thus that the number willing to treat (NWT) to prevent one cardiovascular event over 10 years was between 15 and 50.

    CONCLUSIONS: Data from randomised trials can be used to predict treatment effect in terms of absolute risk reduction for individual patients, based on a newly developed model or, if available, existing risk scores. The value of such prediction of treatment effect for medical decision making is conditional on the NWT to prevent one outcome event.

    TRIAL REGISTRATION: Clinicaltrials.gov NCT00239681.
    PMCID: PMC3184644
    PMID: 21968126 [PubMed – in process]

    Free Full Text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3184644/?tool=pubmed
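
    The decision step described above can be reduced to a few lines: predict each patient's absolute risk reduction as baseline risk times (1 minus the relative risk), and treat only when that predicted benefit exceeds the chosen threshold, whose reciprocal is the number willing to treat. The risk values and relative risk in the sketch are hypothetical placeholders, not the Framingham, Reynolds, or trial-derived models.

```python
# Sketch of prediction-based treatment: treat only patients whose predicted
# 10-year absolute risk reduction exceeds a decision threshold. The baseline
# risks and relative risk below are hypothetical, not the published models.
import numpy as np

def predicted_arr(baseline_risk: np.ndarray, relative_risk: float) -> np.ndarray:
    """Absolute risk reduction = risk if untreated - risk if treated."""
    return baseline_risk * (1.0 - relative_risk)

def treat(baseline_risk: np.ndarray, relative_risk: float, threshold: float) -> np.ndarray:
    """True where the predicted benefit justifies treatment; a threshold of 0.02
    corresponds to a number willing to treat of 50, and 0.07 to about 14."""
    return predicted_arr(baseline_risk, relative_risk) >= threshold

risks = np.array([0.03, 0.08, 0.15])                      # hypothetical 10-year risks
print(treat(risks, relative_risk=0.6, threshold=0.04))    # -> [False False  True]
```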

    3. BMC Med Res Methodol. 2011 Sep 21;11:132.

    Benefits of ICU admission in critically ill patients: whether instrumental variable methods or propensity scores should be used. Pirracchio R, Sprung C, Payen D, Chevret S. Département de Biostatistique et Informatique Médicale, Unité INSERM UMR 717, Hôpital Saint Louis, APHP, Paris, 75010, France. romainpirracchio@yahoo.fr

    BACKGROUND: The assessment of the causal effect of Intensive Care Unit (ICU) admission generally involves usual observational designs and thus requires controlling for confounding variables. Instrumental variable analysis is an econometric technique that allows causal inferences of the effectiveness of some treatments during situations to be made when a randomized trial has not been or cannot be conducted. This technique relies on the existence of one variable or “instrument” that is supposed to achieve similar observations with a different treatment for “arbitrary” reasons, thus inducing substantial variation in the treatment decision with no direct effect on the outcome. The objective of the study was to assess the benefit in terms of hospital mortality of ICU admission in a cohort of patients proposed for ICU admission (ELDICUS cohort).

    METHODS: Using this cohort of 8,201 patients triaged for ICU (including 6,752 (82.3%) patients admitted), the benefit of ICU admission was evaluated using 3 different approaches: instrumental variables, standard regression and propensity score matched analyses. We further evaluated the results obtained using different instrumental variable methods that have been proposed for dichotomous outcomes.

    RESULTS: The physician’s main specialization was found to be the best instrument. All instrumental variable models adequately reduced baseline imbalances, but failed to show a significant effect of ICU admission on hospital mortality, with confidence intervals far higher than those obtained in standard or propensity-based analyses.

    CONCLUSIONS: Instrumental variable methods offer an appealing alternative to handle the selection bias related to nonrandomized designs, especially when the presence of significant unmeasured confounding is suspected. Applied to the ELDICUS database, this analysis failed to show any significant beneficial effect of ICU admission on hospital mortality. This result could be due to the lack of statistical power of these methods.
    PMCID: PMC3185268
    PMID: 21936926 [PubMed – in process]

    Free Full Text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3185268/?tool=pubmed
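
    As a minimal illustration of the conventional instrumental-variable approach mentioned above, the sketch below runs a manual two-stage least-squares fit with the physician's specialty (coded as an indicator) as the instrument for ICU admission; it is a linear-probability simplification rather than the dichotomous-outcome IV estimators the authors compare, and the column names are hypothetical.

```python
# Manual two-stage least-squares sketch: physician specialty as the instrument
# for ICU admission, hospital mortality as the outcome. Linear-probability
# simplification with hypothetical column names; the standard errors from this
# two-step fit are not IV-correct, so use a dedicated IV routine for inference.
import pandas as pd
import statsmodels.api as sm

def iv_2sls_estimate(df: pd.DataFrame) -> float:
    # Stage 1: predict ICU admission from the instrument plus covariates.
    z = sm.add_constant(df[["physician_specialty", "age", "severity"]])
    stage1 = sm.OLS(df["icu_admission"], z).fit()
    df = df.assign(icu_hat=stage1.fittedvalues)
    # Stage 2: regress mortality on the instrumented admission and covariates.
    x = sm.add_constant(df[["icu_hat", "age", "severity"]])
    stage2 = sm.OLS(df["hospital_mortality"], x).fit()
    return float(stage2.params["icu_hat"])
```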

    4. Med Care. 2011 Oct;49(10):940-7.

    The mortality risk score and the ADG score: two points-based scoring systems for the Johns Hopkins Aggregated Diagnosis Groups to predict mortality in a general adult population cohort in Ontario, Canada. Austin PC, van Walraven C. Institute for Clinical Evaluative Sciences, Toronto, Ontario, Canada. peter.austin@ices.on.ca

    BACKGROUND: Logistic regression models that incorporated age, sex, and indicator variables for the Johns Hopkins’ Aggregated Diagnosis Groups (ADGs) categories have been shown to accurately predict all-cause mortality in adults.

    OBJECTIVES: To develop 2 different point-scoring systems using the ADGs. The Mortality Risk Score (MRS) collapses age, sex, and the ADGs to a single summary score that predicts the annual risk of all-cause death in adults. The ADG Score derives weights for the individual ADG diagnosis groups.

    RESEARCH DESIGN: Retrospective cohort constructed using population-based administrative data.

    PARTICIPANTS: All 10,498,413 residents of Ontario, Canada, between the ages of 20 and 100 years who were alive on their birthday in 2007 participated in this study. Participants were randomly divided into derivation and validation samples.

    MEASURES: Death within 1 year.

    RESULTS: In the derivation cohort, the MRS ranged from -21 to 139 (median value 29, IQR 17 to 44). In the validation group, a logistic regression model with the MRS as the sole predictor significantly predicted the risk of 1-year mortality with a c-statistic of 0.917. A regression model with age, sex, and the ADG Score had similar performance. Both methods accurately predicted the risk of 1-year mortality across the 20 vigintiles of risk.

    CONCLUSIONS: The MRS combined values for a person’s age, sex, and the Johns Hopkins ADGs to accurately predict 1-year mortality in adults. The ADG Score is a weighted score representing the presence or absence of the 32 ADG diagnosis groups. These scores will help health services researchers conduct risk adjustment using administrative health care databases.
    PMID: 21921849 [PubMed – in process]
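
    The points-based idea can be illustrated in a few lines: fit a logistic model for 1-year death, then scale and round the coefficients to integer points and sum them per person. The scaling constant and variable names below are arbitrary illustrative choices, not the published MRS or ADG Score weights.

```python
# Sketch of deriving an integer points-based score from logistic-regression
# coefficients (scale, then round). The scale factor and columns are
# illustrative only, not the published MRS or ADG Score weights.
import pandas as pd
import statsmodels.api as sm

def derive_points(df: pd.DataFrame, predictors: list[str], scale: float = 5.0) -> pd.Series:
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df["death_1yr"], X).fit(disp=0)
    # Convert each log-odds coefficient into rounded integer points.
    return (fit.params.drop("const") * scale).round().astype(int)

def score_patients(df: pd.DataFrame, points: pd.Series) -> pd.Series:
    # Summary score = sum of each predictor value multiplied by its points.
    return df[points.index].mul(points, axis=1).sum(axis=1)
```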

    5. Stat Med. 2011 Oct 30;30(24):2947-58. doi: 10.1002/sim.4324. Epub 2011 Jul 29.

    Analyzing direct and indirect effects of treatment using dynamic path analysis applied to data from the Swiss HIV Cohort Study. Røysland K, Gran JM, Ledergerber B, von Wyl V, Young J, Aalen OO. Department of Biostatistics, Institute of Basic Medical Sciences, University of Oslo, Norway. kjetil.roysland@medisin.uio.no

    When applying survival analysis, such as Cox regression, to data from major clinical trials or other studies, often only baseline covariates are used. This is typically the case even if updated covariates are available throughout the observation period, which leaves large amounts of information unused. The main reason for this is that such time-dependent covariates often are internal to the disease process, as they are influenced by treatment, and therefore lead to confounded estimates of the treatment effect. There are, however, methods to exploit such covariate information in a useful way. We study the method of dynamic path analysis applied to data from the Swiss HIV Cohort Study. To adjust for time-dependent confounding between treatment and the outcome ‘AIDS or death’, we carried out the analysis on a sequence of mimicked randomized trials constructed from the original cohort data. To analyze these trials together, regular dynamic path analysis is extended to a composite analysis of weighted dynamic path models. Results using a simple path model, with one indirect effect mediated through current HIV-1 RNA level, show that most or all of the total effect goes through HIV-1 RNA for the first 4 years. A similar model, but with CD4 level as mediating variable, shows a weaker indirect effect, but the results are in the same direction. There are many reasons to be cautious when drawing conclusions from estimates of direct and indirect effects. Dynamic path analysis is however a useful tool to explore underlying processes, which are ignored in regular analyses.
    PMID: 21800346 [PubMed – in process]

    6. Epidemiology. 2011 Sep;22(5):718-23.

    A comparison of methods to estimate the hazard ratio under conditions of time-varying confounding and nonpositivity. Naimi AI, Cole SR, Westreich DJ, Richardson DB. Department of Epidemiology, Gillings School of Global Public Health, UNC-Chapel Hill, NC 27599, USA.

    In occupational epidemiologic studies, the healthy worker survivor effect refers to a process that leads to bias in the estimates of an association between cumulative exposure and a health outcome. In these settings, work status acts both as an intermediate and confounding variable and may violate the positivity assumption (the presence of exposed and unexposed observations in all strata of the confounder). Using Monte Carlo simulation, we assessed the degree to which crude, work-status adjusted, and weighted (marginal structural) Cox proportional hazards models are biased in the presence of time-varying confounding and nonpositivity. We simulated the data representing time-varying occupational exposure, work status, and mortality. Bias, coverage, and root mean squared error (MSE) were calculated relative to the true marginal exposure effect in a range of scenarios. For a base-case scenario, using crude, adjusted, and weighted Cox models, respectively, the hazard ratio was biased downward 19%, 9%, and 6%; 95% confidence interval coverage was 48%, 85%, and 91%; and root MSE was 0.20, 0.13, and 0.11. Although marginal structural models were less biased in most scenarios studied, neither standard nor marginal structural Cox proportional hazards models fully resolve the bias encountered under conditions of time-varying confounding and nonpositivity.
    PMCID: PMC3155387 [Available on 2012/9/1]
    PMID: 21747286 [PubMed – in process]
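
    For readers who want to see the weighting step, here is a minimal sketch of stabilized inverse-probability-of-treatment weights feeding a weighted (marginal structural) Cox model on person-period data; the column names and the deliberately simple weight models are hypothetical, not the simulation code used in the paper.

```python
# Sketch of stabilized IPT weights for a marginal structural Cox model on
# person-period data. Columns (id, start, stop, event, exposed, prior_exposure,
# work_status) are hypothetical; weight models are deliberately simple.
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxTimeVaryingFitter

def stabilized_weights(pp: pd.DataFrame) -> pd.Series:
    """Cumulative product over a person's periods of
    P(exposure | history) / P(exposure | history, time-varying confounder)."""
    pp = pp.sort_values(["id", "start"])
    denom = smf.logit("exposed ~ work_status + prior_exposure", data=pp).fit(disp=0)
    numer = smf.logit("exposed ~ prior_exposure", data=pp).fit(disp=0)
    p_d = denom.predict(pp).where(pp["exposed"] == 1, 1 - denom.predict(pp))
    p_n = numer.predict(pp).where(pp["exposed"] == 1, 1 - numer.predict(pp))
    return (p_n / p_d).groupby(pp["id"]).cumprod()

def weighted_cox(pp: pd.DataFrame) -> CoxTimeVaryingFitter:
    pp = pp.assign(sw=stabilized_weights(pp))
    ctv = CoxTimeVaryingFitter()
    ctv.fit(pp[["id", "start", "stop", "event", "exposed", "sw"]],
            id_col="id", start_col="start", stop_col="stop",
            event_col="event", weights_col="sw", robust=True)
    return ctv  # exp(coef) for "exposed" approximates the marginal hazard ratio
```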

    CER Scan [published within the last 90 days]

    1. Stat Biosci. 2011 Sep;3(1):6-27.

    Estimating Decision-Relevant Comparative Effects Using Instrumental Variables. Basu A. Departments of Health Services and Pharmacy, University of Washington, Seattle, 1959 NE Pacific St, Box 357660, Seattle, WA 98195-7660, USA.

    Instrumental variables methods (IV) are widely used in the health economics literature to adjust for hidden selection biases in observational studies when estimating treatment effects. Less attention has been paid in the applied literature to the proper use of IVs if treatment effects are heterogeneous across subjects. Such heterogeneity in effects becomes an issue for IV estimators when individuals’ self-selected choices of treatments are correlated with expected idiosyncratic gains or losses from treatments. We present an overview of the challenges that arise with IV estimators in the presence of effect heterogeneity and self-selection and compare conventional IV analysis with alternative approaches that use IVs to directly address these challenges. Using a Medicare sample of clinically localized breast cancer patients, we study the impact of breast-conserving surgery and radiation versus mastectomy on 3-year survival rates. Our results reveal that the traditional IV results may have masked important heterogeneity in treatment effects. In the context of these results, we discuss the advantages and limitations of conventional and alternative IV methods in estimating mean treatment-effect parameters, the role of heterogeneity in comparative effectiveness research, and the implications for diffusion of technology.
    PMCID: PMC3193796 [Available on 2012/9/1]
    PMID: 22010051 [PubMed]
