The DEcIDE Methods Center publishes a monthly literature scan of current articles of interest to the field of comparative effectiveness research.

You can find them all here.

CER Scan [Epub ahead of print]

  1. Stat Med. 2012 Mar 22. doi: 10.1002/sim.5312. [Epub ahead of print]
    Testing superiority at interim analyses in a non-inferiority trial. Joshua Chen YH, Chen C.
    Merck Research Laboratories, Rahway, NJ, USA. Joshua_chen@merck.com.

    Shift in research and development strategy from developing follow-on or ‘me-too’ drugs to differentiated medical products with potentially better efficacy than the standard of care (e.g., first-in-class, best-in-class, and bio-betters) highlights the scientific and commercial interests in establishing superiority even when a non-inferiority design, adequately powered for a pre-specified non-inferiority margin, is appropriate for various reasons. In this paper, we propose a group sequential design to test superiority at interim analyses in a non-inferiority trial. We will test superiority at the interim analyses using conventional group sequential methods, and we may stop the study because of better efficacy. If the study fails to establish superior efficacy at the interim and final analyses, we will test the primary non-inferiority hypothesis at the final analysis at the nominal level without alpha adjustment. Whereas superiority/non-inferiority testing no longer has the hierarchical structure in which the rejection region for testing superiority is a subset of that for testing non-inferiority, the impact of repeated superiority tests on the false positive rate and statistical power for the primary non-inferiority test at the final analysis is essentially ignorable. For the commonly used O’Brien-Fleming type alpha-spending function, we show that the impact is extremely small based upon Brownian motion boundary-crossing properties. Numerical evaluation further supports the conclusion for other alpha-spending functions with a substantial amount of alpha being spent on the interim superiority tests. We use a clinical trial example to illustrate the proposed design.
    Copyright © 2012 John Wiley & Sons, Ltd.
    PMID: 22438208  [PubMed – as supplied by publisher]
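
    As a worked illustration (not from the paper), the O'Brien-Fleming-type alpha-spending function mentioned in the abstract can be evaluated directly. The sketch below uses the Lan-DeMets form alpha*(t) = 2(1 - Phi(z(alpha/2)/sqrt(t))); the one-sided 0.025 level and the evenly spaced interim looks are illustrative assumptions. Note how little alpha is spent early, which is the property behind the abstract's claim that the impact on the final test is essentially ignorable.

        # Lan-DeMets O'Brien-Fleming-type alpha-spending (illustrative sketch;
        # the 0.025 level and the look schedule are assumptions, not the paper's).
        import numpy as np
        from scipy.stats import norm

        def obf_spending(t, alpha=0.025):
            """Cumulative one-sided alpha spent at information fraction t."""
            t = np.asarray(t, dtype=float)
            return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

        fractions = np.array([0.25, 0.50, 0.75, 1.00])
        cumulative = obf_spending(fractions)
        incremental = np.diff(np.concatenate([[0.0], cumulative]))
        for t, c, i in zip(fractions, cumulative, incremental):
            print(f"t={t:.2f}  cumulative={c:.5f}  incremental={i:.5f}")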

  2. Am J Epidemiol. 2012 Mar 6. [Epub ahead of print]
    Risk Prediction Measures for Case-Cohort and Nested Case-Control Designs: An Application to Cardiovascular Disease. Ganna A, Reilly M, de Faire U, Pedersen N, Magnusson P, Ingelsson E.

    Case-cohort and nested case-control designs are often used to select an appropriate subsample of individuals from prospective cohort studies. Despite the great attention that has been given to the calculation of association estimators, no formal methods have been described for estimating risk prediction measures from these 2 sampling designs. Using real data from the Swedish Twin Registry (2004-2009), the authors sampled unstratified and stratified (matched) case-cohort and nested case-control subsamples and compared them with the full cohort (as "gold standard"). The real biomarker (high density lipoprotein cholesterol) and simulated biomarkers (BIO1 and BIO2) were studied in terms of association with cardiovascular disease, individual risk of cardiovascular disease at 3 years, and main prediction metrics. Overall, stratification improved efficiency, with stratified case-cohort designs being comparable to matched nested case-control designs. Individual risks and prediction measures calculated by using case-cohort and nested case-control designs after appropriate reweighting could be assessed with good efficiency, except for the finely matched nested case-control design, where matching variables could not be included in the individual risk estimation. In conclusion, the authors have shown that case-cohort and nested case-control designs can be used in settings where the research aim is to evaluate the prediction ability of new markers and that matching strategies for nested case-control designs may lead to biased prediction measures.
    PMID: 22396388  [PubMed – as supplied by publisher]
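
    A minimal sketch (simulated data; not the authors' code) of the reweighting idea the abstract describes for an unstratified case-cohort design: all cases are included with probability 1, subcohort controls stand in for the full cohort through inverse sampling-fraction weights, and prediction metrics are then computed with those weights.

        # Case-cohort reweighting for prediction metrics (illustrative sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 20_000                                   # simulated full cohort
        biomarker = rng.normal(size=n)
        event = rng.binomial(1, 1 / (1 + np.exp(-(-4.0 + 0.8 * biomarker))))

        subcohort = rng.random(n) < 0.10             # 10% random subcohort
        in_sample = (event == 1) | subcohort         # all cases + subcohort

        # Inverse-probability-of-sampling weights: cases 1, non-cases 1/0.10.
        w = np.where(event[in_sample] == 1, 1.0, 1.0 / 0.10)
        x = biomarker[in_sample].reshape(-1, 1)
        y = event[in_sample]

        model = LogisticRegression().fit(x, y, sample_weight=w)
        auc = roc_auc_score(y, model.predict_proba(x)[:, 1], sample_weight=w)
        print(f"weighted AUC from the case-cohort sample: {auc:.3f}")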

  3. Lifetime Data Anal. 2012 Mar 2. [Epub ahead of print]
    Comparison of estimators in nested case-control studies with multiple outcomes. Støer NC, Samuelsen SO. Department of Mathematics, University of Oslo, P.O. Box 1053, 0316, Oslo, Norway, nathalcs@math.uio.no.

    Reuse of controls in a nested case-control (NCC) study has not been considered feasible since the controls are matched to their respective cases. However, in the last decade or so, methods have been developed that break the matching and allow for analyses where the controls are no longer tied to their cases. These methods can be divided into two groups; weighted partial likelihood (WPL) methods and full maximum likelihood methods. The weights in the WPL can be estimated in different ways and four estimation procedures are discussed. In addition, we address modifications needed to accommodate left truncation. A full likelihood approach is also presented and we suggest an aggregation technique to decrease the computation time. Furthermore, we generalize calibration for case-cohort designs to NCC studies. We consider a competing risks situation and compare WPL, full likelihood and calibration through simulations and analyses on a real data example.
    PMID: 22382602  [PubMed – as supplied by publisher]
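
    One weight choice behind the WPL methods discussed above can be sketched concretely. Below is an illustrative (not the authors') implementation of Kaplan-Meier-type inclusion probabilities: a control's probability of ever being sampled is one minus the product, over the case times at which it was at risk, of the probability of not being drawn, and its weight is the inverse of that probability.

        # Inclusion-probability weights for a nested case-control design
        # (illustrative sketch; simulated survival data).
        import numpy as np

        def ncc_control_weights(time, status, m=1):
            """status: 1 = case, 0 = censored; m = controls sampled per case."""
            time = np.asarray(time, float)
            status = np.asarray(status, int)
            case_times = np.sort(time[status == 1])
            weights = np.ones(time.size)             # cases: sampled with certainty
            for i in np.flatnonzero(status == 0):
                risk_times = case_times[case_times <= time[i]]
                if risk_times.size == 0:
                    weights[i] = np.nan              # could never enter a sampled risk set
                    continue
                n_at_risk = np.array([(time >= t).sum() for t in risk_times])
                draw_prob = np.minimum(1.0, m / (n_at_risk - 1.0))
                p_included = 1.0 - np.prod(1.0 - draw_prob)
                weights[i] = 1.0 / p_included
            return weights

        rng = np.random.default_rng(1)
        t = rng.exponential(10.0, size=500)
        s = rng.binomial(1, 0.15, size=500)
        w = ncc_control_weights(t, s, m=2)
        print(f"control weights: {np.nanmin(w[s == 0]):.2f} to {np.nanmax(w[s == 0]):.2f}")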

  4. Stat Methods Med Res. 2012 Feb 23. [Epub ahead of print]
    Consistent causal effect estimation under dual misspecification and implications for confounder selection procedures. Gruber S, van der Laan MJ. Department of Epidemiology, Harvard School of Public Health, 677 Huntington Avenue, Kresge 820, Boston, MA, USA.

    In a previously published article in this journal, Vansteelandt et al. [Stat Methods Med Res. Epub ahead of print 12 November 2010. DOI: 10.1177/0962280210387717] address confounder selection in the context of causal effect estimation in observational studies. They discuss several selection strategies and propose a procedure whose performance is guided by the quality of the exposure effect estimator. The authors note that when a particular linearity condition is met, consistent estimation of the target parameter can be achieved even under dual misspecification of models for the association of confounders with exposure and outcome and demonstrate the performance of their procedure relative to other estimators when this condition holds. Our earlier published work on collaborative targeted minimum loss-based learning provides a general theoretical framework for effective confounder selection that explains the findings of Vansteelandt et al. and underscores the appropriateness of their suggestions that a confounder selection procedure should be concerned with directly targeting the quality of the estimate and that desirable estimators produce valid confidence intervals and are robust to dual misspecification.
    PMID: 22368176  [PubMed – as supplied by publisher]
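
    The dual-robustness property at issue can be made concrete with a simple augmented IPW estimator (a sketch of the general idea, not the authors' collaborative TMLE): the estimate below remains consistent for the average treatment effect if either the outcome regression or the propensity model is correctly specified.

        # Doubly robust (AIPW) estimation of an average treatment effect
        # (illustrative sketch with simulated data).
        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        def aipw_ate(X, a, y):
            ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
            mu1 = LinearRegression().fit(X[a == 1], y[a == 1]).predict(X)
            mu0 = LinearRegression().fit(X[a == 0], y[a == 0]).predict(X)
            return np.mean(a * (y - mu1) / ps
                           - (1 - a) * (y - mu0) / (1 - ps)
                           + mu1 - mu0)

        rng = np.random.default_rng(2)
        n = 5_000
        X = rng.normal(size=(n, 3))
        a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
        y = a + X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=n)
        print(f"AIPW ATE estimate: {aipw_ate(X, a, y):.3f} (truth: 1.0)")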

  5. Stat Med. 2012 Feb 24. doi: 10.1002/sim.4504. [Epub ahead of print]
    Variance estimation for stratified propensity score estimators. Williamson EJ, Morley R, Lucas A, Carpenter JR. Centre for MEGA Epidemiology, School of Population Health, University of Melbourne, Melbourne, Australia; Department of Epidemiology and Preventive Medicine, Monash University, Melbourne, Australia. ewi@unimelb.edu.au.

    Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score – the conditional probability of receiving the treatment given observed covariates – is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice. Copyright © 2012 John Wiley & Sons, Ltd.
    PMID: 22362427  [PubMed – as supplied by publisher]
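
    Since the authors recommend bootstrap variance estimation in practice, a minimal sketch of that recommendation follows (simulated data; the quintile stratification and continuous outcome are assumptions, not the paper's). The key point is that the propensity model is re-fitted inside every resample, so the uncertainty from estimating the score is propagated into the standard error.

        # Bootstrap SE for a quintile-stratified propensity score estimator.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def stratified_ps_effect(X, a, y, n_strata=5):
            ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
            cuts = np.quantile(ps, np.linspace(0, 1, n_strata + 1))[1:-1]
            strata = np.searchsorted(cuts, ps, side="right")
            effects, sizes = [], []
            for s in range(n_strata):
                m = strata == s
                if len(np.unique(a[m])) < 2:         # stratum missing one arm
                    continue
                effects.append(y[m & (a == 1)].mean() - y[m & (a == 0)].mean())
                sizes.append(m.sum())
            return np.average(effects, weights=sizes)

        def bootstrap_se(X, a, y, n_boot=200, seed=0):
            rng = np.random.default_rng(seed)
            idx = (rng.integers(0, y.size, y.size) for _ in range(n_boot))
            return np.std([stratified_ps_effect(X[i], a[i], y[i]) for i in idx],
                          ddof=1)

        rng = np.random.default_rng(3)
        n = 4_000
        X = rng.normal(size=(n, 2))
        a = rng.binomial(1, 1 / (1 + np.exp(-0.7 * X[:, 0])))
        y = 0.5 * a + X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=n)
        print(f"effect={stratified_ps_effect(X, a, y):.3f}  "
              f"bootstrap SE={bootstrap_se(X, a, y):.3f}")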

  6. Health Serv Res. 2012 Feb 21. doi: 10.1111/j.1475-6773.2012.01387.x. [Epub ahead of print]
    Measuring Racial/Ethnic Disparities in Health Care: Methods and Practical Issues. Cook BL, McGuire TG, Zaslavsky AM. Department of Psychiatry, Center for Multicultural Mental Health Research, Harvard Medical School, Somerville, MA.

    OBJECTIVE: To review methods of measuring racial/ethnic health care disparities. STUDY DESIGN: Identification and tracking of racial/ethnic disparities in health care will be advanced by application of a consistent definition and reliable empirical methods. We have proposed a definition of racial/ethnic health care disparities based in the Institute of Medicine’s (IOM) Unequal Treatment report, which defines disparities as all differences except those due to clinical need and preferences. After briefly summarizing the strengths and critiques of this definition, we review methods that have been used to implement it. We discuss practical issues that arise during implementation and expand these methods to identify sources of disparities. We also situate the focus on methods to measure racial/ethnic health care disparities (an endeavor predominant in the United States) within a larger international literature in health outcomes and health care inequality. EMPIRICAL APPLICATION: We compare different methods of implementing the IOM definition on measurement of disparities in any use of mental health care and mental health care expenditures using the 2004-2008 Medical Expenditure Panel Survey. CONCLUSION: Disparities analysts should be aware of multiple methods available to measure disparities and their differing assumptions. We prefer a method concordant with the IOM definition. © Health Research and Educational Trust.
    PMID: 22353147  [PubMed – as supplied by publisher]
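
    As a highly simplified sketch of the IOM definition in code (not the authors' procedure, and with hypothetical variables), one can adjust for clinical need while letting SES-mediated differences count toward the disparity: predict use from need alone in the reference group, then compare minority observed use with what their need would predict.

        # Simplified IOM-style disparity: adjust for need, not for SES.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n = 10_000
        minority = rng.binomial(1, 0.3, n)
        need = rng.normal(0.3 * minority, 1.0)       # clinical need differs by group
        ses = rng.normal(-0.5 * minority, 1.0)       # SES also differs by group
        use = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * need + 0.5 * ses))))

        ref = minority == 0
        need_model = LogisticRegression().fit(need[ref].reshape(-1, 1), use[ref])
        expected = need_model.predict_proba(need[~ref].reshape(-1, 1))[:, 1]

        # Negative value: minority use falls short of need-predicted use,
        # here through the SES pathway, which the IOM definition counts.
        print(f"IOM-style disparity estimate: {use[~ref].mean() - expected.mean():.3f}")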

CER Scan [published within the last 30 days]

  1. Emerg Themes Epidemiol. 2012 Mar 19;9(1):1. [Epub ahead of print]
    Causal diagrams in systems epidemiology. Joffe M, Gambhir M, Chadeau-Hyam M, Vineis P.

    Methods of diagrammatic modelling have been greatly developed in the past two decades. Outside the context of infectious diseases, systematic use of diagrams in epidemiology has been mainly confined to the analysis of a single link: that between a disease outcome and its proximal determinant(s). Transmitted causes ("causes of causes") tend not to be systematically analysed. The infectious disease epidemiology modelling tradition models the human population in its environment, typically with the exposure-health relationship and the determinants of exposure being considered at individual and group/ecological levels, respectively. Some properties of the resulting systems are quite general, and are seen in unrelated contexts such as biochemical pathways. Confining analysis to a single link misses the opportunity to discover such properties. The structure of a causal diagram is derived from knowledge about how the world works, as well as from statistical evidence. A single diagram can be used to characterise a whole research area, not just a single analysis – although this depends on the degree of consistency of the causal relationships between different populations – and can therefore be used to integrate multiple datasets. Additional advantages of system-wide models include: the use of instrumental variables – now emerging as an important technique in epidemiology in the context of mendelian randomisation, but under-used in the exploitation of "natural experiments"; the explicit use of change models, which have advantages with respect to inferring causation; and the detection and elucidation of feedback.
    PMID: 22429606  [PubMed – as supplied by publisher]
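
    For readers who want to work with such system-wide diagrams programmatically, a small sketch (with made-up node names) shows how a causal diagram can be encoded as a directed graph and checked for the feedback loops the abstract highlights.

        # Encoding a causal diagram and detecting feedback (illustrative nodes).
        import networkx as nx

        g = nx.DiGraph([
            ("policy", "exposure"),
            ("exposure", "disease"),
            ("disease", "behaviour"),     # illness changes behaviour...
            ("behaviour", "exposure"),    # ...which feeds back onto exposure
        ])

        print("acyclic:", nx.is_directed_acyclic_graph(g))
        for cycle in nx.simple_cycles(g):
            print("feedback loop:", " -> ".join(cycle))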

  2. Pharmacoepidemiol Drug Saf. 2012 Mar;21(3):241-45. doi: 10.1002/pds.2306.
    Subtle issues in model specification and estimation of marginal structural models. Yang W, Joffe MM.

    We review the concept of time-dependent confounding using the example in the paper “Comparative effectiveness of individual angiotensin receptor blockers on risk of mortality in patients with chronic heart failure” by Desai et al. and illustrate how to adjust for it by using inverse probability of treatment weighting in a simulated example. We discuss a few subtle issues that arise in specification of the model for treatment required to fit marginal structural models (MSMs) and in specification of the structural model for the outcome. We discuss the differences between the effects estimated in MSMs and intention-to-treat effects estimated in randomized trials, followed by an outline of some limitations of MSMs. Copyright © 2012 John Wiley & Sons, Ltd.
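
    A minimal sketch of the weighting machinery under discussion (simulated two-period data; not the authors' code): stabilized inverse-probability-of-treatment weights whose numerator conditions only on treatment history while the denominator adds the time-varying confounder affected by previous treatment. The weights would then enter a weighted regression of the outcome on treatment history, which is the MSM itself.

        # Stabilized IPT weights for a marginal structural model (sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        n, T = 5_000, 2
        L = np.zeros((n, T))                         # time-varying confounder
        A = np.zeros((n, T), dtype=int)              # time-varying treatment
        for t in range(T):
            prev = A[:, t - 1] if t > 0 else np.zeros(n)
            L[:, t] = rng.normal(0.5 * prev)         # confounder affected by past treatment
            A[:, t] = rng.binomial(1, 1 / (1 + np.exp(-(L[:, t] - 0.5 * prev))))

        sw = np.ones(n)
        rows = np.arange(n)
        for t in range(T):
            prev = (A[:, t - 1] if t > 0 else np.zeros(n)).reshape(-1, 1)
            den_X = np.column_stack([prev.ravel(), L[:, t]])
            num = LogisticRegression().fit(prev, A[:, t]).predict_proba(prev)[rows, A[:, t]]
            den = LogisticRegression().fit(den_X, A[:, t]).predict_proba(den_X)[rows, A[:, t]]
            sw *= num / den                          # stabilized weight, product over time

        print(f"stabilized weights: mean={sw.mean():.3f}, max={sw.max():.2f}")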

    Comment on:
    Pharmacoepidemiol Drug Saf. 2012 Mar;21(3):233-40. doi: 10.1002/pds.2175. Epub 2011 Jul 22.
    Comparative effectiveness of individual angiotensin receptor blockers on risk of mortality in patients with chronic heart failure. Desai RJ, Ashton CM, Deswal A, Morgan RO, Mehta HB, Chen H, Aparasu RR, Johnson ML. Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC, USA.

    OBJECTIVE: There is little evidence on comparative effectiveness of individual angiotensin receptor blockers (ARBs) in patients with chronic heart failure (CHF). This study compared four ARBs in reducing risk of mortality in clinical practice.
    METHODS: A retrospective analysis was conducted on a national sample of patients diagnosed with CHF from 1 October 1996 to 30 September 2002 identified from Veterans Affairs electronic medical records, with supplemental clinical data obtained from chart review. After excluding patients with exposure to ARBs within the previous 6 months, four treatment groups were defined based on initial use of candesartan, valsartan, losartan, and irbesartan between the index date (1 October 2000) and the study end date (30 September 2002). Time to death was measured concurrently during that period. A marginal structural model controlled for sociodemographic factors, comorbidities, comedications, disease severity (left ventricular ejection fraction), and potential time-varying confounding affected by previous treatment (hospitalization). Propensity scores derived from a multinomial logistic regression were used as inverse probability of treatment weights in a generalized estimating equation to estimate causal effects.
    RESULTS: Among the 1536 patients identified on ARB therapy, irbesartan was most frequently used (55.21%), followed by losartan (21.74%), candesartan (15.23%), and valsartan (7.81%). When compared with losartan, after adjusting for time-varying hospitalization in marginal structural model, candesartan (OR=0.79, 95%CI=0.42-1.50), irbesartan (OR=1.17, 95%CI=0.72-1.90), and valsartan (OR=0.98, 95%CI=0.45-2.14) were found to have similar effectiveness in reducing mortality in CHF patients.
    CONCLUSION: Effectiveness of ARBs in reducing mortality is similar in patients with CHF in everyday clinical practice. Copyright © 2011 John Wiley & Sons, Ltd.
    PMID: 21786364  [PubMed – in process]
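
    The propensity-score step described in the methods can be sketched as follows (simulated data using the abstract's rough usage shares; the covariates are stand-ins, not the study's): a multinomial logistic model gives each patient's probability of receiving the ARB actually received, and its inverse is the weight used in the weighted GEE.

        # Multinomial propensity scores -> IPT weights (illustrative sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        n = 1_536
        X = rng.normal(size=(n, 4))                  # stand-in covariates
        # 0=irbesartan, 1=losartan, 2=candesartan, 3=valsartan
        arb = rng.choice(4, size=n, p=[0.55, 0.22, 0.15, 0.08])

        ps_model = LogisticRegression(max_iter=1000).fit(X, arb)
        p_received = ps_model.predict_proba(X)[np.arange(n), arb]
        iptw = 1.0 / p_received                      # weights for the weighted GEE
        print(f"IPTW: mean={iptw.mean():.2f}, 99th pct={np.quantile(iptw, 0.99):.2f}")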
