The DEcIDE Methods Center publishes a monthly literature scan of current articles of interest to the field of comparative effectiveness research.

You can find them all here.

August 2011

CER Scan [Epub ahead of print]

    1. Pharmacoepidemiol Drug Saf. 2011 Jul 29. doi: 10.1002/pds.2188. [Epub ahead of print]

    Measuring balance and model selection in propensity score methods. Belitser SV, Martens EP, Pestman WR, Groenwold RH, de Boer A, Klungel OH. Department of Pharmacoepidemiology and Pharmacotherapy, Utrecht Institute of Pharmaceutical Sciences, Utrecht University, Utrecht, Netherlands.

    PURPOSE: Propensity score (PS) methods focus on balancing confounders between groups to estimate an unbiased treatment or exposure effect. However, little attention is paid to actually measuring, reporting, and using information on balance, for instance for model selection. We propose using a measure of balance in PS methods and describe several such measures: the overlapping coefficient, the Kolmogorov-Smirnov distance, and the Lévy distance.

    METHODS: We performed simulation studies to estimate the association between bias (i.e., the discrepancy between the true and the estimated treatment effect) and these three measures of balance, as well as several mean-based measures.

    RESULTS: For large sample sizes (n=2000), the average Pearson’s correlation coefficients between bias and the Kolmogorov-Smirnov distance (r=0.89), the Lévy distance (r=0.89), and the absolute standardized mean difference (r=0.90) were similar, whereas the correlation was lower for the overlapping coefficient (r=-0.42). When the sample size decreased to 400, mean-based measures of balance had stronger correlations with bias. Models including all confounding variables, their squares, and interaction terms resulted in smaller bias than models that included only main terms for the confounding variables.

    CONCLUSIONS: We conclude that measures for balance are useful for reporting the amount of balance reached in propensity score analysis and can be helpful in selecting the final PS model. Copyright © 2011 John Wiley & Sons, Ltd.

    PMID: 21805529 [PubMed – as supplied by publisher]
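
    For readers who want to try these balance diagnostics, below is a minimal Python sketch of three of the measures discussed above: the absolute standardized mean difference, the Kolmogorov-Smirnov distance, and the overlapping coefficient (the Lévy distance is omitted for brevity). The data are simulated, and the histogram-based density estimate is an illustrative choice, not the authors' implementation.

```python
# Minimal sketch of propensity score balance measures; all data simulated.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

n = 2000
x = rng.normal(size=n)                 # a confounder
p = 1 / (1 + np.exp(-0.8 * x))         # true propensity of treatment
t = rng.binomial(1, p)                 # treatment indicator
ps = p                                 # pretend these are the fitted PS

ps_treated, ps_control = ps[t == 1], ps[t == 0]

# Absolute standardized mean difference of the PS between groups.
pooled_sd = np.sqrt((ps_treated.var(ddof=1) + ps_control.var(ddof=1)) / 2)
smd = abs(ps_treated.mean() - ps_control.mean()) / pooled_sd

# Kolmogorov-Smirnov distance: max gap between the two empirical CDFs.
ks = ks_2samp(ps_treated, ps_control).statistic

# Overlapping coefficient: shared area under the two PS densities,
# approximated here with a common histogram.
bins = np.linspace(0, 1, 51)
h1, _ = np.histogram(ps_treated, bins=bins, density=True)
h0, _ = np.histogram(ps_control, bins=bins, density=True)
ovl = np.minimum(h1, h0).sum() * (bins[1] - bins[0])

print(f"SMD={smd:.3f}  KS={ks:.3f}  OVL={ovl:.3f}")
```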

    2. Stat Med. 2011 Jul 29. doi: 10.1002/sim.4324. [Epub ahead of print]

    Analyzing direct and indirect effects of treatment using dynamic path analysis applied to data from the Swiss HIV Cohort Study. Røysland K, Gran JM, Ledergerber B, Wyl V, Young J, Aalen OO. Department of Biostatistics, Institute of Basic Medical Sciences, University of Oslo, Norway. kjetil.roysland@medisin.uio.no.

    When applying survival analysis, such as Cox regression, to data from major clinical trials or other studies, often only baseline covariates are used. This is typically the case even if updated covariates are available throughout the observation period, which leaves large amounts of information unused. The main reason for this is that such time-dependent covariates often are internal to the disease process, as they are influenced by treatment, and therefore lead to confounded estimates of the treatment effect. There are, however, methods to exploit such covariate information in a useful way. We study the method of dynamic path analysis applied to data from the Swiss HIV Cohort Study. To adjust for time-dependent confounding between treatment and the outcome ‘AIDS or death’, we carried out the analysis on a sequence of mimicked randomized trials constructed from the original cohort data. To analyze these trials together, regular dynamic path analysis is extended to a composite analysis of weighted dynamic path models. Results using a simple path model, with one indirect effect mediated through current HIV-1 RNA level, show that most or all of the total effect goes through HIV-1 RNA for the first 4 years. A similar model, but with CD4 level as the mediating variable, shows a weaker indirect effect, but the results are in the same direction. There are many reasons to be cautious when drawing conclusions from estimates of direct and indirect effects. Dynamic path analysis is, however, a useful tool for exploring underlying processes that are ignored in regular analyses. Copyright © 2011 John Wiley & Sons, Ltd.

    PMID: 21800346 [PubMed – as supplied by publisher]
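
    Dynamic path analysis refits mediator and outcome regressions over time on the at-risk set; the sketch below shows only the static core of that idea, a single product-of-coefficients decomposition of a treatment effect into direct and indirect parts. The simulated data, variable names, and effect sizes are illustrative assumptions, not the study's model.

```python
# Static direct/indirect effect decomposition; a heavy simplification of
# dynamic path analysis, which repeats this at each event time.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
treat = rng.binomial(1, 0.5, n)                 # randomized treatment
mediator = -1.0 * treat + rng.normal(size=n)    # e.g., log HIV-1 RNA level
outcome = 0.5 * mediator - 0.2 * treat + rng.normal(size=n)

# Mediator model: effect of treatment on the mediator (path a).
a = sm.OLS(mediator, sm.add_constant(treat)).fit().params[1]

# Outcome model with both treatment and mediator (paths c' and b).
X = sm.add_constant(np.column_stack([treat, mediator]))
fit = sm.OLS(outcome, X).fit()
direct, b = fit.params[1], fit.params[2]

indirect = a * b                                # effect routed via the mediator
print(f"direct={direct:.2f}  indirect={indirect:.2f}  total={direct + indirect:.2f}")
```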

    3. Arch Intern Med. 2011 Jul 25. [Epub ahead of print]

    Predicting Death: An Empirical Evaluation of Predictive Tools for Mortality. Siontis GC, Tzoulaki I, Ioannidis JP. University of Ioannina School of Medicine, Ioannina, Greece (Drs Siontis, Tzoulaki, and Ioannidis); Department of Epidemiology and Biostatistics, Imperial College of Medicine, London, England (Drs Tzoulaki and Ioannidis); the Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts University School of Medicine, Boston, Massachusetts (Dr Ioannidis); the Department of Epidemiology, Harvard School of Public Health, Boston (Dr Ioannidis); and the Stanford Prevention Research Center, Stanford University School of Medicine, Stanford, California (Dr Ioannidis).

    BACKGROUND: The ability to predict death is crucial in medicine, and many relevant prognostic tools have been developed for application in diverse settings. We aimed to evaluate the discriminating performance of predictive tools for death and the variability in this performance across different clinical conditions and studies.

    METHODS: We used Medline to identify studies published in 2009 that assessed the accuracy (based on the area under the receiver operating characteristic curve [AUC]) of validated tools for predicting all-cause mortality. For tools where accuracy was reported in 4 or more assessments, we calculated summary accuracy measures. Characteristics of studies of the predictive tools were evaluated to determine if they were associated with the reported accuracy of the tool.

    RESULTS: A total of 94 eligible studies provided data on 240 assessments of 118 predictive tools. The AUC ranged from 0.43 to 0.98 (median [interquartile range], 0.77 [0.71-0.83]), with only 23 of the 240 assessments (10%) reporting excellent discrimination (AUC > 0.90). For 10 tools, accuracy was reported in 4 or more assessments; only 1 tool had a summary AUC exceeding 0.80. Established tools showed large heterogeneity in their performance across different cohorts (I² range, 68%-95%). Reported AUC was higher for tools published in journals with lower impact factor (P = .01), with larger sample size (P = .01), and for those that aimed to predict mortality among the highest-risk patients (P = .002) and among children (P < .001).

    CONCLUSIONS: Most tools designed to predict mortality have only modest accuracy, and there is large variability across various diseases and populations. Most proposed tools do not have documented clinical utility.

    PMID: 21788535 [PubMed – as supplied by publisher]
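
    The accuracy metric pooled in this review is the AUC of a mortality score against observed deaths. Below is a minimal sketch with simulated data; the score and outcome here are invented for illustration, whereas each AUC in the review comes from a published validation study.

```python
# Computing the AUC of a mortality-risk score; all data simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 1000
risk_score = rng.uniform(size=n)                 # a tool's predicted risk
died = rng.binomial(1, 0.1 + 0.4 * risk_score)   # higher score, more deaths

auc = roc_auc_score(died, risk_score)
# "Excellent" discrimination per the review's threshold of AUC > 0.90.
print(f"AUC={auc:.2f}  excellent={auc > 0.90}")
```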

    4. Am J Epidemiol. 2011 Jul 16. [Epub ahead of print]

    Reducing the Variance of the Prescribing Preference-based Instrumental Variable Estimates of the Treatment Effect. Abrahamowicz M, Beauchamp ME, Ionescu-Ittu R, Delaney JA, Pilote L.

    Instrumental variable (IV) methods based on the physician’s prescribing preference may remove bias due to unobserved confounding in pharmacoepidemiologic studies. However, the original IV, defined as the treatment prescribed to a single previous patient of a given physician, yields estimates with substantial variance inflation. The authors proposed, and validated in simulations, a new method to reduce the variance of IV estimates even when physicians’ preferences change over time. First, a potential “change-time,” after which the physician’s preference has changed, was estimated for each physician. Next, all patients of a given physician were divided into 2 homogeneous subsets: those treated before the change-time versus those treated after it. The new IV was defined as the proportion of all previous patients in the corresponding homogeneous subset who were prescribed a specific drug. In simulations, all alternative IV estimators avoided the strong bias of the conventional estimates. The change-time method reduced the standard deviation of the estimates by approximately 30% relative to the original previous-patient-based IV. In an empirical example, the proposed IV correlated better with the actual treatment and yielded smaller standard errors than alternative IV estimators. Therefore, the new method improves the overall accuracy of IV estimates in studies with unobserved confounding and time-varying prescribing preferences.

    PMID: 21765169 [PubMed – as supplied by publisher]
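
    Below is a minimal sketch of constructing the proposed proportion-based instrument, assuming each physician's change-time is already known (the paper estimates it from the data); the table layout and column names are illustrative.

```python
# Building a prescribing-preference instrument within change-time subsets;
# toy data, change-times taken as given rather than estimated.
import pandas as pd

df = pd.DataFrame({
    "physician": [1, 1, 1, 1, 2, 2, 2],
    "visit_order": [1, 2, 3, 4, 1, 2, 3],
    "drug_a": [1, 1, 0, 0, 0, 1, 1],        # 1 = drug A prescribed
    "change_time": [3, 3, 3, 3, 2, 2, 2],   # first visit of the new regime
})

# Split each physician's patients into pre/post change-time subsets.
df["regime"] = (df["visit_order"] >= df["change_time"]).astype(int)

# IV = proportion of *previous* patients in the same homogeneous subset
# who received drug A (an expanding mean of prior prescriptions).
# The first patient in each subset has no history, hence NaN.
df["iv"] = (
    df.groupby(["physician", "regime"])["drug_a"]
      .transform(lambda s: s.shift().expanding().mean())
)
print(df)
```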

    5. Am J Epidemiol. 2011 Jul 12. [Epub ahead of print]

    Performance of Disease Risk Scores, Propensity Scores, and Traditional Multivariable Outcome Regression in the Presence of Multiple Confounders. Arbogast PG, Ray WA.

    Propensity scores are widely used in cohort studies to improve performance of regression models when considering large numbers of covariates. Another type of summary score, the disease risk score (DRS), which estimates disease probability conditional on nonexposure, has also been suggested. However, little is known about how it compares with propensity scores. Monte Carlo simulations were conducted comparing regression models using the DRS and the propensity score with models that directly adjust for all of the individual covariates. The DRS was calculated in 2 ways: from the unexposed population and from the full cohort. Compared with traditional multivariable outcome regression models, all 3 summary scores had comparable performance for moderate correlation between exposure and covariates and, for strong correlation, the full-cohort DRS and propensity score had comparable performance. When traditional methods had model misspecification, propensity scores and the full-cohort DRS had superior performance. All 4 models were affected by the number of events per covariate, with propensity scores and traditional multivariable outcome regression least affected. These data suggest that, for cohort studies for which covariates are not highly correlated with exposure, the DRS, particularly that calculated from the full cohort, is a useful tool.

    PMID: 21749976 [PubMed – as supplied by publisher]
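
    The two summary scores compared in this paper are easy to mimic on simulated data: a propensity score fit on the full cohort and a disease risk score fit among the unexposed, each then used as a single adjustment covariate in the outcome model. A minimal sketch follows (not the authors' simulation design).

```python
# Propensity score vs. unexposed-only disease risk score; data simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
covars = rng.normal(size=(n, 5))                       # confounders
lin = covars @ np.array([0.4, 0.3, 0.2, 0.1, 0.1])
exposure = rng.binomial(1, 1 / (1 + np.exp(-lin)))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(lin + 0.5 * exposure))))

# Propensity score: P(exposure | covariates), fit on everyone.
ps = sm.Logit(exposure, sm.add_constant(covars)).fit(disp=0).predict()

# Unexposed-only DRS: P(outcome | covariates, exposure = 0), predicted for all.
unexp = exposure == 0
drs_fit = sm.Logit(outcome[unexp], sm.add_constant(covars[unexp])).fit(disp=0)
drs = drs_fit.predict(sm.add_constant(covars))

# Outcome models adjusting for each summary score instead of 5 covariates.
for name, score in [("PS", ps), ("DRS", drs)]:
    X = sm.add_constant(np.column_stack([exposure, score]))
    est = sm.Logit(outcome, X).fit(disp=0).params[1]
    print(f"{name}-adjusted log-OR for exposure: {est:.2f}")
```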

    6. Epidemiology. 2011 Jul 8. [Epub ahead of print]

    A Comparison of Methods to Estimate the Hazard Ratio Under Conditions of Time-varying Confounding and Nonpositivity. Naimi AI, Cole SR, Westreich DJ, Richardson DB. Department of Epidemiology, Gillings School of Global Public Health, UNC-Chapel Hill, NC and Department of Obstetrics and Gynecology and Duke Global Health Institute, Duke University.

    In occupational epidemiologic studies, the healthy worker survivor effect refers to a process that leads to bias in the estimates of an association between cumulative exposure and a health outcome. In these settings, work status acts both as an intermediate and confounding variable and may violate the positivity assumption (the presence of exposed and unexposed observations in all strata of the confounder). Using Monte Carlo simulation, we assessed the degree to which crude, work-status adjusted, and weighted (marginal structural) Cox proportional hazards models are biased in the presence of time-varying confounding and nonpositivity. We simulated the data representing time-varying occupational exposure, work status, and mortality. Bias, coverage, and root mean squared error (MSE) were calculated relative to the true marginal exposure effect in a range of scenarios. For a base-case scenario, using crude, adjusted, and weighted Cox models, respectively, the hazard ratio was biased downward 19%, 9%, and 6%; 95% confidence interval coverage was 48%, 85%, and 91%; and root MSE was 0.20, 0.13, and 0.11. Although marginal structural models were less biased in most scenarios studied, neither standard nor marginal structural Cox proportional hazards models fully resolve the bias encountered under conditions of time-varying confounding and nonpositivity.

    PMID: 21747286 [PubMed – as supplied by publisher]
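
    Below is a minimal sketch of the weighted (marginal structural) Cox model in the simpler point-treatment case; the paper's time-varying setting additionally requires time-updated weights and a counting-process data layout. The data are simulated, the weights use the known treatment model for brevity (in practice they are estimated, e.g., by logistic regression), and lifelines is one implementation choice.

```python
# Stabilized IPT weights plus a weighted Cox model; point-treatment
# simplification of a marginal structural model, all data simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 3000
work = rng.binomial(1, 0.5, n)               # confounder (work status)
expo = rng.binomial(1, 0.3 + 0.4 * work)     # exposure depends on work status
rate = 0.1 * np.exp(0.4 * expo + 0.6 * work)
time = rng.exponential(1 / rate)
event = (time < 10).astype(int)
time = np.minimum(time, 10)                   # administrative censoring

df = pd.DataFrame({"time": time, "event": event, "expo": expo, "work": work})

# Stabilized inverse-probability-of-treatment weights (true model used here).
p_num = expo.mean()                           # marginal P(exposure)
p_den = 0.3 + 0.4 * work                      # P(exposure | work status)
df["w"] = np.where(expo == 1, p_num / p_den, (1 - p_num) / (1 - p_den))

# Weighted Cox model with a robust variance to account for the weights.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "expo", "w"]], duration_col="time",
        event_col="event", weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```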

CER Scan [published within the last 30 days]

    1. BMC Health Serv Res. 2011 Jul 21;11(1):171. [Epub ahead of print]

    Does adding risk-trends to survival models improve in-hospital mortality predictions? A cohort study. Wong J, Taljaard M, Forster AJ, van Walraven C.

    BACKGROUND: Clinicians informally assess changes in patients’ status over time to prognosticate their outcomes. The incorporation of trends in patient status into regression models could improve their ability to predict outcomes. In this study, we used a unique approach to measure trends in patient hospital death risk and determined whether the incorporation of these trend measures into a survival model improved the accuracy of its risk predictions.

    METHODS: We included all adult inpatient hospitalizations between 1 April 2004 and 31 March 2009 at our institution. We used the daily mortality risk scores from an existing time-dependent survival model to create five trend indicators: absolute and relative percent change in the risk score from the previous day; absolute and relative percent change in the risk score from the start of the trend; and number of days with a trend in the risk score. In the derivation set, we determined which trend indicators were associated with time to death in hospital, independent of the existing covariates. In the validation set, we compared the predictive performance of the existing model with and without the trend indicators.

    RESULTS: Three trend indicators were independently associated with time to hospital mortality: the absolute change in the risk score from the previous day; the absolute change in the risk score from the start of the trend; and the number of consecutive days with a trend in the risk score. However, adding these trend indicators to the existing model resulted in only small improvements in model discrimination and calibration.

    CONCLUSIONS: We produced several indicators of trend in patient risk that were significantly associated with time to hospital death independent of the model used to create them. In other survival models, our approach of incorporating risk trends could be explored to improve their performance without the collection of additional data.

    PMID: 21777460 [PubMed – as supplied by publisher]

    Open Access: http://www.biomedcentral.com/content/pdf/1472-6963-11-171.pdf
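
    Below is a minimal sketch of deriving trend indicators like these from a patient's daily risk scores; the table layout, column names, and the definition of a "trend" (consecutive days moving in the same direction) follow our reading of the abstract and are illustrative.

```python
# Daily risk-trend indicators from a long-format risk-score table; toy data.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "patient": 1,
    "day": [1, 2, 3, 4, 5],
    "risk": [0.10, 0.12, 0.15, 0.15, 0.11],
})

g = df.groupby("patient")["risk"]
df["abs_change_prev"] = g.diff()         # absolute change vs. previous day
df["rel_change_prev"] = g.pct_change()   # relative change vs. previous day

# Days with a trend: consecutive days the risk moved in the same direction.
direction = np.sign(df["abs_change_prev"]).fillna(0)
new_trend = direction.ne(direction.shift()) | direction.eq(0)
trend_id = new_trend.cumsum()
df["trend_days"] = df.groupby(["patient", trend_id]).cumcount()

# Absolute change from the start of the current trend.
trend_start = df.groupby(["patient", trend_id])["risk"].transform("first")
df["abs_change_start"] = df["risk"] - trend_start
print(df)
```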

    2. Stat Med. 2011 Jul 20;30(16):1917-32. doi: 10.1002/sim.4262. Epub 2011 May 3.

    Alternative methods for testing treatment effects on the basis of multiple outcomes: Simulation and case study. Yoon FB, Fitzmaurice GM, Lipsitz SR, Horton NJ, Laird NM, Normand SL. Harvard Medical School, Boston, MA, U.S.A. yoon@hcp.med.harvard.edu.

    In clinical trials multiple outcomes are often used to assess treatment interventions. This paper presents an evaluation of likelihood-based methods for jointly testing treatment effects in clinical trials with multiple continuous outcomes. Specifically, we compare the power of joint tests of treatment effects obtained from joint models for the multiple outcomes with univariate tests based on modeling the outcomes separately. We also consider the power and bias of tests when data are missing, a common feature of many trials, especially in psychiatry. Our results suggest that joint tests capitalize on the correlation of multiple outcomes and are more powerful than standard univariate methods, especially when outcomes are missing completely at random. When outcomes are missing at random, test procedures based on correctly specified joint models are unbiased, while standard univariate procedures are not. Results of a simulation study are reported, and the methods are illustrated in an example from the Clinical Antipsychotic Trials of Intervention Effectiveness for schizophrenia. Copyright © 2011 John Wiley & Sons, Ltd.

    PMCID: PMC3116112 [Available on 2012/7/20]

    PMID: 21538986 [PubMed – in process]
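
    Below is a minimal sketch of the contrast the paper studies: a joint test of the treatment effect on two correlated outcomes versus separate univariate tests. MANOVA stands in here for the paper's likelihood-based joint models, and all data are simulated.

```python
# Joint multivariate test vs. univariate tests of a treatment effect.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(5)
n = 200
treat = rng.binomial(1, 0.5, n)
# Two outcomes sharing correlated errors (the situation joint tests exploit).
err = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
df = pd.DataFrame({"treat": treat,
                   "y1": 0.3 * treat + err[:, 0],
                   "y2": 0.25 * treat + err[:, 1]})

# Joint test of the treatment effect on y1 and y2 together.
mv = MANOVA.from_formula("y1 + y2 ~ treat", data=df)
print(mv.mv_test())

# Univariate tests, one outcome at a time.
for y in ["y1", "y2"]:
    fit = sm.OLS(df[y], sm.add_constant(df["treat"])).fit()
    print(y, "p =", round(fit.pvalues["treat"], 4))
```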

    3. Health Serv Res. 2011 Aug;46(4):1259-80. doi: 10.1111/j.1475-6773.2011.01253.x. Epub 2011 Mar 17.

    Crowd-out and Exposure Effects of Physical Comorbidities on Mental Health Care Use: Implications for Racial-Ethnic Disparities in Access. Lê Cook B, McGuire TG, Alegría M, Normand SL. Center for Multicultural Mental Health Research, 120 Beacon St., 4th Floor, Somerville, MA 02143; Department of Psychiatry, Harvard Medical School, Boston, MA; Department of Health Care Policy, Harvard Medical School, Boston, MA.

    Objectives. In disparities models, researchers adjust for differences in “clinical need,” including indicators of comorbidities. We reconsider this practice, assessing (1) if and how having a comorbidity changes the likelihood of recognition and treatment of mental illness; and (2) differences in mental health care disparities estimates with and without adjustment for comorbidities. Data. Longitudinal data from the 2000-2007 Medical Expenditure Panel Survey (n=11,083), split into pre- and post-periods, for white, Latino, and black adults with probable need for mental health care. Study Design. First, we tested a crowd-out effect (comorbidities decrease initiation of mental health care after a primary care provider [PCP] visit) using logistic regression models and an exposure effect (comorbidities cause more PCP visits, increasing initiation of mental health care) using instrumental variable methods. Second, we assessed the impact of adjustment for comorbidities on disparity estimates. Principal Findings. We found no evidence of a crowd-out effect but strong evidence for an exposure effect. The number of post-period visits positively predicted initiation of mental health care. Adjusting for racial/ethnic differences in comorbidities increased black-white disparities and decreased Latino-white disparities. Conclusions. Positive exposure findings suggest that intensive follow-up programs shown to reduce disparities in chronic-care management may have additional indirect effects on reducing mental health care disparities.

    PMCID: PMC3130831 [Available on 2012/8/1]

    PMID: 21413984 [PubMed – in process]

Theme: CER Education

    1. Pharmacoepidemiol Drug Saf. 2011 Aug;20(8):797-804. doi: 10.1002/pds.2100. Epub 2011 Jan 10.

    Curricular considerations for pharmaceutical comparative effectiveness research. Murray MD. Purdue University College of Pharmacy and Regenstrief Institute, Indianapolis, USA. mmurray@regenstrief.org.

    In the U.S., pharmacoepidemiology and related health professions can potentially flourish with the congressional appropriation of $1.1 billion of federal funding for comparative effectiveness research (CER). A direct result of this legislation will be the need for sufficient numbers of trained scientists and decision-makers to address the research and implementation associated with CER. An interdisciplinary expert panel, composed mostly of professionals with pharmaceutical interests, was convened to examine the knowledge, skills, and abilities to be considered in the development of a CER curriculum for the health professions focusing predominantly on pharmaceuticals. A limitation of the panel’s composition was that it did not represent the full breadth of comparative effectiveness research, which additionally includes devices, services, diagnostics, behavioral treatments, and delivery system changes; this bias affects the generalizability of the findings. Notwithstanding, important components of the curriculum identified by the panel included study design considerations and understanding the strengths and limitations of data sources. Important skills and abilities included methods for adjusting for differences in comparator group characteristics to control confounding and bias, data management skills, and clinical skills and insight into the relevance of comparisons. Most of the knowledge, skills, and abilities identified by the panel were consistent with the training of pharmacoepidemiologists. While comparative effectiveness research is broader than the pharmaceutical sciences, pharmacoepidemiologists have much to offer academic and professional CER training programs. As such, pharmacoepidemiologists should have a central role in curricular design and in providing the necessary training for needed comparative effectiveness researchers within the realm of the pharmaceutical sciences. Copyright © 2011 John Wiley & Sons, Ltd.

    PMID: 21796716 [PubMed – in process]

    2. Pharmacoepidemiol Drug Saf. 2011 Aug;20(8):805-6. doi: 10.1002/pds.2122. Epub 2011 May 25.

    The central role of pharmacoepidemiology in comparative effectiveness research education: critical next steps. Selker HP. Tufts Clinical and Translational Science Institute, Tufts University, Boston, MA, USA; Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, MA, USA. hselker@tuftsmedicalcenter.org.

    PMID: 21618339 [PubMed – in process]

    3. Pharmacoepidemiol Drug Saf. 2011 Aug;20(8):807-9. doi: 10.1002/pds.2173. Epub 2011 Jun 17.

    Starting the conversation. Lawrence W. Center for Outcomes and Evidence, Agency for Healthcare Research and Quality, 540 Gaither Rd., Rockville, MD, 20850, USA. William.lawrence@ahrq.hhs.gov.

    PMID: 21681851 [PubMed – in process]
