The DEcIDE Methods Center publishes a monthly literature scan of current articles of interest to the field of comparative effectiveness research.

You can find them all here.

February 2012

CER Scan [Epub ahead of print]

  1. Bias in Observational Studies of Prevalent Users: Lessons for Comparative Effectiveness Research From a Meta-Analysis of Statins. Danaei G, Tavakkoli M, Hernán MA. Am J Epidemiol. 2012 Jan 5. [Epub ahead of print]

    Randomized clinical trials (RCTs) are usually the preferred strategy with which to generate evidence of comparative effectiveness, but conducting an RCT is not always feasible. Though observational studies and RCTs often provide comparable estimates, the questioning of observational analyses has recently intensified because of randomized-observational discrepancies regarding the effect of postmenopausal hormone replacement therapy on coronary heart disease. Reanalyses of observational data that excluded prevalent users of hormone replacement therapy led to attenuated discrepancies, which raises the question of whether exclusion of prevalent users should be generally recommended. In the current study, the authors evaluated the effect of excluding prevalent users of statins in a meta-analysis of observational studies of persons with cardiovascular disease. The pooled, multivariate-adjusted mortality hazard ratio for statin use was 0.77 (95% confidence interval (CI): 0.65, 0.91) in 4 studies that compared incident users with nonusers, 0.70 (95% CI: 0.64, 0.78) in 13 studies that compared a combination of prevalent and incident users with nonusers, and 0.54 (95% CI: 0.45, 0.66) in 13 studies that compared prevalent users with nonusers. The corresponding hazard ratio from 18 RCTs was 0.84 (95% CI: 0.77, 0.91). It appears that the greater the proportion of prevalent statin users in observational studies, the larger the discrepancy between observational and randomized estimates.
    PMID: 22223710
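
    For readers who want to see the mechanics behind pooled estimates like these, the following is a minimal sketch (in Python) of fixed-effect, inverse-variance pooling of study-level hazard ratios. The inputs are made-up numbers for illustration, not the estimates from Danaei et al., and the paper's actual meta-analytic model is not reproduced here.

      # Minimal sketch: fixed-effect (inverse-variance) pooling of log hazard ratios.
      # The inputs below are illustrative, NOT the estimates from the meta-analysis above.
      import math

      # Each tuple: (hazard ratio, lower 95% CI, upper 95% CI) for one study.
      studies = [(0.80, 0.65, 0.98), (0.72, 0.60, 0.86), (0.85, 0.70, 1.03)]

      weights, weighted_logs = [], []
      for hr, lo, hi in studies:
          log_hr = math.log(hr)
          # Approximate the standard error from the 95% CI width on the log scale.
          se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
          w = 1.0 / se ** 2                    # inverse-variance weight
          weights.append(w)
          weighted_logs.append(w * log_hr)

      pooled_log = sum(weighted_logs) / sum(weights)
      pooled_se = math.sqrt(1.0 / sum(weights))
      lower = math.exp(pooled_log - 1.96 * pooled_se)
      upper = math.exp(pooled_log + 1.96 * pooled_se)
      print(f"Pooled HR {math.exp(pooled_log):.2f} (95% CI {lower:.2f}, {upper:.2f})")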

CER Scan [published within the last 30 days]

  1. The "best balance" allocation led to optimal balance in cluster-controlled trials. de Hoop E, Teerenstra S, van Gaal BG, Moerbeek M, Borm GF. Department of Epidemiology, Biostatistics and HTA, Radboud University Nijmegen Medical Centre, PO Box 9101, 6500 HB Nijmegen, The Netherlands. J Clin Epidemiol. 2012 Feb;65(2):132-7. Epub 2011 Aug 12.

    OBJECTIVE: Balance of prognostic factors between treatment groups is desirable because it improves the accuracy, precision, and credibility of the results. In cluster-controlled trials, imbalance can easily occur by chance when the number of clusters is small. If all clusters are known at the start of the study, the "best balance" allocation method (BB) can be used to obtain optimal balance. This method will be compared with other allocation methods.
    STUDY DESIGN AND SETTING: We carried out a simulation study to compare the balance obtained with BB, minimization, unrestricted randomization, and matching for four to 20 clusters and one to five categorical prognostic factors at cluster level.
    RESULTS: BB resulted in a better balance than unrestricted randomization in 13-100% of the situations, than minimization in 0-61% of the situations, and than matching in 0-88%. The superior performance of BB increased as the number of clusters and/or the number of factors increased.
    CONCLUSION: BB results in a better balance of prognostic factors than randomization, minimization, stratification, and matching in most situations. Furthermore, BB cannot result in a worse balance of prognostic factors than the other methods. Copyright © 2012 Elsevier Inc. All rights reserved.
    PMID: 21840173 
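
    The abstract describes "best balance" only at a high level. As a rough illustration of the general idea (enumerate candidate allocations of clusters to two arms and keep the allocation with the smallest imbalance on categorical cluster-level factors), here is a simplified Python sketch. The cluster data and the imbalance measure are assumptions made for illustration, not the authors' exact criterion or software.

      # Simplified sketch of a "best balance"-style allocation for a two-arm cluster
      # trial: enumerate equal-sized splits of the clusters and keep the split with
      # the smallest imbalance on categorical cluster-level prognostic factors.
      # The imbalance metric is an assumption for illustration, not the paper's.
      from itertools import combinations
      from collections import Counter

      # Hypothetical clusters, each with two categorical prognostic factors.
      clusters = {
          "A": ("urban", "large"), "B": ("rural", "small"),
          "C": ("urban", "small"), "D": ("rural", "large"),
          "E": ("urban", "large"), "F": ("rural", "small"),
      }

      def imbalance(arm1, arm2):
          """Sum of absolute differences in factor-level counts between arms."""
          total = 0
          n_factors = len(next(iter(clusters.values())))
          for f in range(n_factors):
              c1 = Counter(clusters[c][f] for c in arm1)
              c2 = Counter(clusters[c][f] for c in arm2)
              for level in set(c1) | set(c2):
                  total += abs(c1[level] - c2[level])
          return total

      names = sorted(clusters)
      half = len(names) // 2
      best = min(
          ((set(arm1), set(names) - set(arm1)) for arm1 in combinations(names, half)),
          key=lambda arms: imbalance(*arms),
      )
      print("Best-balanced split:", sorted(best[0]), "vs", sorted(best[1]),
            "imbalance =", imbalance(*best))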

  2. Challenges in designing comparative-effectiveness trials for antidepressants. Leon AC. Departments of Psychiatry and Public Health, Weill Cornell Medical College, New York, New York, USA. Clin Pharmacol Ther. 2012 Feb;91(2):165-7. doi: 10.1038/clpt.2011.208.

    Comparative-effectiveness antidepressant trials offer promise to provide empirical evidence for clinicians choosing among interventions. Whether such trials posit superiority or noninferiority (NI) hypotheses, they pose formidable challenges. For instance, if meaningful antidepressant differences are seen in comparative-superiority trials, they will be small. NI hypothesis testing, on the other hand, requires an a priori NI margin and evidence of trial assay sensitivity. Either design demands unusually large samples, which could render such trials infeasible.
    PMID: 22261683
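
    To see why a noninferiority (NI) comparison of two active antidepressants demands unusually large samples, the sketch below computes an approximate per-arm sample size for a one-sided NI test of two response proportions under a normal approximation. The response rates, margin, and power are hypothetical choices, not values taken from the article.

      # Minimal sketch: approximate per-arm sample size for a noninferiority (NI)
      # comparison of two response proportions, assuming equal true response rates.
      # The response rate and NI margin below are hypothetical, not from the article.
      from scipy.stats import norm

      def ni_sample_size(p_control, p_treatment, margin, alpha=0.025, power=0.90):
          """Per-arm n for a one-sided NI test of proportions (normal approximation)."""
          z_a = norm.ppf(1 - alpha)
          z_b = norm.ppf(power)
          variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
          effect = margin - (p_control - p_treatment)  # distance from the NI boundary
          return (z_a + z_b) ** 2 * variance / effect ** 2

      # For example, 50% response on both arms and a 5-percentage-point NI margin:
      print(round(ni_sample_size(0.50, 0.50, margin=0.05)))  # roughly 2100 per arm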

FEBRUARY THEME: Selected Methods Manuscripts from the Pharmacoepidemiology and Drug Safety Mini-Sentinel Supplement

  1. The U.S. Food and Drug Administration’s Mini-Sentinel program: status and direction (pages 1–8). Platt R, Carnahan RM, Brown JS, Chrischilles E, Curtis LH, Hennessy S, Nelson JC, Racoosin JA, Robb M, Schneeweiss S, Toh S, Weiner MG. Article first published online: 19 JAN 2012 | DOI: 10.1002/pds.2343
    The Mini-Sentinel is a pilot program that is developing methods, tools, resources, policies, and procedures to facilitate the use of routinely collected electronic healthcare data to perform active surveillance of the safety of marketed medical products, including drugs, biologics, and medical devices. The U.S. Food and Drug Administration (FDA) initiated the program in 2009 as part of its Sentinel Initiative, in response to a Congressional mandate in the FDA Amendments Act of 2007. After two years, Mini-Sentinel includes 31 academic and private organizations. It has developed policies, procedures, and technical specifications for developing and operating a secure distributed data system comprised of separate data sets that conform to a common data model covering enrollment, demographics, encounters, diagnoses, procedures, and ambulatory dispensing of prescription drugs. The distributed data sets currently include administrative and claims data from 2000 to 2011 for over 300 million person-years, 2.4 billion encounters, 38 million inpatient hospitalizations, and 2.9 billion dispensings. Selected laboratory results and vital signs data recorded after 2005 are also available. There is an active data quality assessment and characterization program, and eligibility for medical care and pharmacy benefits is known. Systematic reviews of the literature have assessed the ability of administrative data to identify health outcomes of interest, and procedures have been developed and tested to obtain, abstract, and adjudicate full-text medical records to validate coded diagnoses. Mini-Sentinel has also created a taxonomy of study designs and analytical approaches for many commonly occurring situations, and it is developing new statistical and epidemiologic methods to address certain gaps in analytic capabilities. Assessments are performed by distributing computer programs that are executed locally by each data partner. The system is in active use by FDA, with the majority of assessments performed using customizable, reusable queries (programs). Prospective and retrospective assessments that use customized protocols are conducted as well. To date, several hundred unique programs have been distributed and executed. Current activities include active surveillance of several drugs and vaccines, expansion of the population, enhancement of the common data model to include additional types of data from electronic health records and registries, development of new methodologic capabilities, and assessment of methods to identify and validate additional health outcomes of interest. Copyright © 2012 John Wiley & Sons, Ltd.

    Link to Free PDF: http://onlinelibrary.wiley.com/doi/10.1002/pds.2343/pdf
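
    As a purely illustrative sketch of what a person-keyed common data model looks like in practice, the toy tables below mimic a few of the domains listed above (enrollment, demographics, dispensings) in Python/pandas. The table and column names are hypothetical simplifications invented for this example, not the actual Mini-Sentinel common data model specification.

      # Illustrative only: a toy, simplified person-keyed data model in the spirit
      # of the domains listed above. Table and column names are hypothetical.
      import pandas as pd

      enrollment = pd.DataFrame({
          "person_id": ["p1", "p2"],
          "enr_start": ["2009-01-01", "2010-06-01"],
          "enr_end": ["2011-12-31", "2011-12-31"],
          "drug_coverage": [True, True],
      })
      demographics = pd.DataFrame({
          "person_id": ["p1", "p2"], "birth_year": [1950, 1972], "sex": ["F", "M"],
      })
      dispensings = pd.DataFrame({
          "person_id": ["p1", "p1", "p2"],
          "dispense_date": ["2010-03-01", "2010-04-01", "2010-07-15"],
          "ndc": ["00000-0001", "00000-0001", "00000-0002"],
          "days_supply": [30, 30, 90],
      })

      # A typical locally run summarization: persons with any dispensing of a product.
      users = dispensings.loc[dispensings["ndc"] == "00000-0001", "person_id"].nunique()
      print(f"{users} user(s) of product 00000-0001")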

  2. A policy framework for public health uses of electronic health data (pages 18–22). McGraw D, Rosati K, Evans B. Article first published online: 19 JAN 2012 | DOI: 10.1002/pds.2319
    Successful implementation of a program of active safety surveillance of drugs and medical products depends on public trust. This article summarizes how the initial pilot phase of the FDA’s Sentinel Initiative, Mini-Sentinel, is being conducted in compliance with applicable federal and state laws. The article also sets forth the attributes of Mini-Sentinel that enhance privacy and public trust, including the use of a distributed data system (where identifiable information remains at the data partners) and the adoption by participants of additional mandatory policies and procedures implementing fair information practices. The authors conclude by discussing the implications of this model for other types of secondary health data uses. Copyright © 2012 John Wiley & Sons, Ltd.

    Link to Free PDF: http://onlinelibrary.wiley.com/doi/10.1002/pds.2319/pdf

  3. Design considerations, architecture, and use of the Mini-Sentinel distributed data system (pages 23–31). Curtis LH, Weiner MG, Boudreau DM, Cooper WO, Daniel GW, Nair VP, Raebel MA, Beaulieu NU, Rosofsky R, Woodworth TS, Brown JS. Article first published online: 19 JAN 2012 | DOI: 10.1002/pds.2336
    Purpose: We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA.
    Methods: A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data.
    Results: As of July 2011, information on 99 260 976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316 009 067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center.
    Conclusion: This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.

    Link to Free PDF: http://onlinelibrary.wiley.com/doi/10.1002/pds.2336/pdf
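
    The "same program run locally at every partner, only summaries returned" idea can be made concrete with a small sketch: each data partner executes identical code against its own data and sends back aggregate counts, which the coordinating center then combines. All names and numbers below are invented for illustration; this is not the Mini-Sentinel query tooling.

      # Minimal sketch of distributed querying: partners return only aggregates,
      # and the coordinating center pools them. No patient-level data are shared.
      from dataclasses import dataclass

      @dataclass
      class PartnerSummary:
          partner: str
          new_users: int
          person_years: float
          events: int

      def combine(summaries):
          """Pool partner-level aggregates into overall counts and an event rate."""
          users = sum(s.new_users for s in summaries)
          py = sum(s.person_years for s in summaries)
          events = sum(s.events for s in summaries)
          rate = events / py if py else float("nan")
          return users, py, rate

      summaries = [
          PartnerSummary("partner_A", 1200, 950.0, 14),
          PartnerSummary("partner_B", 800, 610.5, 9),
      ]
      users, py, rate = combine(summaries)
      print(f"{users} new users, {py:.1f} person-years, {rate:.4f} events/person-year")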

  4. Using high-dimensional propensity scores to automate confounding control in a distributed medical product safety surveillance system (pages 41–49). Rassen JA, Schneeweiss S. Article first published online: 19 JAN 2012 | DOI: 10.1002/pds.2328
    Distributed medical product safety monitoring systems such as the Sentinel System, to be developed as part of the Food and Drug Administration’s Sentinel Initiative, will require automation of large parts of the safety evaluation process to achieve the necessary speed and scale at reasonable cost without sacrificing validity. Although certain functions will require investigator intervention, confounding control is one area that can largely be automated. The high-dimensional propensity score (hd-PS) algorithm is one option for automated confounding control in longitudinal healthcare databases. In this article, we discuss the use of hd-PS for automating confounding control in sequential database cohort studies, as applied to safety monitoring systems. In particular, we discuss the robustness of the covariate selection process, the potential for over- or under-selection of variables including the possibilities of M-bias and Z-bias, the computation requirements, the practical considerations in a federated database network, and the cases where automated confounding adjustment may not function optimally. We also outline recent improvements to the algorithm and show how the algorithm has performed in several published studies. We conclude that despite certain limitations, hd-PS offers substantial advantages over non-automated alternatives in active product safety monitoring systems. Copyright © 2012 John Wiley & Sons, Ltd.

    Link to Free PDF: http://onlinelibrary.wiley.com/doi/10.1002/pds.2328/pdf
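
    A central step in hd-PS-style algorithms is prioritizing empirically identified codes by the confounding bias each could plausibly induce, using a Bross-type formula based on a code's prevalence among exposed and unexposed patients and its association with the outcome. The sketch below shows that prioritization step alone, with toy inputs; it is a simplification for illustration, not the full published algorithm.

      # Simplified sketch of hd-PS-style covariate prioritization: rank candidate
      # codes by the absolute log of the potential confounding bias (Bross-type
      # formula). Inputs are toy numbers; this is not the full published algorithm.
      import math

      # code: (prevalence in exposed, prevalence in unexposed, covariate-outcome RR)
      candidates = {
          "dx_hypertension": (0.40, 0.25, 1.8),
          "rx_comedication": (0.30, 0.10, 0.9),
          "px_ecg": (0.15, 0.14, 1.1),
      }

      def abs_log_bias(p_exposed, p_unexposed, rr_outcome):
          """Multiplicative bias a binary covariate could induce (direction-agnostic)."""
          rr = max(rr_outcome, 1.0 / rr_outcome)
          bias = (p_exposed * (rr - 1) + 1) / (p_unexposed * (rr - 1) + 1)
          return abs(math.log(bias))

      ranked = sorted(candidates, key=lambda c: abs_log_bias(*candidates[c]), reverse=True)
      print("Codes in priority order:", ranked)
      # The top-ranked codes would then feed a conventional propensity score model.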

  5. When should case-only designs be used for safety monitoring of medical products? (pages 50–61). Maclure M, Fireman B, Nelson JC, Hua W, Shoaibi A, Paredes A, Madigan D. Article first published online: 19 JAN 2012 | DOI: 10.1002/pds.2330
    Purpose: To assess case-only designs for surveillance with administrative databases.
    Methods: We reviewed literature on two designs that are observational analogs to crossover experiments: the self-controlled case series (SCCS) and the case-crossover (CCO) design.
    Results: SCCS views the ‘experiment’ prospectively, comparing outcome risks in windows with different exposures. CCO retrospectively compares exposure frequencies in case and control windows. The main strength of case-only designs is that they entail self-controlled analyses that eliminate confounding and selection bias by time-invariant characteristics not recorded in healthcare databases. They also protect privacy and are computationally efficient, as they require fewer subjects and variables. They are better than cohort designs for investigating transient effects of accurately recorded preventive agents, for example, vaccines. They are problematic if timing of self-administration is sporadic and dissociated from dispensing times, for example, analgesics. They tend to have less exposure misclassification bias and time-varying confounding if exposures are brief. Standard SCCS designs are bidirectional (using time both before and after the first exposure event), so they are more susceptible than CCOs to reverse-causality bias, including immortal-time bias. This is true also for sequence symmetry analysis, a simplified SCCS. Unidirectional CCOs use only time before the outcome, so they are less affected by reverse causality but susceptible to exposure-trend bias. Modifications of SCCS and CCO partially deal with these biases. The head-to-head comparison of multiple products helps to control residual biases.
    Conclusion: The case-only analyses of intermittent users complement the cohort analyses of prolonged users because their different biases compensate for one another. Copyright © 2012 John Wiley & Sons, Ltd.

    Link to Free PDF: http://onlinelibrary.wiley.com/doi/10.1002/pds.2330/pdf
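
    As a concrete illustration of the case-crossover logic described above, the sketch below estimates an odds ratio from discordant exposure status in a hazard window versus a single earlier control window for each case (a McNemar-type estimate). The window structure and counts are hypothetical.

      # Minimal sketch of a case-crossover analysis with one control window per case:
      # the conditional odds ratio is the ratio of discordant cases (exposed in the
      # hazard window only vs. exposed in the control window only). Toy data.
      import math

      # Each case: (exposed in hazard window, exposed in control window)
      cases = ([(True, False)] * 18 + [(False, True)] * 9 +
               [(True, True)] * 5 + [(False, False)] * 40)

      b = sum(1 for hz, ctl in cases if hz and not ctl)  # discordant: hazard only
      c = sum(1 for hz, ctl in cases if ctl and not hz)  # discordant: control only
      or_hat = b / c
      se_log_or = math.sqrt(1 / b + 1 / c)
      lower = math.exp(math.log(or_hat) - 1.96 * se_log_or)
      upper = math.exp(math.log(or_hat) + 1.96 * se_log_or)
      print(f"Case-crossover OR = {or_hat:.2f} (95% CI {lower:.2f}, {upper:.2f})")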

  6. Challenges in the design and analysis of sequentially monitored postmarket safety surveillance evaluations using electronic observational health care data (pages 62–71). Nelson JC, Cook AJ, Yu O, Dominguez C, Zhao S, Greene SK, Fireman BH, Jacobsen SJ, Weintraub ES, Jackson LA. Article first published online: 19 JAN 2012 | DOI: 10.1002/pds.2324
    Purpose: Many challenges arise when conducting a sequentially monitored medical product safety surveillance evaluation using observational electronic data captured during routine care. We review existing sequential approaches for potential use in this setting, including a continuous sequential testing method that has been utilized within the Vaccine Safety Datalink (VSD) and group sequential methods, which are used widely in randomized clinical trials.
    Methods: Using both simulated data and preliminary data from an ongoing VSD evaluation, we discuss key sequential design considerations, including sample size and duration of surveillance, shape of the signaling threshold, and frequency of interim testing.
    Results and Conclusions: All designs control the overall Type 1 error rate across all tests performed, but each yields different tradeoffs between the probability and timing of true and false positive signals. Designs tailored to monitor efficacy outcomes in clinical trials have been well studied, but less consideration has been given to optimizing design choices for observational safety settings, where the hypotheses, population, prevalence and severity of the outcomes, implications of signaling, and costs of false positive and negative findings are very different. Analytic challenges include confounding, missing and partially accrued data, high misclassification rates for outcomes presumptively defined using diagnostic codes, and unpredictable changes in dynamically accessed data over time (e.g., differential product uptake). Many of these factors influence the variability of the adverse events under evaluation and, in turn, the probability of committing a Type 1 error. Thus, to ensure proper Type 1 error control, planned sequential thresholds should be adjusted over time to account for these issues. Copyright © 2012 John Wiley & Sons, Ltd.

    Link to Free PDF: http://onlinelibrary.wiley.com/doi/10.1002/pds.2324/pdf
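
    The point that repeated looks inflate the Type 1 error unless thresholds are adjusted can be seen in a small simulation: test a null z-statistic at several interim looks against the same unadjusted cutoff and count how often at least one look "signals". The normal model with independent increments below is a deliberate simplification of the surveillance setting, chosen only to make the inflation visible.

      # Small simulation of Type 1 error inflation under repeated, unadjusted looks.
      # Under the null, a z-statistic is tested at 10 interim looks with the same
      # nominal one-sided 0.05 cutoff; the familywise error ends up far above 0.05.
      import numpy as np

      rng = np.random.default_rng(0)
      n_sims, n_looks = 20_000, 10
      z_crit = 1.645  # nominal one-sided 0.05 cutoff at every look

      increments = rng.standard_normal((n_sims, n_looks))
      cumulative = np.cumsum(increments, axis=1)
      # z-statistic at look k is based on the information accrued so far.
      z = cumulative / np.sqrt(np.arange(1, n_looks + 1))
      signalled = (z > z_crit).any(axis=1)

      print(f"Probability of at least one false signal over {n_looks} looks: "
            f"{signalled.mean():.3f}")  # substantially above 0.05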

  7. Statistical approaches to group sequential monitoring of postmarket safety surveillance data: current state of the art for use in the Mini-Sentinel pilot (pages 72–81). Cook AJ, Tiwari RC, Wellman RD, Heckbert SR, Li L, Heagerty P, Marsh T, Nelson JC. Article first published online: 19 JAN 2012 | DOI: 10.1002/pds.2320
    Purpose: This manuscript describes the current statistical methodology available for active postmarket surveillance of pre-specified safety outcomes using a prospective incident user concurrent control cohort design with existing electronic healthcare data.
    Methods: Motivation of the active postmarket surveillance setting is provided using the Food and Drug Administration’s Mini-Sentinel Pilot as an example. Four sequential monitoring statistical methods are presented including the Lan–Demets error spending approach, a matched likelihood ratio test statistic approach with the binomial MaxSPRT as a special case, the conditional sequential sampling procedure with stratification, and a generalized estimating equation regression approach using permutation. Information on the assumptions, limitations, and advantages of each approach is provided, including how each method defines sequential monitoring boundaries, what test statistic is used, and how robust it is to settings of rare events or frequent testing.
    Results: A hypothetical example of how the approaches could be applied to data comparing a medical product of interest, drug A, to a concurrent control drug, drug B, is presented, including the type of information one would have available for monitoring such drugs. Copyright © 2012 John Wiley & Sons, Ltd.

    Link to Free PDF: http://onlinelibrary.wiley.com/doi/10.1002/pds.2320/pdf
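
    To illustrate the error-spending idea behind the Lan–Demets approach mentioned above, the sketch below allocates a one-sided alpha of 0.025 across five planned looks using an O'Brien–Fleming-type spending function, then derives conservative per-look thresholds in a Bonferroni-like way. Exact group sequential boundaries require the recursive joint-distribution calculation implemented in group sequential software; the numbers here are illustrative only.

      # Sketch of alpha spending: an O'Brien-Fleming-type spending function spends
      # very little alpha at early looks and most of it near the end. Per-look
      # thresholds below ignore the correlation between looks, so they are
      # conservative approximations, not exact group sequential boundaries.
      import numpy as np
      from scipy.stats import norm

      alpha = 0.025                     # overall one-sided Type 1 error
      looks = np.linspace(0.2, 1.0, 5)  # information fractions at 5 planned looks

      def obf_spending(t, alpha):
          """O'Brien-Fleming-type spending: alpha*(t) = 2 - 2*Phi(z_{alpha/2}/sqrt(t))."""
          return 2 - 2 * norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t))

      cumulative = obf_spending(looks, alpha)
      incremental = np.diff(np.concatenate(([0.0], cumulative)))
      approx_bounds = norm.ppf(1 - incremental)  # conservative; ignores correlation

      for t, cum, inc, b in zip(looks, cumulative, incremental, approx_bounds):
          print(f"info={t:.1f}  alpha spent={cum:.5f}  this look={inc:.5f}  z>={b:.2f}")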

  8. A protocol for active surveillance of acute myocardial infarction in association with the use of a new antidiabetic pharmaceutical agent (pages 282–290). Fireman B, Toh S, Butler MG, Go AS, Joffe HV, Graham DJ, Nelson JC, Daniel GW, Selby JV. Article first published online: 19 JAN 2012 | DOI: 10.1002/pds.2337
    Purpose: To describe a protocol for active surveillance of acute myocardial infarction (AMI) in users of a recently approved oral antidiabetic medication, saxagliptin, and to provide the rationale for decisions made in drafting the protocol.
    Methods: A new-user cohort design is planned for evaluating data from at least four Mini-Sentinel data partners from 1 August 2009 (following US Food and Drug Administration’s approval of saxagliptin) through mid-2013. New users of saxagliptin will be compared in separate analyses with new users of sitagliptin, pioglitazone, long-acting insulins, and second-generation sulfonylureas. Two approaches to controlling for confounding will be evaluated: matching by exposure propensity score and stratification by AMI risk score. The primary analyses will use Cox regression models specified in a way that does not require pooling of patient-level data from the data partners. The Cox models are fit to summarized data on risk sets composed of saxagliptin users and similar comparator users at the time of an AMI. Secondary analyses will use alternative methods including Poisson regression and will explore whether further adjustment for covariates available only at some data partners (e.g., blood pressure) modifies results.
    Results: The results of this study are pending.
    Conclusions: The proposed protocol describes a design for surveillance to evaluate the safety of a newly marketed agent as postmarket experience accrues. It uses data from multiple partner organizations without requiring sharing of patient-level data and compares alternative approaches to controlling for confounding. It is hoped that this initial active surveillance project of the Mini-Sentinel will provide insights that inform future population-based surveillance of medical product safety. Copyright © 2012 John Wiley & Sons, Ltd.

    Link to Free PDF: http://onlinelibrary.wiley.com/doi/10.1002/pds.2337/pdf
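
    The protocol's idea of fitting Cox models to summarized risk-set data, rather than pooled patient-level records, can be illustrated in its simplest form: with a single binary exposure and no tied events, each risk set contributes only the case's exposure status and the counts of exposed and comparator patients still at risk. The toy data and code below are a sketch of that idea under those simplifying assumptions, not the protocol's actual analysis programs.

      # Minimal sketch: an unadjusted Cox partial likelihood for one binary exposure
      # can be evaluated from risk-set summaries alone (the case's exposure status
      # plus counts of exposed/comparator patients at risk at each event time), so
      # no patient-level data need to leave the data partners. Toy data below.
      import math
      from scipy.optimize import minimize_scalar

      # Each event: (case exposed?, n exposed at risk, n comparator at risk)
      risk_sets = [(1, 40, 60), (0, 38, 55), (1, 35, 50), (1, 30, 47), (0, 25, 40)]

      def neg_log_partial_likelihood(log_hr):
          total = 0.0
          for case_exposed, n_exposed, n_comparator in risk_sets:
              total += log_hr * case_exposed
              total -= math.log(n_exposed * math.exp(log_hr) + n_comparator)
          return -total

      fit = minimize_scalar(neg_log_partial_likelihood)
      print(f"Estimated hazard ratio: {math.exp(fit.x):.2f}")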
