Different Meta-analysis Strategies Produce Different Findings

Estimates of treatment outcomes based on meta-analyses differ depending on the strategy of analysis used, statisticians report.

Meta-analyses of randomized clinical trials are generally considered to provide some of the best evidence of treatment efficacy. However, it is not always clear which trials should be included in a meta-analysis, and it has been suggested that meta-analyses including all trials produce precise but biased estimates.

Agnes Dechartres (Centre de Recherche Epidemiologie et Statistique, INSERM) et al. compared treatment outcomes estimated by meta-analysis of all trials with those of several other strategies, reporting their findings in the August 13 issue of JAMA.

The alternative strategies were the single most precise trial (the trial with the narrowest confidence interval), meta-analysis restricted to the largest 25% of trials, limit meta-analysis (adjusted for small-study effects), and meta-analysis of trials at low overall risk of bias. The study included 163 meta-analyses published from 2008 through 2010 in journals with high impact factors and from 2011 through 2013 in the Cochrane Database of Systematic Reviews; 92 (705 trials) assessed subjective outcomes and 71 (535 trials) assessed objective outcomes.
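As a rough illustration of how such strategies can diverge, the following Python sketch pools the same hypothetical set of trials three ways: all trials, the single most precise trial, and the largest 25% of trials. The numbers are made up, and simple fixed-effect (inverse-variance) pooling is used for brevity; the study itself relied on more elaborate models.

```python
import numpy as np

def pooled_estimate(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate and standard error."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses**2
    est = np.sum(w * effects) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

# Made-up per-trial log odds ratios, standard errors, and sample sizes:
# three small trials with large effects, two large trials near the null.
effects = np.array([-0.45, -0.30, -0.60, -0.10, -0.05])
ses     = np.array([ 0.30,  0.25,  0.35,  0.10,  0.08])
n       = np.array([  120,   150,    90,  1200,  1800])

# Strategy 1: meta-analysis of all trials
est_all, se_all = pooled_estimate(effects, ses)

# Strategy 2: single most precise trial (narrowest confidence interval)
i = np.argmin(ses)
est_one, se_one = effects[i], ses[i]

# Strategy 3: meta-analysis restricted to the largest 25% of trials
big = n >= np.quantile(n, 0.75)
est_big, se_big = pooled_estimate(effects[big], ses[big])

for label, est, se in [("all trials        ", est_all, se_all),
                       ("most precise trial", est_one, se_one),
                       ("largest 25% trials", est_big, se_big)]:
    print(f"{label}: {est:+.2f} "
          f"(95% CI {est - 1.96*se:+.2f} to {est + 1.96*se:+.2f})")
```

With these invented numbers, the all-trials estimate lies further from the null than either restricted estimate; under random-effects weighting, which gives small trials relatively more weight, such divergence is typically more pronounced.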

Dechartres et al. found that treatment outcome estimates differed depending on the analytic strategy used. Estimated treatment outcomes were larger with meta-analysis of all trials than with the single most precise trial, meta-analysis of the largest trials, or limit meta-analysis. The difference between strategies was substantial for 51% of meta-analyses with subjective outcomes and 39% of those with objective outcomes. The authors found no difference in treatment outcomes based on overall risk of bias.

Dechartres et al. conclude that the instability they observed can alter the conclusions drawn from this form of analysis, and that systematic sensitivity analyses are needed.

Dechartres et al. write that their findings do not indicate which strategy is best, but that they raise important questions about meta-analyses and the need to re-evaluate some of their underlying principles.

“We recommend that authors of meta-analyses systematically assess the robustness of their results by performing sensitivity analyses,” the authors write. They propose comparing the meta-analysis result with the result from the single most precise trial or from a meta-analysis of the largest trials, and interpreting the pooled result cautiously if these findings do not agree.
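One simple way such a check might be operationalized is sketched below: compare the pooled estimate against a reference strategy and flag disagreement. The ratio-of-odds-ratios summary and the 0.2 threshold are illustrative choices here, not cutoffs published by the authors.

```python
import math

def sensitivity_check(pooled_log_or, reference_log_or, tol=0.2):
    """Compare a pooled log odds ratio with a reference estimate
    (e.g. the single most precise trial or a meta-analysis of the
    largest trials). `tol`, on the log scale, is an arbitrary
    illustrative threshold, not a validated cutoff."""
    diff = pooled_log_or - reference_log_or
    ror = math.exp(diff)  # ratio of odds ratios between the two strategies
    return ror, abs(diff) <= tol

# Hypothetical values: all-trials meta-analysis vs. most precise trial
ror, agree = sensitivity_check(pooled_log_or=-0.35, reference_log_or=-0.08)
print(f"ROR = {ror:.2f} -> "
      f"{'results agree' if agree else 'interpret cautiously'}")
```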

In an editorial accompanying the article, Jesse A. Berlin (Johnson & Johnson; Titusville, NJ) and Robert M. Golub (Deputy Editor, JAMA) write that these findings reinforce concerns that journals and readers have about meta-analysis as a study design. They state that the findings deserve consideration not only in the planning of such studies but also in journal peer review and evaluation, and that they reinforce the need for careful interpretation.

“Meta-analysis has the potential to be the best source of evidence to inform decision making. The underlying methods have become much more sophisticated in the last few decades,” say Berlin and Golub. They state that readers must approach these studies, as with all other literature, as imperfect information requiring critical appraisal and assessment of the applicability of the findings to individual patients.
