Journals Library


The full text of this issue is available as a PDF document from the Toolkit section on this page.

Abstract

OBJECTIVES

To survey how often indirect comparisons are used in systematic reviews and to evaluate the methods used in their analysis and interpretation; to identify alternative statistical approaches for analysing indirect comparisons; to assess the properties of the different statistical methods used to perform them; and to compare direct and indirect estimates of the same effects within reviews.

DATA SOURCES

Electronic databases.

REVIEW METHODS

The Database of Abstracts of Reviews of Effects (DARE) was searched for systematic reviews involving meta-analysis of randomised controlled trials (RCTs) that reported both direct and indirect comparisons, or indirect comparisons alone. A systematic review of MEDLINE and other databases was carried out to identify published methods for analysing indirect comparisons. Study designs were created using data from the International Stroke Trial. Random samples of patients receiving aspirin, heparin or placebo in 16 centres were used to create meta-analyses, with half of the trials comparing aspirin with placebo and the other half comparing heparin with placebo. Methods for indirect comparisons were then used to estimate the contrast between aspirin and heparin. The whole process was repeated 1000 times and the results were compared with those from direct comparisons and with theoretical results. Further detailed case studies comparing the results of direct and indirect comparisons of the same effects were undertaken.
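The simulation design described above can be sketched in outline. The event risks, number of trials and per-arm sample sizes below are illustrative stand-ins, not the actual International Stroke Trial values, and a simple fixed-effect inverse-variance pooling is assumed:

```python
import math
import random

random.seed(1)

def meta_lor(trials):
    """Fixed-effect inverse-variance pooled log odds ratio and its SE."""
    num, wsum = 0.0, 0.0
    for a, b, c, d in trials:  # events / non-events in treatment and control arms
        lor = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d
        num += lor / var
        wsum += 1 / var
    return num / wsum, math.sqrt(1 / wsum)

def simulate_trial(p_treat, p_ctrl, n=500):
    """One two-arm trial with n patients per arm and binomial outcomes."""
    events_t = sum(random.random() < p_treat for _ in range(n))
    events_c = sum(random.random() < p_ctrl for _ in range(n))
    return (events_t, n - events_t, events_c, n - events_c)

# Illustrative event risks standing in for aspirin, heparin and placebo.
p_asp, p_hep, p_plc = 0.10, 0.11, 0.13

estimates = []
for _ in range(200):  # the report repeated its whole process 1000 times
    asp_meta = [simulate_trial(p_asp, p_plc) for _ in range(8)]
    hep_meta = [simulate_trial(p_hep, p_plc) for _ in range(8)]
    lor_asp, _ = meta_lor(asp_meta)
    lor_hep, _ = meta_lor(hep_meta)
    estimates.append(lor_asp - lor_hep)  # adjusted indirect contrast

mean_est = sum(estimates) / len(estimates)
print(f"mean indirect log OR (aspirin vs heparin): {mean_est:.3f}")
```

Each repetition pools the aspirin-versus-placebo trials and the heparin-versus-placebo trials separately, then subtracts the pooled log odds ratios to obtain the indirect aspirin-versus-heparin contrast.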

RESULTS

Of the reviews identified through DARE, 31/327 (9.5%) included indirect comparisons. A further five reviews including indirect comparisons were identified through electronic searching. Few reviews carried out a formal analysis, and some based their analysis on the naive addition of data from the treatment arms of interest. Few methodological papers were identified. Some valid approaches for aggregate data that could be applied using standard software were found: the adjusted indirect comparison, meta-regression and, for binary data only, multiple logistic regression (fixed effect models only). Simulation studies showed that the naive method is liable to bias and also produces over-precise answers. Several methods provide correct answers if strong but unverifiable assumptions are fulfilled. Four times as many similarly sized trials are needed for the indirect approach to have the same power as directly randomised comparisons. Detailed case studies comparing direct and indirect comparisons of the same effect show considerable statistical discrepancies, but the direction of the discrepancy is unpredictable.
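The adjusted indirect comparison named above subtracts the two pooled log odds ratios against the common comparator; because the two variances add, the indirect estimate is less precise than a direct one of the same size, which is the intuition behind the power result. A minimal sketch on the log odds ratio scale, with hypothetical pooled inputs:

```python
import math

def adjusted_indirect_comparison(lor_ac, se_ac, lor_bc, se_bc):
    """Adjusted indirect comparison of A vs B through a common comparator C,
    on the log odds ratio scale. Inputs are pooled log odds ratios and
    standard errors from two separate meta-analyses (A vs C, and B vs C)."""
    lor_ab = lor_ac - lor_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)  # variances add
    ci = (lor_ab - 1.96 * se_ab, lor_ab + 1.96 * se_ab)
    return lor_ab, se_ab, ci

# Hypothetical pooled results: A vs placebo and B vs placebo.
lor, se, (lo, hi) = adjusted_indirect_comparison(-0.25, 0.10, -0.10, 0.12)
print(f"OR (A vs B) = {math.exp(lor):.2f}, "
      f"95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f}")
```

Because the indirect variance is the sum of the two component variances, the comparison respects the randomisation within each set of trials, unlike the naive addition of treatment arms.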

CONCLUSIONS

Direct evidence from good-quality RCTs should be used wherever possible. Without this evidence, it may be necessary to look for indirect comparisons from RCTs. However, the results may be susceptible to bias. When making indirect comparisons within a systematic review, an adjusted indirect comparison method should ideally be used employing the random effects model. If both direct and indirect comparisons are possible within a review, it is recommended that these be done separately before considering whether to pool data. There is a need to evaluate methods for the analysis of indirect comparisons for continuous data and for empirical research into how different methods of indirect comparison perform in cases where there is a large treatment effect. Further study is needed into when it is appropriate to look at indirect comparisons and when to combine both direct and indirect comparisons. Research into how evidence from indirect comparisons compares to that from non-randomised studies may also be warranted. Investigations using individual patient data from a meta-analysis of several RCTs using different protocols and an evaluation of the impact of choosing different binary effect measures for the inverse variance method would also be useful.

