

The full text of this issue is available as a PDF document from the Toolkit section on this page.


Abstract

BACKGROUND

There is controversy about the value of evidence from non-randomised study designs for assessing the effectiveness of healthcare interventions. Advocates of quasi-experimental and observational (QEO) studies argue that evidence from randomised controlled trials (RCTs) is often difficult or impossible to obtain, or is inadequate to answer the question of interest. Advocates of RCTs point out that QEO studies are more susceptible to bias, and cite published comparisons suggesting that QEO estimates tend to find a greater benefit than RCT estimates. However, such comparisons are often cited selectively, may be unsystematic, and may fail to distinguish between different explanations for any discrepancies observed.

OBJECTIVES

The aim was to investigate the association between methodological quality and the magnitude of estimates of effectiveness by systematically comparing estimates of effectiveness derived from RCTs and QEO studies. Quantifying any such association should help healthcare decision-makers to judge the strength of evidence from non-randomised studies. Two strategies were used to minimise the influence of differences in external validity between RCTs and QEO studies: (1) comparison of RCT and QEO study estimates of the effectiveness of any intervention, where both estimates were reported in a single paper; and (2) comparison of RCT and QEO study estimates of effectiveness for specified interventions, where the estimates were reported in different papers. The authors also sought to identify study designs that have been proposed to address one or more of the problems often found with conventional RCTs.

METHODS

DATA SOURCES

Relevant literature was identified from: the Cochrane Library, MEDLINE, EMBASE, DARE and the Science Citation Index; the references of relevant papers already identified; and experts. Electronic searches were very difficult to design and yielded few papers for the first strategy and when identifying study designs.

CHOICE OF INTERVENTIONS TO REVIEW FOR STRATEGIES 1 AND 2

For strategy 1, any intervention was eligible. For strategy 2, interventions for which the population, intervention and outcome investigated were anticipated to be homogeneous across studies were selected for review: mammographic screening of women to reduce mortality from breast cancer (MSBC), and folic acid supplementation (FAS) to prevent neural tube defects in women trying to conceive.

DATA EXTRACTION AND QUALITY ASSESSMENT

Data were extracted by the first author and checked by the second author. Disagreements were resolved with reference to the paper concerned. For strategy 1, study quality was scored using a checklist to assess whether the RCT and QEO study estimates were derived from the same populations, whether the assessment of outcomes was 'blinded', and the extent to which the QEO study estimate took account of possible confounding. For strategy 2, a more detailed instrument was used to assess study quality on four dimensions: the quality of reporting, the generalisability of the results, and the extents to which the estimates of effectiveness may have been subject to bias and to confounding. All quality assessments were carried out by three people.

DATA SYNTHESIS AND ANALYSIS

For strategy 1, pairs of comparisons between RCT and QEO study estimates were classified as high or low quality. Seven indices of the size of discrepancies between estimates of effect size and outcome frequency were calculated, where possible, for each comparison. The distributions of the size and direction of the discrepancies were compared for high- and low-quality comparisons. For strategy 2, three analyses were carried out. Attributes of the instrument were described by κ statistics, percentage agreement and Cronbach's α values. Regression analyses were used to investigate variations in study quality. (ABSTRACT TRUNCATED)
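The truncated abstract names these statistics only in outline (seven discrepancy indices, κ, percentage agreement, Cronbach's α) without defining them, so the following Python sketch is illustrative rather than a reproduction of the report's analysis: it computes two plausible discrepancy indices for a paired RCT and QEO estimate (the ratio of their odds ratios and the difference of their log odds ratios) and Cohen's κ for two raters' quality judgements. The function names and all input figures are hypothetical.

```python
# Illustrative sketch only: the report's seven discrepancy indices and its
# agreement statistics are not specified in the abstract, so the index
# choices, function names and all input numbers below are hypothetical.
import math


def odds_ratio(events_tx: int, n_tx: int, events_ctl: int, n_ctl: int) -> float:
    """Odds ratio for a binary outcome from a 2x2 table (treated vs control)."""
    a, b = events_tx, n_tx - events_tx
    c, d = events_ctl, n_ctl - events_ctl
    return (a * d) / (b * c)


def discrepancy_indices(or_rct: float, or_qeo: float) -> dict:
    """Two example indices of the discrepancy between an RCT and a QEO study
    estimate of the same effect: their ratio and the difference in log odds."""
    return {
        "ratio_of_odds_ratios": or_qeo / or_rct,
        "difference_in_log_odds_ratios": math.log(or_qeo) - math.log(or_rct),
    }


def cohen_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n)
        for c in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)


if __name__ == "__main__":
    # Made-up 2x2 tables: the same intervention and outcome estimated once
    # by an RCT and once by a QEO study.
    or_rct = odds_ratio(events_tx=30, n_tx=500, events_ctl=45, n_ctl=500)
    or_qeo = odds_ratio(events_tx=25, n_tx=480, events_ctl=50, n_ctl=470)
    print(discrepancy_indices(or_rct, or_qeo))

    # Made-up 'high'/'low' quality ratings from two of the three assessors.
    print(cohen_kappa(["high", "low", "high", "high", "low"],
                      ["high", "low", "low", "high", "low"]))
```

With the made-up ratings above, κ comes out at roughly 0.62, i.e. substantial but imperfect chance-corrected agreement between the two assessors.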



Responses to this report

No responses have been published.

 
