Report

Development and validation of methods for assessing the quality of diagnostic accuracy studies

Authors: Whiting P, Rutjes AW, Dinnes J, Reitsma J, Bossuyt PM, Kleijnen J

Journal: Health Technology Assessment Volume: 8 Issue: 25

Publication date: June 2004

DOI: 10.3310/hta8250

Citation:

Whiting P, Rutjes AW, Dinnes J, Reitsma J, Bossuyt PM, Kleijnen J. Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technol Assess 2004;8(25).




Abstract

Objectives

To develop a quality assessment tool which will be used in systematic reviews to assess the quality of primary studies of diagnostic accuracy.

Data sources

Electronic databases including MEDLINE, EMBASE, BIOSIS and the methodological databases of both CRD and the Cochrane Collaboration.

Review methods

Three systematic reviews were conducted to provide an evidence base for the development of the quality assessment tool. A Delphi procedure, informed by the evidence from these reviews, was used to develop the tool. A panel of nine experts in the area of diagnostic accuracy studies took part in the Delphi procedure to agree on the items to be included. Panel members were also asked to provide feedback on various other items and on whether they would like to see additional topic- and design-specific items developed. The Delphi procedure produced the quality assessment tool, named the QUADAS tool, which consists of 14 items. A background document was produced describing each item included in the tool and how each item should be scored.

Results

The reviews produced 28 possible items for inclusion in the quality assessment tool. It was found that the sources of bias supported by the most empirical evidence were variation by clinical and demographic subgroups, disease prevalence/severity, partial verification bias, clinical review bias and observer/instrument variation. There was also some evidence of bias for the effects of distorted selection of participants, absent or inappropriate reference standard, differential verification bias and review bias. The evidence for the effects of other sources of bias was insufficient to draw conclusions. The third review found that only one item, the avoidance of review bias, was included in more than 75% of tools. Spectrum composition, population recruitment, absent or inappropriate reference standard and verification bias were each included in 50-75% of tools. Other items were included in less than 50% of tools. The second review found that the quality assessment tool should have the potential to be discussed narratively, reported in a tabular summary, used as recommendations for future research, used to conduct sensitivity or regression analyses and used as criteria for inclusion in the review or a primary analysis. This suggested that some distinction is needed between high- and low-quality studies. Component analysis was considered the best approach to incorporate quality into systematic reviews of diagnostic studies and this was taken into consideration when developing the tool.

Conclusions

This project produced an evidence-based quality assessment tool to be used in systematic reviews of diagnostic accuracy studies. Through the various stages of the project the current lack of such a tool and the need for a systematically developed validated tool were demonstrated. Further work to validate the tool continues beyond the scope of this project. The further development of the tool by the addition of design- and topic-specific criteria is proposed.
