Development and validation of methods for assessing the quality of diagnostic accuracy studies
Authors: Whiting P, Rutjes AW, Dinnes J, Reitsma J, Bossuyt PM, Kleijnen J
Journal: Health Technology Assessment Volume: 8 Issue: 25
Publication date: June 2004
Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technol Assess 2004;8(25)
Objectives: To develop a quality assessment tool for use in systematic reviews to assess the quality of primary studies of diagnostic accuracy.
Data sources: Electronic databases, including MEDLINE, EMBASE, BIOSIS, and the methodological databases of the Centre for Reviews and Dissemination (CRD) and the Cochrane Collaboration.
Methods: Three systematic reviews were conducted to provide an evidence base for the development of the quality assessment tool. A Delphi procedure, informed by the findings of these reviews, was used to develop the tool. A panel of nine experts in diagnostic accuracy studies took part in the Delphi procedure to agree the items to be included. Panel members were also asked to provide feedback on various other items and on whether they would like to see additional topic- and design-specific items developed. The Delphi procedure produced the quality assessment tool, named QUADAS, which consisted of 14 items. A background document was produced describing each item included in the tool and how each item should be scored.
Results: The reviews produced 28 candidate items for inclusion in the quality assessment tool. The sources of bias supported by the most empirical evidence were variation by clinical and demographic subgroups, disease prevalence/severity, partial verification bias, clinical review bias, and observer/instrument variation. There was also some evidence of bias from distorted selection of participants, an absent or inappropriate reference standard, differential verification bias, and review bias. The evidence for the effects of other sources of bias was insufficient to draw conclusions. The third review found that only one item, the avoidance of review bias, was included in more than 75% of existing tools. Spectrum composition, population recruitment, absent or inappropriate reference standard, and verification bias were each included in 50-75% of tools; other items were included in fewer than 50% of tools. The second review found that the quality assessment tool should have the potential to be discussed narratively, reported in a tabular summary, used as recommendations for future research, used to conduct sensitivity or regression analyses, and used as criteria for inclusion in the review or a primary analysis. This suggested that some distinction is needed between high- and low-quality studies. Component analysis was considered the best approach to incorporating quality into systematic reviews of diagnostic studies, and this was taken into consideration when developing the tool.
Conclusions: This project produced an evidence-based quality assessment tool for use in systematic reviews of diagnostic accuracy studies. The various stages of the project demonstrated both the current lack of such a tool and the need for a systematically developed, validated tool. Further work to validate the tool continues beyond the scope of this project. The further development of the tool through the addition of design- and topic-specific criteria is proposed.