Evidence of bias and variation in diagnostic accuracy studies.

Title: Evidence of bias and variation in diagnostic accuracy studies.
Publication Type: Journal Article
Year of Publication: 2006
Authors: Rutjes AWS, Reitsma JB, Di Nisio M, Smidt N, van Rijn JC, Bossuyt PMM
Journal: CMAJ: Canadian Medical Association Journal (Journal de l'Association médicale canadienne)
Date Published: 2006 Feb 14
Keywords: Clinical Trials as Topic; Diagnosis, Differential; Diagnostic Techniques and Procedures; Humans; Meta-Analysis as Topic; Reproducibility of Results; Research Design

BACKGROUND: Studies with methodologic shortcomings can overestimate the accuracy of a medical test. We sought to determine and compare the direction and magnitude of the effects of a number of potential sources of bias and variation on estimates of diagnostic accuracy.

METHODS: We identified meta-analyses of the diagnostic accuracy of tests through an electronic search of the databases MEDLINE, EMBASE, DARE and MEDION (1999-2002). We included meta-analyses with at least 10 primary studies without preselection based on design features. Pairs of reviewers independently extracted study characteristics and original data from the primary studies. We used a multivariable meta-epidemiologic regression model to investigate the direction and strength of the association between 15 study features and estimates of diagnostic accuracy.

RESULTS: We selected 31 meta-analyses with 487 primary studies of test evaluations. Only 1 study had no design deficiencies. The quality of reporting was poor in most of the studies. We found significantly higher estimates of diagnostic accuracy in studies with nonconsecutive inclusion of patients (relative diagnostic odds ratio [RDOR] 1.5, 95% confidence interval [CI] 1.0-2.1) and retrospective data collection (RDOR 1.6, 95% CI 1.1-2.2). The estimates were highest in studies that had severe cases and healthy controls (RDOR 4.9, 95% CI 0.6-37.3). Studies that selected patients based on whether they had been referred for the index test, rather than on clinical symptoms, produced significantly lower estimates of diagnostic accuracy (RDOR 0.5, 95% CI 0.3-0.9). The variance between meta-analyses of the effect of design features was large to moderate for type of design (cohort v. case-control), the use of composite reference standards and the use of differential verification; the variance was close to zero for the other design features.
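The relative diagnostic odds ratio (RDOR) reported above compares the diagnostic odds ratio (DOR) of studies with a given design feature to that of studies without it; an RDOR above 1 means the feature is associated with higher accuracy estimates. A minimal sketch of the arithmetic, using hypothetical 2x2 counts (not data from this study):

```python
def diagnostic_odds_ratio(tp: int, fp: int, fn: int, tn: int) -> float:
    """DOR = (TP * TN) / (FP * FN): odds of a positive test in diseased
    vs. non-diseased subjects. Higher values mean better discrimination."""
    return (tp * tn) / (fp * fn)

# Hypothetical counts for illustration only.
# A study with a design shortcoming (e.g. nonconsecutive inclusion):
dor_flawed = diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80)   # 36.0
# A methodologically stronger study of the same test:
dor_sound = diagnostic_odds_ratio(tp=80, fp=30, fn=20, tn=70)    # ~9.33

# The RDOR is simply the ratio of the two DORs; values > 1 indicate
# that the design feature inflates the apparent accuracy.
rdor = dor_flawed / dor_sound
print(round(rdor, 2))
```

In the actual study, RDORs were estimated across many primary studies with a meta-epidemiologic regression model rather than from single pairs of studies, but the interpretation of the ratio is the same.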

INTERPRETATION: Shortcomings in study design can affect estimates of diagnostic accuracy, but the magnitude of the effect may vary from one situation to another. Design features and clinical characteristics of patient groups should be carefully considered by researchers when designing new studies and by readers when appraising the results of such studies. Unfortunately, incomplete reporting hampers the evaluation of potential sources of bias in diagnostic accuracy studies.

Alternate Journal: CMAJ