- Bossuyt Patrick M., Reitsma Johannes B., Bruns David E., Gatsonis Constantine A., Glasziou Paul P., Irwig Les, Lijmer Jeroen G., Moher David, Rennie Drummond, de Vet Henrica C. W., Kressel Herbert Y., Rifai Nader, Golub Robert M., Altman Douglas G., Hooft Lotty, Korevaar Daniël A., Cohen Jérémie F., Alonzo Todd, Azuara-Blanco Augusto, Bachmann Lucas, Blume Jeffrey, Boutron Isabelle, Bruns David, Büller Harry, Buntinx Frank, Byron Sarah, Chang Stephanie, Cooper Richelle, de Groot Joris, Deeks Jon, Dendukuri Nandini, Dinnes Jac, Fleming Kenneth, Guyatt Gordon, Heneghan Carl, Hilden Jørgen, Horvath Rita, Hunink Myriam, Hyde Chris, Ioannidis John, Janes Holly, Kleijnen Jos, Knottnerus André, Korevaar A., Lange Stefan, Leeflang Mariska, Lord Sally, Lumbreras Blanca, Macaskill Petra, Magid Erik. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ (Clinical research ed.) 2015;351:h5527. [PubMed]
- van Es Nick, van der Hulle Tom, van Es Josien, den Exter Paul L., Douma Renée A., Goekoop Robbert J., Mos Inge C. M., Galipienzo Javier, Kamphuisen Pieter W., Huisman Menno V., Klok Frederikus A., Büller Harry R., Bossuyt Patrick M. Wells Rule and d-Dimer Testing to Rule Out Pulmonary Embolism: A Systematic Review and Individual-Patient Data Meta-analysis. Annals of internal medicine 2016;165(4):253-261. [PubMed]
- Bossuyt Patrick M. M., Reitsma Johannes B., Linnet Kristian, Moons Karel G. M. Beyond Diagnostic Accuracy: The Clinical Utility of Diagnostic Tests. Clinical chemistry 2012;58(12):1636-1643. [PubMed]
- Cohen Jérémie F., Korevaar Daniël A., Gatsonis Constantine A., Glasziou Paul P., Hooft Lotty, Moher David, Reitsma Johannes B., de Vet Henrica C. W., Bossuyt Patrick M. STARD for Abstracts: essential items for reporting diagnostic accuracy studies in journal or conference abstracts. BMJ (Clinical research ed.) 2017;358:j3751. [PubMed]
- Ioannidis John P. A., Bossuyt Patrick M. M. Waste, Leaks, and Failures in the Biomarker Pipeline. Clinical chemistry 2017;63(5):963-972. [PubMed]
Patrick M. Bossuyt is Professor of Clinical Epidemiology at the University of Amsterdam. For ten years he chaired the Department of Clinical Epidemiology, Biostatistics and Bioinformatics at that university. He now heads the Division of Public Health and Clinical Methodology in the Academic Medical Center of the University of Amsterdam.
Patrick Bossuyt obtained a PhD from the University of Nijmegen (Netherlands) in 1990. Before his present appointment, he worked at the Center for Clinical Epidemiology and Decision Making of Erasmus University in Rotterdam. In 1991 he joined the then newly founded Department of Clinical Epidemiology and Biostatistics at the AMC.
His scientific work mirrors that of the department: it covers a broad range of topics and specialty fields in clinical research, and contributes to evidence-based medicine through sound research on the effectiveness of healthcare interventions.
Dr Bossuyt leads the Biomarker and Test Evaluation Research program. He spearheaded the STARD initiative for the improved reporting of diagnostic test accuracy studies and has acquired multiple grants on test and marker evaluation methods.
Dr Bossuyt has authored and co-authored more than 900 publications in peer-reviewed journals and serves on the editorial board of a number of these, such as Radiology, Clinical Chemistry, Medical Decision Making, Systematic Reviews and Diagnostic and Prognostic Research. He has supervised more than 85 successfully defended PhD theses. Dr Bossuyt chairs the Scientific Advisory Board of the Healthcare Institute and is a member of several national and international advisory committees.
Prof. P.M.M. Bossuyt PhD (Biomarker and Test Evaluation program (BiTE))
Who are we?
The Biomarker and Test Evaluation Program is an ongoing research program in the Academic Medical Center, the University Hospital of the University of Amsterdam, the Netherlands.
What is our mission?
The BiTE program aims to appraise and develop methods for evaluating medical tests and biomarkers, and to apply these methods in relevant clinical studies. In doing so, we wish to strengthen the evidence base for rational decision-making about the use of tests and test strategies in health care.
What do we want to achieve?
The BiTE program aims to become one of the leading scientific groups in this field. We expect this position to become evident from the number of frequently cited papers in high-impact journals, from citations in guidance documents for researchers and decision-makers, from invitations to contribute to scientific conferences and to the work of organizations in this area, and from other contributions to society.
What is Medical Test Evaluation?
Modern-day medicine and health care cannot operate without the use of medical tests and markers. These are procedures and techniques for acquiring additional information about a patient’s present condition or the likely course of that condition in the future. These tests and markers are used to make a diagnosis, to identify the likely cause of the patient’s complaints, to stage disease, to establish a prognosis, to select therapy, to evaluate the effects of therapy, to monitor for side-effects and adjust dosing if necessary, for surveillance after therapy, and for many other reasons.
Scientific progress in the biomedical sciences has improved our understanding of the origins of disease and the factors responsible for its development. In translational medicine, findings from basic research lead to the identification of putative new biomarkers that could be of help in the management of patients and healthy individuals. A growing awareness of the scarcity of health care resources has led to increased scrutiny of existing tests and markers and of the indications for their use.
The development of new laboratory tests, imaging modalities, genetic markers or other medical tests goes through several stages, in which new discoveries are carefully evaluated. In current evaluations, the focus lies predominantly on technical features and on analytical issues. The justification for the use of medical tests lies in their relevance for practice: biomarkers and other tests should improve the health of patients, relative to no testing or using other forms of testing, or they should lead to improvements in health care efficiency without compromising health outcome. In the end, the main dimension in clinical and health policy decisions is the clinical utility of medical tests.
What have we achieved so far?
Existing methods from (clinical) epidemiology for the evaluation of tests and markers were not very well developed in the 1980s.1 The dominant study design was, and still is, the diagnostic accuracy study, a type of research that evaluates how well the results of a test correspond to those of the clinical reference standard, the best available method for establishing the presence of a particular disease – or condition – in patients.2 Relevant but incomplete studies had been done on the sources of bias in these designs and on how the results of such studies could be used for medical decision-making. There was limited empirical research, little work on how to synthesize the results of studies, and almost no methodology for evaluating tests used for purposes other than making a diagnosis. Our group has evaluated and extended the existing methods for test evaluations, starting from diagnostic tests. Below we summarize our main findings.
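The accuracy measures such a study yields can be illustrated with a small sketch. The 2×2 counts below are invented for illustration, not taken from any study discussed here:

```python
# Hypothetical 2x2 table comparing an index test with the clinical
# reference standard (all counts are made up for this example).
tp, fp = 90, 20   # test positive: with / without the target condition
fn, tn = 10, 180  # test negative: with / without the target condition

sensitivity = tp / (tp + fn)  # proportion of diseased patients detected
specificity = tn / (tn + fp)  # proportion of non-diseased correctly cleared

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.90
```

As the passage notes, such estimates are not fixed properties of a test: the same calculation in a different patient spectrum or setting can give quite different numbers.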
Sources of bias and variability in diagnostic accuracy studies
In the 1980s and 1990s, it was quite common to read that diagnostic accuracy was a fixed property of a test, and that a single, simple diagnostic accuracy study, in which the results of the test were compared with the outcome of the reference standard, was all that was needed to obtain accuracy estimates. Even now, this idea is still very much prevalent in epidemiology textbooks.
In our research, we soon discovered that this thesis was not at all tenable. Diagnostic accuracy is far from fixed. Genuine sources of its variability include gender, age and other patient characteristics, as well as the clinical setting and the selection of patients based on previous testing. We systematically explored sources of variability as well as sources of bias. Our most influential paper is one in which we reported on meta-regression across a number of systematic reviews of test accuracy studies.3 Each review in this analysis had studies with and without shortcomings, and we estimated the magnitude of the average bias of these methodological limitations across the series of systematic reviews. The largest sources of bias were the use of healthy controls and the application of multiple reference standards to verify the results of the index tests. The findings of this study were later replicated by our own group, using more refined methodology and a larger database.4 In later studies, we have examined alternative, more efficient ways of staging accuracy studies.5
Reporting of diagnostic accuracy studies
In our work, we also discovered that methodological features were often not or only incompletely reported in diagnostic accuracy studies, making it difficult for reviewers and readers to appraise the validity of a study. Building on the very successful CONSORT initiative to improve the reporting of randomized clinical trials, we started an international project to develop recommendations for complete and transparent reporting of diagnostic accuracy studies.6 We spearheaded this STARD initiative to develop standards for the reporting of such studies. The STARD statement that came out of this initiative was first published in January 2003 in more than a dozen journals. It has since been published in twice that number of journals and has been adopted by more than 120 others (see www.stard-statement.org). We also evaluated the dissemination of the statement and observed a slow increase in more transparent reporting.
Quality Appraisal of Test Accuracy Studies
Our work, complemented by additional reviews of the methodological literature, led to the QUADAS initiative, authored by one of our PhD students. QUADAS is a generic tool, specifically developed to appraise the methodological quality of test accuracy studies. This instrument is now undergoing revision by an international group, and members of our team serve on the steering group for QUADAS 2.0.
Systematic Reviews of Test Accuracy Studies
Systematic reviews and meta-analysis of clinical studies can be used to obtain more precise estimates when several small studies addressing the same test in the same setting are available. Such reviews can also be useful to establish whether and how accuracy might vary across particular subgroups, and may provide summary estimates with a stronger generalizability than estimates from a single study.
Methods for systematic review and meta-analysis of randomized clinical trials had been available for some time, but similar methods for test accuracy studies were not immediately available. With our colleagues in Birmingham, Providence, Sydney and other places, we developed methods for each of the stages of a systematic review, including comprehensive searches, quality appraisal, evaluation of bias in reviews, and meta-analysis.8 The bivariate normal model for meta-analysis, developed by Reitsma, Zwinderman and other colleagues, is now – together with the hierarchical summary ROC method – one of the de facto standards for meta-analysis of test accuracy studies.9
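A full bivariate random-effects fit requires specialized statistical software. As a rough illustration of the logit-scale pooling that underlies such meta-analyses, here is a deliberately simplified sketch: it pools sensitivity and specificity separately with fixed-effect inverse-variance weights, whereas the bivariate model described above models both jointly with random effects and between-study correlation. All study counts are invented:

```python
import math

# Invented per-study 2x2 counts: (tp, fn, fp, tn).
studies = [
    (45, 5, 10, 90),    # study 1
    (30, 10, 8, 72),    # study 2
    (60, 15, 20, 155),  # study 3
]

def pooled_proportion(pairs):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.

    Each pair is (events, non_events); the approximate variance of the
    log-odds is 1/events + 1/non_events.
    """
    num = den = 0.0
    for events, non_events in pairs:
        logit = math.log(events / non_events)
        weight = 1.0 / (1.0 / events + 1.0 / non_events)
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to a proportion

sens = pooled_proportion([(tp, fn) for tp, fn, fp, tn in studies])
spec = pooled_proportion([(tn, fp) for tp, fn, fp, tn in studies])
print(f"pooled sensitivity ~ {sens:.2f}, pooled specificity ~ {spec:.2f}")
```

The joint bivariate model additionally estimates the correlation between logit-sensitivity and logit-specificity across studies, which this per-axis sketch ignores.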
The results of our research have been and are being used in the development of a manual for systematic reviews of test accuracy studies to be used in reviews for the Cochrane Collaboration, the largest and most influential international organization preparing, maintaining, and promoting systematic reviews to help people make well-informed decisions about health care.
Other forms of test evaluation
Although diagnostic accuracy studies have been the dominant theme in our research so far, we have also studied – and published on – other forms of test evaluation. These include the validity of randomized clinical trials, methods for developing monitoring schemes, participation in population screening, and patient outcomes in test evaluation.
Who do we collaborate with?
Within the Department of Clinical Epidemiology, Biostatistics and Bioinformatics of the Academic Medical Center, we collaborate with the Biostatistics Unit (Koos Zwinderman, Ronald Geskus) and the Dutch Cochrane Centre (Rob Scholten, Lotty Hooft).
Within the Academic Medical Center of the University of Amsterdam, we publish and share grants with colleagues from the following departments: Obstetrics and Gynecology, Fertility, Surgery, Gastroenterology, Internal Medicine, Radiology and others.
Within the Netherlands, we publish and share grants with the Julius Center at the University of Utrecht (Carl Moons), and the Department of Public Health of the Erasmus University of Rotterdam (Ewout Steyerberg).
In 2008-2010, we published and shared grants with the following international colleagues:
• Screening and Test Evaluation Program, Department of Public Health, University of Sydney, Australia
• Unit of Public Health, Epidemiology and Biostatistics, University of Birmingham, United Kingdom
• Fred Hutchinson Cancer Research Center, Seattle, WA, USA
• Health Sciences Center, McMaster University, Hamilton, Ontario, Canada
• Harvard Medical School, Boston, MA, USA
• Durham Veterans Affairs Medical Center and Duke University, Durham, USA
• Center for Statistical Sciences, Brown University, Providence, RI, USA
We share ongoing competitive grants with researchers in the following international partner institutions:
• Screening and Test Evaluation Program, Department of Public Health, University of Sydney, Australia
• World Health Organization, Geneva, Switzerland
• University of Dundee, United Kingdom
• Nasjonalt kunnskapssenter for helsetjenesten (Norwegian Knowledge Centre for the Health Services), Norway
• Fundació Privada Institut de Recerca de l'Hospital de la Santa Creu i Sant Pau, Spain
• Associazione per la Ricerca sull'Efficacia dell'Assistenza Sanitaria (Centro Cochrane Italiano), Italy
• Universitätsklinikum Freiburg, Germany
• National Institute for Health and Clinical Excellence, United Kingdom
• NHS Quality Improvement Scotland (SIGN), United Kingdom
• Kustannus Oy Duodecim, Finland
P. Tajik MD PhD
- European Union