PROBAST: a risk of bias tool for prediction modelling studies


RO 1.7


Rapid oral session 1: Risk of bias assessment tools


Sunday 4 October 2015 - 11:00 to 12:30


All authors in correct order:

Wolff R1, Whiting P2, Mallett S3, Riley R4, Westwood M1, Kleijnen J5, Moons K6
1 Kleijnen Systematic Reviews Ltd, United Kingdom
2 University of Bristol, United Kingdom
3 University of Birmingham, United Kingdom
4 University of Keele, United Kingdom
5 Kleijnen Systematic Reviews Ltd, United Kingdom / Maastricht University, The Netherlands
6 University of Utrecht, The Netherlands
Presenting author and contact person

Presenting author:

Robert Wolff

Contact person:

Abstract text
Background: Quality assessment of included studies is a crucial step in any systematic review (SR). Review and synthesis of prediction modelling studies is a relatively new and evolving area and a tool facilitating quality assessment for prognostic and diagnostic prediction modelling studies is needed. Objectives: To introduce PROBAST (prediction study risk of bias assessment tool), a tool for assessing the risk of bias (RoB) and applicability of prediction modelling studies in a SR. Methods: A Delphi process, involving 42 experts in the field of prediction research, was used until agreement on the content of the final tool was reached. Existing initiatives in the field of prediction research such as the REMARK (reporting recommendations for tumour marker prognostic studies) and TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guidelines formed part of the evidence base for the tool development. The scope of PROBAST was determined with consideration of existing tools, such as QUIPS (quality in prognostic studies) and QUADAS (Quality assessment of diagnostic accuracy studies). Results: After 6 rounds of the Delphi procedure, a final tool was developed that utilises a domain-based structure supported by signalling questions similar to QUADAS-2. PROBAST assesses the RoB and applicability of prediction modelling studies. RoB refers to the likelihood that a prediction model leads to distorted predictive performance for its intended use and targeted individuals. The predictive performance is typically evaluated using calibration, discrimination, and (re)classification. Applicability refers to the extent to which the prediction model from the primary study matches the SR question, for example in terms of the population or outcomes of interest. PROBAST comprises 5 domains (participant selection, outcome, predictors, sample size and flow, and analysis) and 22 signalling questions grouped within the domains. 
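As context for the performance measures named above, the following is a minimal sketch (not part of PROBAST; the function names and data are hypothetical) of how two of them are commonly computed on validation data: discrimination, summarised by the c-statistic, and calibration, here in its simplest form as calibration-in-the-large.

```python
def c_statistic(y_true, y_prob):
    """Discrimination: the probability that a randomly chosen event
    receives a higher predicted risk than a randomly chosen non-event
    (equivalent to the area under the ROC curve); ties count as 0.5."""
    events = [p for y, p in zip(y_true, y_prob) if y == 1]
    non_events = [p for y, p in zip(y_true, y_prob) if y == 0]
    if not events or not non_events:
        raise ValueError("need both events and non-events")
    concordant = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e in events for n in non_events
    )
    return concordant / (len(events) * len(non_events))


def calibration_in_the_large(y_true, y_prob):
    """Calibration-in-the-large: observed event rate minus mean
    predicted risk (0 indicates agreement on average)."""
    return sum(y_true) / len(y_true) - sum(y_prob) / len(y_prob)


# Hypothetical validation data: observed outcomes and model-predicted risks.
y = [1, 0, 1, 0, 0, 1, 0, 0]
p = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.3, 0.2]

print(c_statistic(y, p))                # 1.0 (every event ranked above every non-event)
print(calibration_in_the_large(y, p))   # small negative value: slight over-prediction on average
```

In practice such measures are reported within the primary studies themselves; a PROBAST assessment judges whether the way they were estimated (e.g. in the same data used to develop the model) risks producing distorted values.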
Conclusions: PROBAST can be used to assess the quality of prediction modelling studies included in a SR. The presentation will give an overview of the development process and introduce the final tool.