Poster Presentation NSW State Cancer Conference 2023

Systematic review of methodological conduct and risk of bias of Head and Neck cancer outcome prediction models, with recommendations for clinical practice. (#208)

Farhannah Aly 1,2,3, Christian Roenn Hansen 4,5,6,7, Daniel Al Mouiee 1,2,3, Purnima Sundaresan 8,9, Ali Haidar 1,3, Shalini Vinod 1,2, Lois Holloway 1,2,3,7
  1. Southwest Sydney Clinical Campus, University of New South Wales, Sydney, NSW, Australia
  2. Liverpool and Macarthur Cancer Therapy Centres, Southwest Sydney Local Health District, Sydney, NSW, Australia
  3. Medical Physics Research Group, Ingham Institute For Applied Medical Research, Liverpool, NSW, Australia
  4. Laboratory of Radiation Physics, Odense University Hospital, Odense, Denmark
  5. Department of Clinical Research, University of Southern Denmark, Odense, Denmark
  6. Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
  7. Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia
  8. Sydney West Radiation Oncology Network, Western Sydney Local Health District, Sydney, NSW, Australia
  9. Sydney Medical School, The University of Sydney, Sydney, NSW, Australia

Aims: Multiple outcome prediction models have been developed for Head and Neck Squamous Cell Carcinoma (HNSCC). This systematic review aimed to identify HNSCC outcome prediction model studies that can be used at the time of treatment decision-making, to assess their methodological quality, and to make recommendations for clinical practice and future studies.

Methods: Inclusion criteria were mucosal HNSCC prognostic prediction model studies (development and/or validation), published between 2000 and 2022, that incorporated clinically available variables and predicted tumour-related outcomes. Eligible publications were identified from PubMed and Embase. Methodological quality and risk of bias were assessed using the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS)1 and the Prediction Model Risk of Bias Assessment Tool (PROBAST)2. Eligible publications were categorised by study type for reporting.

Results: Sixty-four eligible publications were identified: 55 reported model developments, 37 reported external validations, and 28 reported both model development and external validation. CHARMS checklist items relating to participants, predictors, outcomes, handling of missing data, and some model development and evaluation procedures were generally well reported. Less well reported were measures accounting for model overfitting and measures of model performance, especially model calibration. Full model information was poorly reported (three of 55 model developments), specifically the model intercept, baseline survival or full model code. Most publications (54 of 55 model developments, 28 of 37 external validations) were found to be at high risk of bias, predominantly due to methodological issues in the PROBAST participants and analysis domains. Factors contributing to high risk of bias included small sample size, inappropriate categorisation of continuous predictors, exclusion of participants from the analysis, inappropriate handling of participants with missing data, selection of predictors based on univariable analysis, and failure to appropriately account for overfitting, underfitting and optimism in model performance.

Conclusion: Most eligible outcome prediction models contained methodological issues that introduce a high risk of bias, which may reduce accuracy in heterogeneous populations. Reporting of complete model information was also inadequate. Careful critical appraisal of outcome prediction model publications should be undertaken before clinical implementation. It is also recommended that future prediction model studies follow the TRIPOD reporting guidelines3 to ensure that sufficient information is provided for critical appraisal and independent external validation. Independent external validation studies in the local population and demonstration of clinical impact are essential for the clinical implementation of outcome prediction models.

  1. Moons KGM, de Groot JAH, Bouwmeester W, et al. Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies: The CHARMS Checklist. PLoS Med 2014;11:e1001744. https://doi.org/10.1371/journal.pmed.1001744.
  2. Wolff RF, Moons KGM, Riley RD, et al. PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies. Ann Intern Med 2019;170:51. https://doi.org/10.7326/M18-1376.
  3. Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): The TRIPOD Statement. Ann Intern Med 2015;162:55–63. https://doi.org/10.7326/M14-0697.