
Validation Framework for Epidemiological Models


This regulatory science tool provides Python-based software for evaluating computer models that predict the number of COVID-19 deaths or hospitalizations expected in a specific locality.

Technical Description

This tool comprises Python software for retrospective validation of epidemiological models, such as models that predict the number of COVID-19 deaths or hospitalizations that will occur in a particular locality in the near future. The tool provides a framework for quantifying the accuracy of model predictions, including how accurately the model predicted key quantities such as the date of the peak, the magnitude of the peak, and the time to recovery. The tool requires a ground truth dataset (such as actual recorded deaths due to COVID-19) and a specification of the model predictions, including the release date of each model version and the predictions made in each release. The tool can then be used to analyze the noisy ground truth data and infer, using Bayesian statistics, the true date of the peak, the magnitude of the peak, and the time to recovery. Next, the tool can be used to characterize the model's accuracy in predicting these quantities. The output of the tool is a set of validation scores that together characterize the predictive performance of the model. The tool was used to perform a comprehensive analysis of COVID-19 models in Dautel et al., Validation Framework for Epidemiological Models with Application to COVID-19 Models, PLOS Computational Biology 2023. Full details on the method, including definitions of each of the validation scores, are provided there.
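
The sketch below illustrates the general workflow described above; it is not the tool's actual interface. The data structures (GroundTruth, ModelRelease), the function names, and the simple smoothing-based peak estimate are hypothetical stand-ins for the Bayesian inference and scoring detailed in Dautel et al. (2023).

    # Minimal sketch of the retrospective-validation idea: compare a model
    # release's predicted peak against a peak estimated from noisy ground truth.
    from dataclasses import dataclass
    from datetime import date, timedelta

    import numpy as np


    @dataclass
    class GroundTruth:
        """Noisy reported values (e.g., daily COVID-19 deaths) for one locality."""
        dates: list[date]
        values: np.ndarray


    @dataclass
    class ModelRelease:
        """One model release: the date it was published and its forecast."""
        release_date: date
        predicted_dates: list[date]
        predicted_values: np.ndarray


    def estimate_peak(truth: GroundTruth, window: int = 7) -> tuple[date, float]:
        """Estimate the 'true' peak from noisy data with a moving average.

        The published framework infers the peak with a Bayesian model; a simple
        smoother is used here only to keep the sketch self-contained.
        """
        kernel = np.ones(window) / window
        smoothed = np.convolve(truth.values, kernel, mode="same")
        idx = int(np.argmax(smoothed))
        return truth.dates[idx], float(smoothed[idx])


    def peak_errors(release: ModelRelease, truth: GroundTruth) -> dict[str, float]:
        """Score one release against the estimated peak date and magnitude."""
        true_peak_date, true_peak_value = estimate_peak(truth)
        pred_idx = int(np.argmax(release.predicted_values))
        pred_peak_date = release.predicted_dates[pred_idx]
        pred_peak_value = float(release.predicted_values[pred_idx])
        return {
            "peak_date_error_days": (pred_peak_date - true_peak_date).days,
            "peak_magnitude_error": pred_peak_value - true_peak_value,
        }


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        start = date(2020, 3, 1)
        days = [start + timedelta(d) for d in range(120)]
        # Synthetic epidemic curve plus reporting noise as stand-in ground truth.
        curve = 400 * np.exp(-((np.arange(120) - 60) ** 2) / (2 * 15**2))
        truth = GroundTruth(days, curve + rng.normal(0, 20, 120))
        # A hypothetical model release made 30 days in, forecasting the remainder.
        release = ModelRelease(
            release_date=start + timedelta(30),
            predicted_dates=days[30:],
            predicted_values=curve[30:] * 1.1,  # model overshoots the peak by 10%
        )
        print(peak_errors(release, truth))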

Intended Purpose

The tool is intended to quantify the predictive accuracy of epidemiological models, including COVID-19 models. Results provided by this tool are relevant to public health decision makers who use predictions from mathematical models when forming public health policy. Results are also relevant to organizations using downstream models, such as models that predict medical device demand during surges based on epidemiological model predictions. Using the tool requires familiarity with Python programming, object-oriented programming, and Bayesian statistics.

Testing

Tool functionality has been comprehensively tested through a set of unit tests that are included in the Python package and can be re-run by new users. These tests confirm that each component within the tool has been implemented correctly and runs as intended on new machines. The overall process of applying the tool to evaluate the predictive accuracy of an epidemiological model is presented in Dautel et al., PLOS Computational Biology 2023.
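
As an illustration only, if the package follows a standard unittest layout, the bundled tests could be rediscovered and rerun along the following lines. The test directory name used here is an assumption, not a documented command; substitute the location given in the tool's documentation.

    import unittest

    # Hypothetical example: discover and re-run the unit tests bundled with the
    # installed package. "validation_framework/tests" is an assumed path.
    suite = unittest.defaultTestLoader.discover("validation_framework/tests")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    print("All tests passed." if result.wasSuccessful() else "Some tests failed.")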

Limitations

  • Only retrospective validation is possible, since a ground truth dataset of reported values is required to apply the workflow. Therefore, the validation framework cannot be used to validate a new model in the early stages of a new epidemic. However, validating previously developed models provides information about the reliability of specific modeling approaches, which could be used to support models in future epidemics.
  • There are some manual stages required in the overall validation workflow, including identification of ‘peak events’ (see Dautel et al., PLOS Computational Biology 2023 for full details).

Supporting Documentation

Contact

Tool Reference

  • In addition to citing relevant publications, please reference the use of this tool using RST24DP01.01.