An Effective Meaningful Way to Evaluate Survival Models

Shi Ang Qi*, Neeraj Kumar, Mahtab Farrokh, Weijie Sun, Li Hao Kuan, Rajesh Ranganath, Ricardo Henao, Russell Greiner*

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

2 Scopus citations

Abstract

One straightforward metric for evaluating a survival prediction model is the Mean Absolute Error (MAE): the average, over all subjects, of the absolute difference between the time predicted by the model and the true event time. Unfortunately, computing this is challenging because, in practice, the test set includes (right-)censored individuals, for whom we do not know when the event actually occurred. In this paper, we explore various metrics for estimating MAE on survival datasets that include (many) censored individuals. Moreover, we introduce a novel and effective approach for generating realistic semi-synthetic survival datasets to facilitate the evaluation of these metrics. Our analysis of the semi-synthetic datasets reveals that our proposed metric (MAE using pseudo-observations) ranks models accurately by their performance and often closely matches the true MAE; in particular, it is better than several alternative methods.
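
The abstract describes an MAE variant in which the unknown event time of each censored subject is replaced by a pseudo-observation. The sketch below illustrates one common construction of such a metric, assuming leave-one-out jackknife pseudo-observations of the Kaplan-Meier restricted mean survival time; the helper names (km_rmst, pseudo_observations, mae_pseudo) and the fixed horizon tau are illustrative assumptions, and the paper's exact MAE-PO definition may include additional adjustments or weighting.

```python
import numpy as np

def km_rmst(times, events, tau):
    # Restricted mean survival time: area under the Kaplan-Meier curve on [0, tau].
    order = np.lexsort((-events, times))      # sort by time; events before censorings at ties
    t, d = times[order], events[order]
    n = len(t)
    at_risk = n - np.arange(n)                # subjects at risk just before each observed time
    surv = np.cumprod(1.0 - d / at_risk)      # KM survival estimate just after each time
    grid = np.concatenate(([0.0], np.minimum(t, tau), [tau]))
    vals = np.concatenate(([1.0], surv))      # survival level on each step interval
    return float(np.sum(np.diff(grid) * vals))

def pseudo_observations(times, events, tau):
    # Jackknife pseudo-observation for subject i:
    #   po_i = n * RMST(all subjects) - (n - 1) * RMST(all subjects except i)
    n = len(times)
    full = km_rmst(times, events, tau)
    keep = np.ones(n, dtype=bool)
    po = np.empty(n)
    for i in range(n):
        keep[i] = False
        po[i] = n * full - (n - 1) * km_rmst(times[keep], events[keep], tau)
        keep[i] = True
    return po

def mae_pseudo(pred_times, times, events, tau=None):
    # MAE where censored subjects' unknown event times are replaced by
    # their pseudo-observations; uncensored subjects keep their observed times.
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    pred_times = np.asarray(pred_times, dtype=float)
    tau = times.max() if tau is None else tau
    surrogate = times.copy()
    censored = events == 0
    surrogate[censored] = pseudo_observations(times, events, tau)[censored]
    return float(np.mean(np.abs(pred_times - surrogate)))

# Toy usage: 6 subjects, event indicator 1 = observed, 0 = right-censored.
t = np.array([2.0, 5.0, 6.0, 8.0, 10.0, 12.0])
e = np.array([1, 0, 1, 0, 1, 1])
pred = np.array([3.0, 7.0, 5.5, 9.0, 9.5, 11.0])
print(mae_pseudo(pred, t, e))
```

Note that jackknife pseudo-observations need not lie between the censoring time and tau (they can even be negative), which is a known property of this estimator rather than a bug in the sketch.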

Original language: English (US)
Pages: 28244-28276
Number of pages: 33
State: Published - 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: Jul 23 2023 - Jul 29 2023

Conference

Conference: 40th International Conference on Machine Learning, ICML 2023
Country/Territory: United States
City: Honolulu
Period: 07/23/23 - 07/29/23

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
