Journal article, Journal of Machine Learning Research, 2022

Empirical Risk Minimization under Random Censorship

Abstract

We consider the classic supervised learning problem where a continuous non-negative random label Y (e.g. a random duration) is to be predicted, based upon observing a random vector X valued in R^d with d ≥ 1, by means of a regression rule with minimum least squares error. In various applications, ranging from industrial quality control to public health through credit risk analysis for instance, training observations can be right censored, meaning that, rather than on independent copies of (X, Y), statistical learning relies on a collection of n ≥ 1 independent realizations of the triplet (X, min{Y, C}, δ), where C is a non-negative random variable with unknown distribution modelling censoring, and δ = I{Y ≤ C} indicates whether the duration is right censored or not. As ignoring censoring in the risk computation may clearly lead to a severe underestimation of the target duration and jeopardize prediction, we consider a plug-in estimate of the true risk based on a Kaplan-Meier estimator of the conditional survival function of the censoring C given X, referred to as the Beran risk, in order to perform empirical risk minimization. It is established, under mild conditions, that the learning rate of minimizers of this biased/weighted empirical risk functional is of order O_P(log(n)/n) when ignoring model bias issues inherent to plug-in estimation, as can be attained in the absence of censoring. Beyond theoretical results, numerical experiments are presented in order to illustrate the relevance of the approach developed.
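The censoring-weighted empirical risk minimization described in the abstract can be illustrated with a short, self-contained sketch. The Python snippet below is a minimal illustration under a simplifying assumption: it plugs in an unconditional Kaplan-Meier estimate of the censoring survival function G(t) = P(C > t) in place of the conditional (Beran) estimator of the survival function of C given X analysed in the paper. All function names (km_censoring_survival, censoring_weighted_least_squares) and the toy data are illustrative and not taken from the authors' code.

# Minimal sketch: inverse-probability-of-censoring-weighted least squares,
# using an unconditional Kaplan-Meier plug-in for the censoring survival
# function instead of the conditional (Beran) estimator studied in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

def km_censoring_survival(z, delta):
    """Kaplan-Meier estimate of G(t) = P(C > t), treating censoring as the event.

    z     : observed times min(Y, C)
    delta : 1 if the duration Y is observed (uncensored), 0 if censored
    Returns a function t -> G_hat(t).
    """
    order = np.argsort(z)
    z_sorted = z[order]
    cens_event = 1.0 - delta[order]          # censoring indicator
    n = len(z)
    at_risk = n - np.arange(n)               # size of the risk set at each ordered time
    g_hat = np.cumprod(1.0 - cens_event / at_risk)

    def G(t):
        # right-continuous step function evaluated at times t
        idx = np.searchsorted(z_sorted, t, side="right") - 1
        out = np.where(idx >= 0, g_hat[np.clip(idx, 0, n - 1)], 1.0)
        return np.clip(out, 1e-8, 1.0)       # guard against division by zero
    return G

def censoring_weighted_least_squares(X, z, delta):
    """Minimize the weighted empirical risk
    (1/n) * sum_i delta_i / G_hat(z_i) * (z_i - f(x_i))^2
    over linear rules f; censored observations receive weight 0."""
    G = km_censoring_survival(z, delta)
    w = delta / G(z)
    model = LinearRegression()
    model.fit(X, z, sample_weight=w)
    return model

# Toy usage on simulated data (illustrative only).
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
Y = rng.exponential(scale=np.exp(0.3 * X[:, 0] + 1.0))   # true durations
C = rng.exponential(scale=3.0, size=n)                    # censoring times
z = np.minimum(Y, C)
delta = (Y <= C).astype(float)
model = censoring_weighted_least_squares(X, z, delta)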

Dates and versions

hal-03559365 , version 1 (06-02-2022)

Identifiers

Cite

Guillaume Ausset, Stéphan Clémençon, François Portier. Empirical Risk Minimization under Random Censorship. Journal of Machine Learning Research, 2022, 23 (1), pp.168-226. ⟨10.5555/3586589.3586594⟩. ⟨hal-03559365⟩