Weighted Empirical Risk Minimization: Transfer Learning based on Importance Sampling
Abstract
We consider statistical learning problems in which the distribution $P'$ of the training observations $Z'_1, \ldots, Z'_n$ differs from the distribution $P$ involved in the risk one seeks to minimize (referred to as the test distribution) but is still defined on the same measurable space as $P$ and dominates it. In the unrealistic case where the likelihood ratio $\Phi(z) = dP/dP'(z)$ is known, one may straightforwardly extend the Empirical Risk Minimization (ERM) approach to this specific transfer learning setup using the same idea as that behind Importance Sampling, by minimizing a weighted version of the empirical risk functional computed from the `biased' training data $Z'_i$ with weights $\Phi(Z'_i)$. Although the importance function $\Phi(z)$ is generally unknown in practice, we show that, in various frequently encountered situations, it takes a simple form and can be directly estimated from the $Z'_i$'s and some auxiliary information on the statistical population $P$. By means of linearization techniques, we then prove that the generalization capacity of the aforementioned approach is preserved when plugging the resulting estimates of the $\Phi(Z'_i)$'s into the weighted
empirical risk. Beyond these theoretical guarantees, numerical results provide strong
empirical evidence of the relevance of the approach promoted in this article.
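To fix ideas, the weighted empirical risk referred to above can be written as follows; this is only a sketch, and the notation $\ell$ for the loss, $\theta$ for a candidate decision rule and $\Theta$ for the class it ranges over are not introduced in this abstract:
\[
\widehat{\mathcal{R}}_{\Phi, n}(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \Phi(Z'_i)\, \ell(\theta, Z'_i),
\qquad
\widehat{\theta}_n \in \operatorname*{arg\,min}_{\theta \in \Theta}\, \widehat{\mathcal{R}}_{\Phi, n}(\theta).
\]
Since $\Phi = dP/dP'$, the importance sampling identity $\mathbb{E}_{P'}[\Phi(Z')\, \ell(\theta, Z')] = \mathbb{E}_{P}[\ell(\theta, Z)]$ makes the weighted risk an unbiased estimate of the test risk; when $\Phi$ is unknown, a plug-in estimate $\widehat{\Phi}$ would replace it in the sum above.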