Conference paper, 2023

Few-shot Semantic Image Synthesis with Class Affinity Transfer

Abstract

Semantic image synthesis aims to generate photorealistic images given a semantic segmentation map. Despite much recent progress, training such models still requires large datasets of images annotated with per-pixel label maps that are extremely tedious to obtain. To alleviate the high annotation cost, we propose a transfer method that leverages a model trained on a large source dataset to improve learning on small target datasets via estimated pairwise relations between source and target classes. The class affinity matrix is introduced as a first layer of the source model to make it compatible with the target label maps, and the source model is then further finetuned for the target domain. To estimate the class affinities we consider different approaches to leverage prior knowledge: semantic segmentation on the source domain, textual label embeddings, and self-supervised vision features. We apply our approach to GAN-based and diffusion-based architectures for semantic synthesis. Our experiments show that the different ways to estimate class affinity can be effectively combined, and that our approach significantly improves over existing state-of-the-art transfer approaches for generative image models.
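To make the core idea concrete, the sketch below (not the authors' code) shows how a class affinity matrix can act as the first layer of a pretrained source model: it maps one-hot target-domain label maps onto soft source-class maps before the rest of the network. All names (ClassAffinityLayer, n_src, n_tgt) and the cosine-similarity initialisation from label embeddings are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassAffinityLayer(nn.Module):
    """Maps one-hot target label maps to soft source-class maps (sketch)."""

    def __init__(self, affinity_init: torch.Tensor):
        super().__init__()
        # affinity_init: (n_src, n_tgt), e.g. estimated from a segmentation
        # network, textual label embeddings, or self-supervised features.
        self.affinity = nn.Parameter(affinity_init.clone())

    def forward(self, target_labels: torch.Tensor) -> torch.Tensor:
        # target_labels: (B, n_tgt, H, W) one-hot label maps.
        # Normalise so each target class distributes its mass over source
        # classes, then mix the label channels accordingly.
        weights = F.softmax(self.affinity, dim=0)           # (n_src, n_tgt)
        return torch.einsum("st,bthw->bshw", weights, target_labels)


# Hypothetical usage: initialise the affinity from cosine similarities of
# label-name embeddings, prepend the layer to a pretrained source synthesis
# model, then finetune on the small target dataset.
n_src, n_tgt, dim = 150, 20, 512
src_emb = F.normalize(torch.randn(n_src, dim), dim=1)
tgt_emb = F.normalize(torch.randn(n_tgt, dim), dim=1)
affinity_init = src_emb @ tgt_emb.T                         # (n_src, n_tgt)

affinity_layer = ClassAffinityLayer(affinity_init)
target_map = F.one_hot(torch.randint(0, n_tgt, (2, 64, 64)), n_tgt)
target_map = target_map.permute(0, 3, 1, 2).float()         # (2, n_tgt, 64, 64)
source_map = affinity_layer(target_map)                     # (2, n_src, 64, 64)
# source_map can now be fed to the source model's label-map input.
```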

Dates and versions

hal-04205025, version 1 (12-09-2023)

Identifiers

Cite

Marlène Careil, Jakob Verbeek, Stéphane Lathuilière. Few-shot Semantic Image Synthesis with Class Affinity Transfer. IEEE Conference on Computer Vision and Pattern Recognition, 2023, Vancouver (BC), Canada. ⟨hal-04205025⟩