Journal article, ACM Transactions on Multimedia Computing, Communications and Applications, Year: 2024

Autoregressive GAN for Semantic Unconditional Head Motion Generation

Abstract

In this work, we address the task of unconditional head motion generation to animate still human faces in a low-dimensional semantic space from a single reference pose. Unlike traditional audio-conditioned talking head generation, which seldom puts emphasis on realistic head motions, we devise a GAN-based architecture that learns to synthesize rich head motion sequences over long durations while keeping error accumulation low. In particular, the autoregressive generation of incremental outputs ensures smooth trajectories, while a multi-scale discriminator on input pairs drives generation toward better handling of high- and low-frequency signals and less mode collapse. We experimentally demonstrate the relevance of the proposed method and show its superiority over models that attained state-of-the-art performance on similar tasks.
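To make the autoregressive incremental-output idea concrete, below is a minimal sketch of a generator that, at each time step, predicts a small pose increment from the current pose and a noise vector, and accumulates it into the trajectory. The module choice (a GRU cell), the pose and noise dimensions, and all names are illustrative assumptions, not the authors' implementation, and the discriminator and training loop are omitted.

```python
import torch
import torch.nn as nn

class IncrementalHeadMotionGenerator(nn.Module):
    """Toy autoregressive generator: at each step it predicts a pose
    increment from the current pose and a noise vector; the new pose is
    the running sum of increments. Layers and dimensions are illustrative."""

    def __init__(self, pose_dim: int = 6, noise_dim: int = 16, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRUCell(pose_dim + noise_dim, hidden_dim)
        self.to_delta = nn.Linear(hidden_dim, pose_dim)

    def forward(self, ref_pose: torch.Tensor, num_steps: int) -> torch.Tensor:
        batch = ref_pose.size(0)
        hidden = torch.zeros(batch, self.rnn.hidden_size, device=ref_pose.device)
        pose = ref_pose
        trajectory = [pose]
        for _ in range(num_steps):
            noise = torch.randn(batch, self.rnn.input_size - pose.size(1),
                                device=ref_pose.device)
            hidden = self.rnn(torch.cat([pose, noise], dim=-1), hidden)
            delta = self.to_delta(hidden)      # small incremental output
            pose = pose + delta                # accumulation keeps trajectories smooth
            trajectory.append(pose)
        return torch.stack(trajectory, dim=1)  # (batch, num_steps + 1, pose_dim)

# Example: generate 100 frames of motion from a single reference pose.
gen = IncrementalHeadMotionGenerator()
ref = torch.zeros(2, 6)                        # batch of 2 reference poses
motion = gen(ref, num_steps=100)
print(motion.shape)                            # torch.Size([2, 101, 6])
```

Predicting increments rather than absolute poses is what keeps consecutive frames close to each other; in the paper this property is credited with limiting error accumulation over long sequences.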
Main file: SUHMo.pdf (5.66 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03833759 , version 1 (28-10-2022)
hal-03833759 , version 2 (13-04-2023)
hal-03833759 , version 3 (19-07-2023)

Cite

Louis Airale, Xavier Alameda-Pineda, Stéphane Lathuilière, Dominique Vaufreydaz. Autoregressive GAN for Semantic Unconditional Head Motion Generation. ACM Transactions on Multimedia Computing, Communications and Applications, 2024, pp.1-11. ⟨10.1145/3635154⟩. ⟨hal-03833759v3⟩