Conference paper, 2013

Modeling Multimodal Behaviors from Speech Prosody

Abstract

Head and eyebrow movements are an important means of communication and are highly synchronized with speech prosody. Endowing virtual agents with synchronized verbal and nonverbal behavior enhances their communicative performance. In this paper, we propose an animation model for a virtual agent based on a statistical model linking speech prosody and facial movement. A fully parameterized Hidden Markov Model is first trained to capture the tight relationship between speech and the facial movements of a human face extracted from a video corpus, and is then used to automatically drive the virtual agent's behaviors from speech signals. The correlation between head and eyebrow movements is also taken into account when building the model. Subjective and objective evaluations were conducted to validate this model.
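To make the prosody-to-motion idea concrete, the sketch below shows a much simplified stand-in for the approach described in the abstract: a plain Gaussian HMM (via hmmlearn) trained on prosody features, with each hidden state mapped to the average head/eyebrow motion of its training frames. This is not the paper's fully parameterized HMM; all feature names, dimensions, and data here are illustrative assumptions.

# Minimal sketch (assumed setup, not the authors' model): a Gaussian HMM over
# prosody features, plus a per-state average head/eyebrow motion lookup.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Hypothetical per-frame features standing in for a real corpus:
#   prosody: e.g. F0 and energy                      -> (n_frames, 2)
#   motion : e.g. head pitch/yaw/roll + brow raise   -> (n_frames, 4)
prosody_train = rng.normal(size=(2000, 2))
motion_train = rng.normal(size=(2000, 4))

# 1) Learn an HMM over the prosody stream alone.
hmm = GaussianHMM(n_components=8, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(prosody_train)

# 2) For each hidden state, store the mean motion of the frames assigned to it,
#    giving a coarse prosody-state -> facial-movement mapping.
states = hmm.predict(prosody_train)
state_motion = np.vstack([
    motion_train[states == k].mean(axis=0) if np.any(states == k)
    else np.zeros(motion_train.shape[1])
    for k in range(hmm.n_components)
])

# 3) Drive the agent from new speech: decode states from prosody, look up motion.
prosody_test = rng.normal(size=(300, 2))
predicted_motion = state_motion[hmm.predict(prosody_test)]
print(predicted_motion.shape)  # (300, 4): frame-by-frame head/eyebrow targets

A real system along the lines of the paper would additionally model the joint dependence of head and eyebrow trajectories and condition the HMM parameters on prosodic context, rather than using this simple state-mean lookup.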

Dates and versions

hal-02412034, version 1 (15-12-2019)


Cite

Yu Ding, Catherine Pelachaud, Thierry Artières. Modeling Multimodal Behaviors from Speech Prosody. IVA 2013 - 13th International Conference on Intelligent Virtual Agents, Aug 2013, Edinburgh, United Kingdom. pp.217-228, ⟨10.1007/978-3-642-40415-3_19⟩. ⟨hal-02412034⟩