Laughter Animation Synthesis
Abstract
Laughter is an important communicative signal in human-human interaction. However, very few attempts have been made to model laughter animation synthesis for virtual characters. This paper reports our work on modeling hilarious laughter. We have developed a generator for face and body motions that takes as input a sequence of laughter pseudo-phonemes and each pseudo-phoneme's duration. Lip and jaw movements are further driven by laughter prosodic features. The proposed generator first learns the relationship between the input signals (pseudo-phonemes and acoustic features) and human motions; the learnt generator can then be used to automatically produce laughter animation in real time. Lip and jaw motion synthesis is based on an extension of Gaussian models, the contextual Gaussian model.
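As a rough illustration of this idea, a contextual Gaussian model can be viewed as a Gaussian whose mean is a function of a context vector, here prosodic features such as pitch and energy. The Python sketch below fits such a model on toy data by least squares; the affine parameterisation of the mean and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: prosodic context (e.g. [pitch, energy]) paired with
# observed lip/jaw parameters for one pseudo-phoneme class (all made up).
contexts = rng.normal(size=(200, 2))
true_W = np.array([[0.8, 0.1], [0.2, 0.5]])   # hidden linear dependency
motions = contexts @ true_W.T + rng.normal(scale=0.05, size=(200, 2))

# Fit a contextual Gaussian: the mean is an affine function of the context,
# mu(theta) = W @ theta + b, with a context-independent covariance Sigma.
X = np.hstack([contexts, np.ones((len(contexts), 1))])  # append bias column
params, *_ = np.linalg.lstsq(X, motions, rcond=None)
W, b = params[:-1].T, params[-1]
Sigma = np.cov((motions - X @ params).T)                # shared covariance

def sample_motion(theta, rng):
    """Draw lip/jaw parameters for one frame given a prosodic context."""
    return rng.multivariate_normal(W @ theta + b, Sigma)

print(sample_motion(np.array([0.5, -0.2]), rng))
```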
Head and eyebrow motion synthesis is based on selecting and concatenating motion segments from motion capture data of human laughter.
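A minimal sketch of such segment selection and concatenation is given below, assuming duration-based matching and a short cross-fade at segment boundaries; the toy motion library, the selection criterion, and the blend length are stand-ins for the paper's actual selection cost and motion capture data.

```python
import numpy as np

# Toy motion-capture library: per pseudo-phoneme label, a few recorded
# head-pitch trajectories, one value per frame (25 fps assumed here).
FPS = 25
library = {
    "fricative": [np.sin(np.linspace(0, 3, 20)), np.sin(np.linspace(0, 2, 30))],
    "vowel":     [np.cos(np.linspace(0, 4, 40)), np.cos(np.linspace(0, 2, 25))],
}

def select_segment(label, duration_s):
    """Pick the library segment whose length best matches the target duration."""
    target = int(round(duration_s * FPS))
    return min(library[label], key=lambda seg: abs(len(seg) - target))

def concatenate(segments, blend=5):
    """Join segments, cross-fading `blend` frames at each boundary for continuity."""
    out = segments[0]
    for seg in segments[1:]:
        n = min(blend, len(out), len(seg))
        w = np.linspace(1.0, 0.0, n)
        out = np.concatenate([out[:-n], w * out[-n:] + (1 - w) * seg[:n], seg[n:]])
    return out

# Input: a pseudo-phoneme sequence with durations, as the generator receives it.
laugh = [("fricative", 0.8), ("vowel", 1.2), ("fricative", 1.0)]
motion = concatenate([select_segment(lbl, d) for lbl, d in laugh])
print(len(motion), "frames")
```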
Torso and shoulder movements are in turn derived from the head motion by a PD controller.
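The sketch below shows how a PD controller can make a shoulder angle lag behind and smooth a head trajectory rather than copy it; the gains, frame rate, and single-angle setup are hypothetical choices for illustration, not the paper's tuning.

```python
import numpy as np

# PD gains and frame step (hypothetical values, not taken from the paper).
KP, KD = 40.0, 8.0
DT = 1.0 / 25.0

def track_with_pd(head_angles):
    """Make a shoulder angle follow the head trajectory via a PD controller.

    Each frame, the controller outputs an acceleration proportional to the
    position error (P term) and the velocity error (D term), so the shoulder
    lags behind and smooths the head motion instead of copying it exactly."""
    pos, vel = head_angles[0], 0.0
    prev_target = head_angles[0]
    out = []
    for target in head_angles:
        target_vel = (target - prev_target) / DT
        acc = KP * (target - pos) + KD * (target_vel - vel)
        vel += acc * DT            # semi-implicit Euler integration
        pos += vel * DT
        out.append(pos)
        prev_target = target
    return np.array(out)

head = 0.2 * np.sin(np.linspace(0, 6 * np.pi, 150))  # toy head pitch curve (rad)
shoulder = track_with_pd(head)
print(shoulder[:5])
```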
Our multimodal laughter behavior generator has been evaluated through a perceptual study involving the interaction of a human and an agent telling jokes to each other.