Real-time Visual Prosody for Interactive Virtual Agents
Abstract
Speakers accompany their speech with incessant, subtle head
movements. It is important to implement such "visual prosody" in virtual agents, not only to make their behavior more natural, but also because it has been shown to help listeners understand speech. We contribute a visual prosody model for interactive virtual agents that are designed to hold live, non-scripted interactions with humans and therefore must use Text-To-Speech (TTS) rather than recorded speech. We present our method for creating visual prosody online from continuous TTS output, and we report results from three crowdsourcing experiments carried out to assess whether, and to what extent, it enhances the interaction experience with an agent.