Expressive virtual agent for human-machine interaction
06/04/2009
Speaker(s): Catherine Pelachaud (CNRS - Télécom ParisTech)
In the past few years we have been working on a system for expressive Embodied Conversational Agents. In particular, we have developed a model of multimodal behaviour that includes behaviour dynamism and complex facial expressions. The first feature refers to the qualitative execution of behaviours: our model is based on perceptual studies and encompasses several parameters that modulate the multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach in which a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions.

Communicating with a virtual humanoid agent implies that it should be able both to speak and to listen. We have therefore started to work on a listener model. In a conversation, not only the speaker but also the listener is an active participant: both interactants send verbal and nonverbal messages. Through facial expressions, head movements, or even gaze, the listener indicates its attitude toward what the speaker says: whether it likes or dislikes it, agrees or disagrees, understands or not, etc. In this talk we will present ongoing work on the creation of a virtual humanoid able to communicate with human users.
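To give a flavour of the componential approach described above, here is a minimal sketch (not the speaker's actual implementation): a complex expression is assembled by assigning each facial area to a source expression and copying that expression's parameters for the area. The region names and parameter values below are invented for illustration.

```python
# Illustrative sketch of a componential facial-expression model.
# Each expression maps facial regions (hypothetical names: brows,
# eyes, mouth) to parameter vectors, e.g. simplified action-unit
# intensities. All values here are invented.
EXPRESSIONS = {
    "sadness": {"brows": [0.8, 0.0], "eyes": [0.3, 0.1], "mouth": [0.0, 0.6]},
    "joy":     {"brows": [0.0, 0.2], "eyes": [0.7, 0.0], "mouth": [0.9, 0.0]},
}

def combine(assignment):
    """Build a new expression by picking, for each facial region,
    the parameters of the source expression assigned to it."""
    return {region: EXPRESSIONS[source][region]
            for region, source in assignment.items()}

# Example: a "masked" expression, with felt sadness in the upper
# face and a displayed smile in the lower face.
masked = combine({"brows": "sadness", "eyes": "sadness", "mouth": "joy"})
```

In this toy model, `masked` keeps the sad brows and eyes of `sadness` while taking the smiling mouth of `joy`, which is the kind of region-wise composition the componential approach relies on.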
Thomas.Baerecke (at) lip6.fr