Modelling multimodal expression of emotion in a virtual agent

C Pelachaud - Philosophical Transactions of the Royal Society B, 2009 - royalsocietypublishing.org
Over the past few years we have been developing an expressive embodied conversational agent system. In particular, we have developed a model of multimodal behaviours that includes dynamism and complex facial expressions. The first feature refers to the qualitative execution of behaviours. Our model is based on perceptual studies and encompasses several parameters that modulate multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach where a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions. So far they have been designed statically, typically at their apex. Only full-blown expressions could be modelled. To overcome this limitation, we have defined a representation scheme that describes the temporal evolution of the expression of an emotion. It is no longer represented by a static definition but by a temporally ordered sequence of multimodal signals.
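The abstract describes two representational ideas: a componential model in which a new facial expression is assembled from the facial areas of existing expressions, and a temporal scheme in which an emotion is encoded as an ordered sequence of multimodal signals rather than a single static apex. The paper itself does not publish code, so the following is only a minimal sketch of how such representations might look; all identifiers, facial-area names, and signal labels are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Expression:
    """A facial expression described per facial area (hypothetical labels)."""
    name: str
    areas: Dict[str, str]  # facial area -> behaviour/action label

def compose(name: str, upper: "Expression", lower: "Expression") -> "Expression":
    """Componential combination (sketch): take upper-face areas from one
    expression and lower-face areas from another to form a blended expression."""
    areas = {
        "brows": upper.areas["brows"],
        "eyes": upper.areas["eyes"],
        "cheeks": lower.areas["cheeks"],
        "mouth": lower.areas["mouth"],
    }
    return Expression(name, areas)

@dataclass
class TimedSignal:
    """One multimodal signal with its time interval (seconds from onset)."""
    modality: str   # e.g. "face", "gaze", "head"
    signal: str     # label of the behaviour shown on that modality
    start: float
    end: float

@dataclass
class EmotionSequence:
    """An emotion represented as a temporally ordered sequence of signals,
    rather than a single static (apex) expression."""
    emotion: str
    signals: List[TimedSignal] = field(default_factory=list)

    def ordered(self) -> List[TimedSignal]:
        return sorted(self.signals, key=lambda s: s.start)

# Example usage (all labels invented for illustration):
sadness = Expression("sadness", {"brows": "inner-brow-raise", "eyes": "lids-droop",
                                 "cheeks": "neutral", "mouth": "lip-corner-depress"})
joy = Expression("joy", {"brows": "neutral", "eyes": "lid-tighten",
                         "cheeks": "cheek-raise", "mouth": "smile"})
masked = compose("masked-sadness", upper=sadness, lower=joy)

seq = EmotionSequence("embarrassment", [
    TimedSignal("gaze", "gaze-aversion", 0.0, 0.8),
    TimedSignal("face", masked.name, 0.3, 1.5),
    TimedSignal("head", "head-down", 0.5, 1.2),
])
print([s.signal for s in seq.ordered()])
```

The point of the sketch is only to make the two claims concrete: composition operates over facial areas rather than whole expressions, and the temporal scheme orders heterogeneous signals (face, gaze, head) in time instead of storing one static definition.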