Multispace Behavioral Model for Face-Based Affective Social Agents
1 Carleton School of Information Technology, Carleton University, Ottawa, ON K1S 5B6, Canada
2 School of Interactive Arts & Technology, Simon Fraser University, Surrey, BC V3T 0A3, Canada
EURASIP Journal on Image and Video Processing 2007, 2007:048757. doi:10.1155/2007/48757. Published: 7 March 2007
This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces: knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature level. Personality and mood draw on findings in behavioral psychology to relate the perception of personality types and emotional states to facial actions and expressions through two-dimensional models of personality and emotion. Knowledge encapsulates the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry space provides an MPEG-4-compatible set of parameters for low-level control, the behavioral extensions available through the three higher-level spaces provide flexible means of designing complex personality types, facial expressions, and dynamic interactive scenarios.
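The layered architecture the abstract describes (personality and mood spaces driving a lower-level geometry space) can be sketched in code. The following is a minimal illustration only: the class names, the specific two-dimensional axes, the blending weights, and the output parameter names are all assumptions for the sake of the sketch, not the paper's actual model or MPEG-4 parameter set.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the multispace model: two-dimensional
# personality and mood spaces map down to facial-feature-level values.
# All names and weights below are illustrative assumptions.

@dataclass
class Personality:
    # Assumed 2D personality axes (e.g. affiliation / dominance).
    affiliation: float = 0.0
    dominance: float = 0.0

@dataclass
class Mood:
    # Assumed 2D emotion axes (e.g. valence / arousal).
    valence: float = 0.0
    arousal: float = 0.0

@dataclass
class Knowledge:
    # Tasks and decision rules; in the paper these are expressed
    # in a specially designed XML-based language.
    tasks: list = field(default_factory=list)

def geometry_parameters(p: Personality, m: Mood) -> dict:
    """Blend the higher-level spaces into facial-feature-level values,
    stand-ins for MPEG-4-style low-level animation parameters."""
    smile = max(0.0, 0.6 * m.valence + 0.4 * p.affiliation)
    brow_raise = max(0.0, 0.5 * m.arousal)
    return {"smile": smile, "brow_raise": brow_raise}

params = geometry_parameters(
    Personality(affiliation=0.8),
    Mood(valence=0.5, arousal=0.2),
)
```

The point of the sketch is the separation of concerns: personality and mood are edited independently, and only the final mapping touches geometry-level parameters, mirroring the paper's split between behavioral spaces and the MPEG-4-compatible geometry space.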