Character Animation

“Movement is the essence of animation.”

– The Technique of Film Animation, J. Halas and R. Manvell

Creating motion for human-like characters is an interesting and important problem. Despite the versatility of human movement, two types of behaviors are essential: locomotion (moving around in an environment) and manipulation (using hands and arms to manipulate objects).

Locomotion

For locomotion, obstacles in the environment can often be tightly bounded by prisms perpendicular to the floor. We can exploit this fact and project the three-dimensional geometry of the environment and the character to a two-dimensional plane. A trajectory in the plane is then computed and tracked by a controller. We have used this idea in the Human Figure Animation Project to control the behavior of a stick figure whose motion is generated by interpolating motion capture data. The combination of motion planning and motion capture techniques renders motion that is both versatile and realistic for character animation.
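The projection-and-search idea above can be sketched in a few lines. This is an illustrative toy, not the project's actual implementation: the occupancy grid, the obstacle layout, and the `plan_path` helper are all assumptions, standing in for the projected 2D environment and the planner that computes a trajectory in the plane.

```python
# Illustrative sketch: plan a 2D path for a character after projecting
# prism-shaped obstacles onto the floor plane (grid cells with value 1).
import heapq

def plan_path(grid, start, goal):
    """A* search on an occupancy grid; returns a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no collision-free path exists

# A small environment: 0 = free floor, 1 = projected obstacle prism.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
path = plan_path(grid, (0, 0), (3, 3))
```

In the project, the resulting planar trajectory is then tracked by a controller, and the character's motion along it is generated by interpolating motion capture data.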

Manipulating Objects

Manipulating objects (QuickTime movie, 2.7 MB)

Manipulating objects with hands and arms is a difficult task. Because of the complex geometric interaction between the character and the environment, keyframe interpolation requires extensive user intervention and quickly becomes tedious, and more advanced methods such as dynamic simulation and space-time constraints are not applicable. Motion planners, in contrast, are well suited to this task: they create collision-free motion with little user intervention and easily handle task-level commands such as “pick up the apple”.

Geometric model of the character made available by Seamless Solutions, Inc.

The emergence of fast randomized motion planners has allowed the motion to be computed at interactive rates. The movie and the images above demonstrate a sequence of movements generated by our motion planner. We specify only three configurations for the arm: the initial configuration, the grasp configuration, and the final goal. The motion planner then automatically generates the collision-free motion without further human intervention.
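A randomized planner of this kind can be sketched as a rapidly-exploring random tree (RRT) grown in configuration space, run once per segment to chain initial → grasp → goal. Everything here is an illustrative assumption rather than the project's planner: the 2-D configuration space, the disc-shaped obstacle, and the `rrt` helper are hypothetical stand-ins for the arm's joint space and its collision checker.

```python
# Illustrative sketch: a minimal RRT in a 2-D configuration space, run twice
# to chain the three specified configurations: initial -> grasp -> goal.
import random, math

random.seed(0)  # fixed seed so the sketch is reproducible

def rrt(start, goal, collision_free, step=0.1, iters=5000, goal_bias=0.1):
    """Grow a random tree from start; return a configuration path to goal."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < goal_bias else (random.random(), random.random())
        # Extend the nearest tree node a small step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        t = min(step / d, 1.0) if d > 0 else 1.0
        new = (near[0] + t * (sample[0] - near[0]), near[1] + t * (sample[1] - near[1]))
        if not collision_free(new):
            continue  # discard configurations in collision
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:  # close enough: read the path back out
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Hypothetical obstacle: a disc of configurations in collision.
free = lambda q: math.dist(q, (0.5, 0.5)) > 0.2
initial, grasp, goal = (0.1, 0.1), (0.9, 0.1), (0.9, 0.9)
motion = rrt(initial, grasp, free) + rrt(grasp, goal, free)[1:]
```

The tree only ever contains collision-free configurations, so the concatenated path from the initial configuration through the grasp to the goal is collision-free by construction, which is what lets the planner run with no human intervention beyond the three specified configurations.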

Unfortunately, animation generated solely by motion planners often looks “stiff,” as the movie clip shows, because motion planners typically operate on simplified models of characters that capture the functional, but not the aesthetic, aspects of movement. A fruitful approach, we believe, is to build a realistic motion model from captured data through machine learning and then synthesize novel motion through motion planning. We are working on techniques that combine model-driven motion planning with data-driven motion capture to create versatile and realistic animation.

Acknowledgements

This research has been supported in part by ARO MURI grant DAAH04-96-1-007 and a Microsoft Graduate Fellowship. We thank Seamless Solutions, Inc. for making available the character model used in the animation.