Learning skeletal articulations with neural blend shapes
ACM Transactions on Graphics (TOG), 2021
Animating a newly designed character using motion capture (mocap) data is a long-standing problem in computer animation. A key consideration is the skeletal structure, which should correspond to the available mocap data, and the shape deformation in the joint regions, which often requires a tailored, pose-specific refinement. In this work, we develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure that produces high-quality pose-dependent deformations. Our framework learns to rig and skin characters with the same articulation structure (e.g., bipeds or quadrupeds), and builds the desired skeleton hierarchy into the network architecture. Furthermore, we propose neural blend shapes: a set of corrective pose-dependent shapes which improve the deformation quality in the joint regions, addressing the notorious artifacts that result from standard rigging and skinning. Our system estimates neural blend shapes for input meshes with arbitrary connectivity, as well as weighting coefficients which are conditioned on the input joint rotations. Unlike recent deep learning techniques that supervise the network with ground-truth rigging and skinning parameters, our approach does not assume that the training data follows a specific underlying deformation model. Instead, during training, the network observes deformed shapes and learns to infer the corresponding rig, skin, and blend shapes using indirect supervision. During inference, we demonstrate that our network generalizes to unseen characters with arbitrary mesh connectivity, including unrigged characters built by 3D artists. Conforming to standard skeletal animation models enables direct plug-and-play in standard animation software, as well as game engines.
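To make the deformation model concrete, the sketch below shows how pose-dependent corrective blend shapes combine with standard linear blend skinning (LBS) enveloping: the predicted blend-shape displacements, weighted by coefficients conditioned on the joint rotations, are added to the rest shape before it is skinned by the joint transformations. This is a minimal illustrative sketch, not the authors' implementation; all function names, argument shapes, and the NumPy formulation are assumptions made for clarity.

```python
# Minimal sketch of LBS enveloping with corrective, pose-dependent blend shapes.
# Shapes and names are illustrative assumptions, not the paper's actual code.
import numpy as np

def deform(rest_verts, skin_weights, bone_transforms, blend_shapes, coeffs):
    """Deform a character mesh for one pose.

    rest_verts      : (V, 3)    rest-pose vertex positions
    skin_weights    : (V, J)    per-vertex skinning weights (rows sum to 1)
    bone_transforms : (J, 4, 4) world-space joint transformations for the pose
    blend_shapes    : (K, V, 3) corrective blend-shape displacements
    coeffs          : (K,)      blend-shape coefficients predicted from joint rotations
    """
    # 1. Add the pose-dependent corrective offsets to the rest shape.
    corrected = rest_verts + np.einsum("k,kvc->vc", coeffs, blend_shapes)

    # 2. Standard linear blend skinning of the corrected shape.
    homo = np.concatenate([corrected, np.ones((corrected.shape[0], 1))], axis=1)  # (V, 4)
    per_joint = np.einsum("jab,vb->vja", bone_transforms, homo)                   # (V, J, 4)
    skinned = np.einsum("vj,vja->va", skin_weights, per_joint)                    # (V, 4)
    return skinned[:, :3]
```

In the paper's setting, the skinning weights, blend shapes, and coefficients would be produced by the learned networks rather than supplied by hand; the point of the sketch is only that the output conforms to a standard blend-shape-plus-LBS animation model, which is what allows direct use in off-the-shelf animation software and game engines.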