FD2Talk: Towards Generalized Talking Head Generation with Facial Decoupled Diffusion Model
Proceedings of the 32nd ACM International Conference on Multimedia, 2024
Talking head generation is a significant research topic that still faces numerous challenges. Previous works often adopt generative adversarial networks or regression models, which suffer from limited generation quality and the average facial shape problem. Although diffusion models show impressive generative ability, their application to talking head generation remains unsatisfactory: existing methods either use the diffusion model only to obtain an intermediate representation and then rely on a separate pre-trained renderer, or they overlook the decoupling of complex facial details such as expressions, head poses, and appearance textures. We therefore propose a Facial Decoupled Diffusion model for Talking head generation, called FD2Talk, which fully leverages the advantages of diffusion models and decouples the complex facial details in multiple stages. Specifically, we separate facial details into motion and appearance. In the first stage, we design a Diffusion Transformer to accurately predict motion coefficients from raw audio. These motions are highly decoupled from appearance, making them easier for the network to learn than high-dimensional RGB images. In the second stage, we encode the reference image to capture appearance textures. The predicted facial and head motions and the encoded appearance then serve as conditions for a Diffusion UNet, guiding frame generation. Benefiting from decoupling facial details and fully leveraging diffusion models, extensive experiments show that our approach improves image quality and generates more accurate and diverse results than previous state-of-the-art methods.
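To make the two-stage pipeline described above concrete, the following is a minimal sketch of the inference flow: a transformer-based denoiser maps audio features to motion coefficients, and a UNet-style denoiser generates image latents conditioned on those motions plus an appearance code from the reference image. All module names, interfaces, dimensions, and the single-step "denoising" calls are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the two-stage FD2Talk-style inference flow.
# Shapes, hyperparameters, and conditioning mechanism are assumed for illustration.
import torch
import torch.nn as nn


class MotionDiffusionTransformer(nn.Module):
    """Stage 1 (assumed interface): denoise motion coefficients conditioned on audio features."""

    def __init__(self, motion_dim: int = 70, audio_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.motion_proj = nn.Linear(motion_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, motion_dim)

    def forward(self, noisy_motion: torch.Tensor, audio_feat: torch.Tensor) -> torch.Tensor:
        # Fuse noisy motion tokens with audio conditioning, then predict denoised motion.
        x = self.motion_proj(noisy_motion) + self.audio_proj(audio_feat)
        return self.head(self.backbone(x))


class FrameDiffusionUNet(nn.Module):
    """Stage 2 (assumed interface): denoise image latents conditioned on motion + appearance."""

    def __init__(self, latent_ch: int = 4, cond_dim: int = 70 + 256):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, latent_ch)
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Inject conditioning as a per-channel bias (a simple stand-in for cross-attention).
        bias = self.cond_proj(cond)[:, :, None, None]
        return self.net(noisy_latent + bias)


if __name__ == "__main__":
    B, T = 1, 25                                # batch size, number of frames (illustrative)
    audio_feat = torch.randn(B, T, 512)         # pre-extracted audio features (assumed)
    appearance = torch.randn(B, 256)            # appearance code from the reference image (assumed)

    # Stage 1: predict motion coefficients from audio (toy one-step denoising call).
    stage1 = MotionDiffusionTransformer()
    motion = stage1(torch.randn(B, T, 70), audio_feat)         # (B, T, 70)

    # Stage 2: generate a frame latent conditioned on its motion and the shared appearance.
    stage2 = FrameDiffusionUNet()
    cond = torch.cat([motion[:, 0], appearance], dim=-1)       # condition for frame 0
    frame_latent = stage2(torch.randn(B, 4, 32, 32), cond)     # (B, 4, 32, 32)
    print(frame_latent.shape)
```

The key design point this sketch mirrors is that the low-dimensional motion coefficients, rather than raw RGB frames, carry the audio-driven dynamics, while appearance enters only as a conditioning signal in the second stage.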