Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion (IJCAI 2021)

Requirements

  • Python 3.6, PyTorch >= 1.6, and ffmpeg (a quick environment check is sketched after this list)

  • Other dependencies are listed in requirements.txt
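
As a quick sanity check before running anything, the snippet below verifies the PyTorch version and that ffmpeg is on the PATH. This is a minimal sketch of the requirements above, not a script shipped with this repository.

```python
# Minimal environment check for the requirements above
# (a sketch, not part of this repository).
import shutil
import torch

# Parse "1.13.1+cu117"-style version strings down to (major, minor).
major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (1, 6), "PyTorch >= 1.6 is required"
assert shutil.which("ffmpeg"), "ffmpeg must be installed and on the PATH"
print("Environment OK: PyTorch", torch.__version__)
```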

Pretrained Checkpoint

Please download the pretrained checkpoint from Google Drive and place it in the /checkpoints folder.
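
To confirm the download completed correctly, you can try deserializing the file with PyTorch. This is a minimal sketch; the file name audio2head.pth.tar is an assumed example, not something specified here, so substitute the actual name of the downloaded file.

```python
# Sanity-check that the downloaded checkpoint deserializes
# (a sketch; the file name "audio2head.pth.tar" is an assumed example).
import torch

ckpt = torch.load("checkpoints/audio2head.pth.tar", map_location="cpu")
keys = list(ckpt.keys()) if isinstance(ckpt, dict) else []
print("Loaded checkpoint with top-level keys:", keys[:5])
```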

Generate Demo Results

python inference.py --audio_path xxx.wav --img_path xxx.jpg

Note that the input image must have equal height and width (i.e., be square), and the face should be cropped similarly to the examples in /demo/img.
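
If your portrait is not already square, a center crop like the sketch below produces a suitable input. The 256x256 target size is an assumption borrowed from typical First Order Motion Model-style pipelines, not a value documented here; compare against /demo/img before relying on it.

```python
# Center-crop a portrait to a square and resize it for inference
# (a sketch; the 256x256 target size is an assumption, check /demo/img).
from PIL import Image

def make_square(src_path: str, dst_path: str, size: int = 256) -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img.crop((left, top, left + side, top + side)).resize((size, size)).save(dst_path)

make_square("portrait.jpg", "input_square.jpg")  # hypothetical file names
```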

License and Citation

@InProceedings{wang2021audio2head,
  author    = {Wang, Suzhen and Li, Lincheng and Ding, Yu and Fan, Changjie and Yu, Xin},
  title     = {Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion},
  booktitle = {Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI-21)},
  year      = {2021},
}

Acknowledgement

This codebase builds on First Order Motion Model; thanks to its authors for their contribution.
