HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field
Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo, Yebin Liu
Abstract: The problem of modeling an animatable 3D human head avatar under lightweight setups is of significant importance but has not been well solved. Existing 3D representations either perform well in the realism of portrait image synthesis or in the accuracy of expression control, but not both. To address the problem, we introduce a novel hybrid explicit-implicit 3D representation, the Facial Model Conditioned Neural Radiance Field, which integrates the expressiveness of NeRF and the prior information from the parametric template. At the core of our representation, a synthetic-renderings-based condition method is proposed to fuse the prior information from the parametric model into the implicit field without constraining its topological flexibility. In addition, based on the hybrid representation, we overcome the inconsistent-shape issue present in existing methods and improve animation stability. Moreover, by adopting an overall GAN-based architecture with an image-to-image translation network, we achieve high-resolution, realistic, and view-consistent synthesis of dynamic head appearance. Experiments demonstrate that our method achieves state-of-the-art performance for 3D head avatar animation compared with previous methods.
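To make the conditioning idea concrete, below is a minimal, illustrative PyTorch sketch of a neural radiance field conditioned on features extracted from a synthetic rendering of a fitted parametric head model. It is not the authors' implementation: the single rendering, the encoder, the orthographic feature look-up, and all network sizes are simplifying assumptions.

```python
# Minimal, illustrative sketch of a "facial model conditioned" NeRF.
# NOT the HAvatar implementation; layer sizes, the single synthetic rendering,
# and the orthographic feature look-up are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RenderingEncoder(nn.Module):
    """Encodes a synthetic rendering of the parametric head model into a 2D feature map."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_ch, 3, padding=1),
        )

    def forward(self, rendering):           # (B, 3, H, W)
        return self.net(rendering)          # (B, feat_ch, H/4, W/4)


class ConditionedNeRF(nn.Module):
    """Predicts density and color for 3D points, conditioned on sampled rendering features."""
    def __init__(self, feat_ch=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_ch, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 4),            # (sigma, r, g, b)
        )

    def forward(self, pts, feat_map):
        # pts: (B, N, 3) query points in a normalized head space [-1, 1]^3.
        # Assumption: an orthographic projection onto the (x, y) image plane is
        # used to look up a conditioning feature for each 3D point.
        uv = pts[..., :2].unsqueeze(2)                        # (B, N, 1, 2)
        feats = F.grid_sample(feat_map, uv, align_corners=True)
        feats = feats.squeeze(-1).permute(0, 2, 1)            # (B, N, feat_ch)
        out = self.mlp(torch.cat([pts, feats], dim=-1))
        sigma = F.relu(out[..., :1])
        rgb = torch.sigmoid(out[..., 1:])
        return sigma, rgb


if __name__ == "__main__":
    rendering = torch.rand(1, 3, 256, 256)      # synthetic rendering of the fitted face model
    pts = torch.rand(1, 1024, 3) * 2 - 1        # query points sampled along camera rays
    sigma, rgb = ConditionedNeRF()(pts, RenderingEncoder()(rendering))
    print(sigma.shape, rgb.shape)               # torch.Size([1, 1024, 1]) torch.Size([1, 1024, 3])
```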
cd model/op
python setup.py install
We provide a processed demo dataset. Please download and unzip it into data/demo.
We also provide preprocessing code in data_preprocessing. If you want to generate a dataset from your own video, please download the FaceVerse file (data_preprocessing/metamodel/v3/faceverse_v3_1.npy) and the RVM pretrained model (data_preprocessing/BgMatting_models/rvm_resnet50_fp32.torchscript).
cd data_preprocessing
python fit_video.py --video_path path/to/your/video --base_dir data/avatar
# Stage one
python train_avatar.py --datadir data/demo --logdir logs/demo
After convergence (we train for about 20,000 steps in this case; you can monitor the loss with TensorBoard), continue with the second training stage. To accelerate convergence, we provide a pretrained image translation module. Please download it and put it into pretrained_models.
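For example, assuming the training script writes TensorBoard event files under the log directory, the stage-one losses can be monitored with:

tensorboard --logdir logs/demo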
# Stage two
python train_avatarHD.py --datadir data/demo --logdir logs/demo/HD --ckpt logs/demo/checkpoint200000.ckpt
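Conceptually, this HD stage refines the neural field's coarse rendering with an image-to-image translation network trained in a GAN setup. The snippet below is a rough, assumed sketch of such a refinement generator, not the architecture actually used by train_avatarHD.py; channel counts and layer depths are placeholders.

```python
# Rough, assumed sketch of an image-to-image refinement generator: a coarse
# rendering from the neural field is translated into an image at twice the
# input resolution. Not the actual HAvatar architecture.
import torch
import torch.nn as nn


class RefinementGenerator(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        self.down = nn.Sequential(                    # encode the coarse rendering
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        )
        self.up = nn.Sequential(                      # decode to 2x the input resolution
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, coarse_render):                 # (B, 3, H, W), values in [-1, 1]
        return self.up(self.down(coarse_render))      # (B, 3, 2H, 2W)


if __name__ == "__main__":
    coarse = torch.rand(1, 3, 128, 128) * 2 - 1       # stand-in for a coarse NeRF rendering
    print(RefinementGenerator()(coarse).shape)        # torch.Size([1, 3, 256, 256])
```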
We provide a pretrained monocular head avatar checkpoint; please download it and put it into logs/demo/HD.
python avatarHD_reenactment.py --torch_test --savedir results/demo/self-recon --ckpt logs/demo/HD/latest.pt --split data/demo/sv_v31_all.json
# preprocess dataset
cd data_preprocessing
python fit_video.py --video_path path/to/your/actor_video --base_dir data/actor --avatar_tracking_dir data/demo
python avatarHD_reenactment.py --savedir results/demo/cross-reenact --ckpt logs/demo/HD/latest.pt --split data/actor/drive_demo.json
@article{zhao2023havatar,
author = {Zhao, Xiaochen and Wang, Lizhen and Sun, Jingxiang and Zhang, Hongwen and Suo, Jinli and Liu, Yebin},
title = {HAvatar: High-Fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field},
year = {2023},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
issn = {0730-0301},
url = {https://doi.org/10.1145/3626316},
doi = {10.1145/3626316},
note = {Just Accepted},
journal = {ACM Trans. Graph.},
month = {oct},
keywords = {parametric facial model, image-to-image translation, image synthesis, head avatar, neural radiance field}
}
Part of the code is borrowed from Nerface and StyleAvatar.