Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation

Jiahao Cui1*  Hui Li1*  Yao Yao3  Hao Zhu3  Hanlin Shang1  Kaihui Cheng1  Hang Zhou2
Siyu Zhu1✉️  Jingdong Wang2
1Fudan University  2Baidu Inc  3Nanjing University


📸 Showcase

  • Taylor Swift Speech @ NYU (4K, 23 minutes)
  • Johan Rockström Speech @ TED (4K, 18 minutes)
  • Churchill's Iron Curtain Speech (4K, 4 minutes)
  • An LLM Course from Stanford (4K, up to 1 hour)

Visit our project page to view more cases.

📰 News

  • 2024/10/16: ✨✨✨ Source code and pretrained weights released.
  • 2024/10/10: 🎉🎉🎉 Paper submitted to arXiv.

📅️ Roadmap

Status  Milestone                           ETA
✅      Paper submitted to arXiv            2024-10-10
✅      Source code released on GitHub      2024-10-16
🚀      Accelerated inference performance   TBD

🔧️ Framework

(framework overview figure)

⚙️ Installation

  • System requirements: Ubuntu 20.04 / Ubuntu 22.04, CUDA 11.8
  • Tested GPUs: A100

Download the code:

  git clone https://github.com/fudan-generative-vision/hallo2
  cd hallo2

Create conda environment:

  conda create -n hallo python=3.10
  conda activate hallo

Install packages with pip:

  pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu118
  pip install -r requirements.txt

In addition, ffmpeg is required:

  apt-get install ffmpeg
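
After installation, you can optionally verify that the CUDA build of PyTorch is working; this quick sanity check is not part of the official setup:

  python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

On a correctly configured CUDA 11.8 machine this should print the installed torch version and True.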

📥 Download Pretrained Models

You can easily get all pretrained models required by inference from our HuggingFace repo.

Use huggingface-cli to download the models:

cd $ProjectRootDir
pip install "huggingface_hub[cli]"
huggingface-cli download fudan-generative-ai/hallo --local-dir ./pretrained_models

Or you can download them separately from their source repositories.

Finally, these pretrained models should be organized as follows:

./pretrained_models/
|-- audio_separator/
|   |-- download_checks.json
|   |-- mdx_model_data.json
|   |-- vr_model_data.json
|   `-- Kim_Vocal_2.onnx
|-- CodeFormer/
|   |-- codeformer.pth
|   `-- vqgan_code1024.pth
|-- face_analysis/
|   `-- models/
|       |-- face_landmarker_v2_with_blendshapes.task  # face landmarker model from mediapipe
|       |-- 1k3d68.onnx
|       |-- 2d106det.onnx
|       |-- genderage.onnx
|       |-- glintr100.onnx
|       `-- scrfd_10g_bnkps.onnx
|-- facelib
|   |-- detection_mobilenet0.25_Final.pth
|   |-- detection_Resnet50_Final.pth
|   |-- parsing_parsenet.pth
|   |-- yolov5l-face.pth
|   `-- yolov5n-face.pth
|-- hallo2
|   |-- net_g.pth
|   `-- net.pth
|-- motion_module/
|   `-- mm_sd_v15_v2.ckpt
|-- realesrgan
|   `-- RealESRGAN_x2plus.pth
|-- sd-vae-ft-mse/
|   |-- config.json
|   `-- diffusion_pytorch_model.safetensors
|-- stable-diffusion-v1-5/
|   `-- unet/
|       |-- config.json
|       `-- diffusion_pytorch_model.safetensors
`-- wav2vec/
    `-- wav2vec2-base-960h/
        |-- config.json
        |-- feature_extractor_config.json
        |-- model.safetensors
        |-- preprocessor_config.json
        |-- special_tokens_map.json
        |-- tokenizer_config.json
        `-- vocab.json

🛠️ Prepare Inference Data

Hallo2 has a few simple requirements for input data:

For the source image:

  1. It should be cropped to a square.
  2. The face should be the main focus, making up 50%-70% of the image.
  3. The face should be facing forward, with a rotation angle of less than 30° (no side profiles).

For the driving audio:

  1. It must be in WAV format.
  2. It must be in English, since our training datasets contain only English speech.
  3. Ensure the vocals are clear; background music is acceptable.

We have provided some samples for your reference.
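
If your driving audio is in another format, you can convert it to WAV with ffmpeg. A generic example follows; the 16 kHz mono setting matches what wav2vec2-base-960h expects, but treat the exact parameters as an assumption:

  ffmpeg -i input.mp3 -ac 1 -ar 16000 driving_audio.wav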

🎮 Run Inference

Long-Duration animation

Simply run scripts/inference_long.py after setting source_image, driving_audio, and save_path in the config file:

python scripts/inference_long.py --config ./configs/inference/long.yaml

Animation results will be saved at save_path. You can find more inference examples in the examples folder.
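
For reference, the relevant entries in the config might look like this (the key names come from above; the paths are placeholders):

source_image: ./examples/reference_images/1.jpg
driving_audio: ./examples/driving_audios/1.wav
save_path: ./output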

For more options:

usage: inference_long.py [-h] [-c CONFIG] [--source_image SOURCE_IMAGE] [--driving_audio DRIVING_AUDIO] [--pose_weight POSE_WEIGHT]
                    [--face_weight FACE_WEIGHT] [--lip_weight LIP_WEIGHT] [--face_expand_ratio FACE_EXPAND_RATIO]

options:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
  --source_image SOURCE_IMAGE
                        source image
  --driving_audio DRIVING_AUDIO
                        driving audio
  --pose_weight POSE_WEIGHT
                        weight of pose
  --face_weight FACE_WEIGHT
                        weight of face
  --lip_weight LIP_WEIGHT
                        weight of lip
  --face_expand_ratio FACE_EXPAND_RATIO
                        face region

High-Resolution animation

Simply run scripts/video_sr.py, passing the input video and the output directory:

python scripts/video_sr.py --input_path [input_video] --output_path [output_dir] --bg_upsampler realesrgan --face_upsample -w 1 -s 4

Animation results will be saved at output_dir.
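
The two features chain naturally: generate the long-duration animation first, then upscale the result. A sketch, where the intermediate file name is a placeholder:

python scripts/inference_long.py --config ./configs/inference/long.yaml
python scripts/video_sr.py --input_path ./output/animation.mp4 --output_path ./output_4k --bg_upsampler realesrgan --face_upsample -w 1 -s 4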

For more options:

usage: video_sr.py [-h] [-i INPUT_PATH] [-o OUTPUT_PATH] [-w FIDELITY_WEIGHT] [-s UPSCALE] [--has_aligned] [--only_center_face] [--draw_box]
                   [--detection_model DETECTION_MODEL] [--bg_upsampler BG_UPSAMPLER] [--face_upsample] [--bg_tile BG_TILE] [--suffix SUFFIX]

options:
  -h, --help            show this help message and exit
  -i INPUT_PATH, --input_path INPUT_PATH
                        Input video
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
                        Output folder.
  -w FIDELITY_WEIGHT, --fidelity_weight FIDELITY_WEIGHT
                        Balance the quality and fidelity. Default: 0.5
  -s UPSCALE, --upscale UPSCALE
                        The final upsampling scale of the image. Default: 2
  --has_aligned         Input are cropped and aligned faces. Default: False
  --only_center_face    Only restore the center face. Default: False
  --draw_box            Draw the bounding box for the detected faces. Default: False
  --detection_model DETECTION_MODEL
                        Face detector. Optional: retinaface_resnet50, retinaface_mobile0.25, YOLOv5l, YOLOv5n. Default: retinaface_resnet50
  --bg_upsampler BG_UPSAMPLER
                        Background upsampler. Optional: realesrgan
  --face_upsample       Face upsampler after enhancement. Default: False
  --bg_tile BG_TILE     Tile size for background sampler. Default: 400
  --suffix SUFFIX       Suffix of the restored faces. Default: None

NOTICE: The High-Resolution animation feature is a modified version of CodeFormer. When using or redistributing this component, please comply with the terms of the S-Lab License 1.0.

Training

Long-Duration animation

Prepare data for training

The training data consists of talking-face videos similar to the source images used for inference, and must also meet the following requirements:

  1. They should be cropped to a square.
  2. The face should be the main focus, making up 50%-70% of the image.
  3. The face should be facing forward, with a rotation angle of less than 30° (no side profiles).

Organize your raw videos into the following directory structure:

dataset_name/
|-- videos/
|   |-- 0001.mp4
|   |-- 0002.mp4
|   |-- 0003.mp4
|   `-- 0004.mp4

You can use any dataset_name, but ensure the videos directory is named as shown above.

Next, process the videos with the following commands:

python -m scripts.data_preprocess --input_dir dataset_name/videos --step 1
python -m scripts.data_preprocess --input_dir dataset_name/videos --step 2

Note: Execute steps 1 and 2 sequentially as they perform different tasks. Step 1 converts videos into frames, extracts audio from each video, and generates the necessary masks. Step 2 generates face embeddings using InsightFace and audio embeddings using Wav2Vec, and requires a GPU. For parallel processing, use the -p and -r arguments. The -p argument specifies the total number of instances to launch, dividing the data into p parts. The -r argument specifies which part the current process should handle. You need to manually launch multiple instances with different values for -r.
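
For example, a two-way split of step 2 could be launched in two separate terminals like this (assuming -r is zero-indexed, which is our assumption):

python -m scripts.data_preprocess --input_dir dataset_name/videos --step 2 -p 2 -r 0
python -m scripts.data_preprocess --input_dir dataset_name/videos --step 2 -p 2 -r 1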

Generate the metadata JSON files with the following commands:

python scripts/extract_meta_info_stage1.py -r path/to/dataset -n dataset_name
python scripts/extract_meta_info_stage2.py -r path/to/dataset -n dataset_name

Replace path/to/dataset with the path to the parent directory of videos, such as dataset_name in the example above. This will generate dataset_name_stage1.json and dataset_name_stage2.json in the ./data directory.

Training

Update the data meta path settings in the configuration YAML files, configs/train/stage1.yaml and configs/train/stage2_long.yaml:

#stage1.yaml
data:
  meta_paths:
    - ./data/dataset_name_stage1.json

#stage2_long.yaml
data:
  meta_paths:
    - ./data/dataset_name_stage2.json

Start training with the following command:

accelerate launch -m \
  --config_file accelerate_config.yaml \
  --machine_rank 0 \
  --main_process_ip 0.0.0.0 \
  --main_process_port 20055 \
  --num_machines 1 \
  --num_processes 8 \
  scripts.train_stage1 --config ./configs/train/stage1.yaml
Accelerate Usage Explanation

The accelerate launch command is used to start the training process with distributed settings.

accelerate launch [arguments] {training_script} --{training_script-argument-1} --{training_script-argument-2} ...

Arguments for Accelerate:

  • -m, --module: Interpret the launch script as a Python module.
  • --config_file: Configuration file for Hugging Face Accelerate.
  • --machine_rank: Rank of the current machine in a multi-node setup.
  • --main_process_ip: IP address of the master node.
  • --main_process_port: Port of the master node.
  • --num_machines: Total number of nodes participating in the training.
  • --num_processes: Total number of processes for training, matching the total number of GPUs across all machines.

Arguments for Training:

  • {training_script}: The training script, such as scripts.train_stage1 or scripts.train_stage2.
  • --{training_script-argument-1}: Arguments specific to the training script. Our training scripts accept one argument, --config, to specify the training configuration file.

For multi-node training, manually run the command on each node separately, setting a different --machine_rank for each.
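
For instance, a two-node run with 8 GPUs per node might look like this (the IP address is a placeholder; note that --num_processes counts GPUs across all machines):

# On node 0 (the master node):
accelerate launch -m \
  --config_file accelerate_config.yaml \
  --machine_rank 0 \
  --main_process_ip 192.168.1.10 \
  --main_process_port 20055 \
  --num_machines 2 \
  --num_processes 16 \
  scripts.train_stage1 --config ./configs/train/stage1.yaml

# On node 1, run the same command with --machine_rank 1.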

For more settings, refer to the Accelerate documentation.

High-Resolution animation

Prepare data for training

We use the VFHQ dataset for training; you can download it from its homepage. Then update dataroot_gt in ./configs/train/video_sr.yaml.
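
For example (the dataset path below is a placeholder):

#video_sr.yaml
dataroot_gt: ./data/VFHQ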

Training

Start training with the following command:

python -m torch.distributed.launch --nproc_per_node=8 --master_port=4322 \
basicsr/train.py -opt ./configs/train/video_sr.yaml \
--launcher pytorch

📝 Citation

If you find our work useful for your research, please consider citing the paper:

@misc{cui2024hallo2,
	title={Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation},
	author={Jiahao Cui and Hui Li and Yao Yao and Hao Zhu and Hanlin Shang and Kaihui Cheng and Hang Zhou and Siyu Zhu and Jingdong Wang},
	year={2024},
	eprint={2410.07718},
	archivePrefix={arXiv},
	primaryClass={cs.CV}
}

🌟 Opportunities Available

Multiple research positions are open at the Generative Vision Lab, Fudan University! These include:

  • Research assistant
  • Postdoctoral researcher
  • PhD candidate
  • Master's students

Interested individuals are encouraged to contact us at siyuzhu@fudan.edu.cn for further information.

⚠️ Social Risks and Mitigations

The development of portrait image animation technologies driven by audio inputs poses social risks, such as the ethical implications of creating realistic portraits that could be misused for deepfakes. To mitigate these risks, it is crucial to establish ethical guidelines and responsible use practices. Privacy and consent concerns also arise from using individuals' images and voices. Addressing these involves transparent data usage policies, informed consent, and safeguarding privacy rights. By addressing these risks and implementing mitigations, the research aims to ensure the responsible and ethical development of this technology.

🤗 Acknowledgements

We would like to thank the contributors to the magic-animate, AnimateDiff, ultimatevocalremovergui, AniPortrait, and Moore-AnimateAnyone repositories for their open research and exploration.

If we have missed any open-source projects or related articles, we will gladly update the acknowledgements of this work immediately.

👏 Community Contributors

Thank you to all the contributors who have helped to make this project better!