
Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models

Shanghai Artificial Intelligence Laboratory

👋 Join our Lark and Discord



🔥 The technical report is coming soon!

🔥 Update and News

  • [2024.09.14] 🔥 Inference code and checkpoint are released.

😲 Gallery

Installation

1. Create a conda environment and install PyTorch

Note: You may want to adjust the CUDA version according to your driver version.

conda create -n VchitectXL -y
conda activate VchitectXL
conda install python=3.11 pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y

2. Install dependencies

pip install -r requirements.txt
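
As an optional sanity check that the environment is set up correctly (the expected version follows from the install command above), you can run:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expected output along the lines of: 2.1.0 True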

Inference

First download the checkpoint.
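
One way to fetch the checkpoint is with huggingface-cli (assuming huggingface_hub is installed). The repository id below is a placeholder, not the actual location; substitute the real checkpoint repository:

# <org>/<checkpoint-repo> is a placeholder; replace it with the actual checkpoint repository
huggingface-cli download <org>/<checkpoint-repo> --local-dir ./VchitectXL-checkpoint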

save_dir=$1
ckpt_path=$2

python inference.py --test_file assets/test.txt --save_dir "${save_dir}" --ckpt_path "${ckpt_path}"
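
The snippet above reads the save directory and checkpoint path from positional arguments. If it is saved as a script (the file name run_inference.sh here is only an illustration), it could be invoked like this:

bash run_inference.sh ./results /path/to/VchitectXL-checkpoint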

The main arguments for inference in inference.py (an example command making these defaults explicit is shown after the list):

  • num_inference_steps: Denoising steps, default is 100
  • guidance_scale: CFG scale to use, default is 7.5
  • width: The width of the output video, default is 768
  • height: The height of the output video, default is 432
  • frames: The number of frames, default is 40
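
Assuming these arguments are exposed as command-line flags with the same names (an assumption; they may instead be set inside inference.py), a run that states the defaults explicitly might look like:

python inference.py --test_file assets/test.txt --save_dir "${save_dir}" --ckpt_path "${ckpt_path}" \
    --num_inference_steps 100 --guidance_scale 7.5 --width 768 --height 432 --frames 40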

The results below were generated using the following example prompts:

  • A snowy forest landscape with a dirt road running through it. The road is flanked by trees covered in snow, and the ground is also covered in snow. The sun is shining, creating a bright and serene atmosphere. The road appears to be empty, and there are no people or animals visible in the video.

  • The video opens with a breathtaking view of a starry sky and vibrant auroras. The camera pans to reveal a glowing black hole surrounded by swirling, luminescent gas and dust. Below, an enchanted forest of bioluminescent trees glows softly. The scene is a mesmerizing blend of cosmic wonder and magical landscape.

The base T2V model supports generating videos at resolutions up to 720x480 and 8 fps. VEnhancer is then used to upscale the resolution to 2K and interpolate the frame rate to 24 fps.

🔑 License

This code is licensed under Apache-2.0. The framework is fully open for academic research, and free commercial use is also permitted.

Disclaimer

We disclaim responsibility for user-generated content. The model was not trained to realistically represent people or events, so using it to generate such content is beyond its capabilities. Generating pornographic, violent, or gory content is prohibited, as is generating content that is demeaning or harmful to people or to their environment, culture, or religion. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behavior. Use the generative model responsibly, adhering to ethical and legal standards.
