
[WIP][LTX Video2Video] start ltx video2video. #10283

Draft · sayakpaul wants to merge 4 commits into base: main

Conversation

sayakpaul (Member)

What does this PR do?

I am a bit unsure about how to deal with conditioning_mask, and whether we should add noise to initial_latents the way it's done here:

latents = self.scheduler.add_noise(init_latents, noise, timestep)
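
For context, the quoted line follows the usual img2img-style latent preparation. A minimal sketch of that pattern (the helper name and shapes here are illustrative assumptions, not this PR's code):

import torch

def prepare_init_latents(vae, scheduler, video, timestep, generator=None):
    # Encode the input frames into the VAE latent space.
    init_latents = vae.encode(video).latent_dist.sample(generator)
    init_latents = init_latents * vae.config.scaling_factor
    # Noising the clean latents up to the starting timestep is what ties
    # the denoised output back to the input video.
    noise = torch.randn_like(init_latents)
    return scheduler.add_noise(init_latents, noise, timestep)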

Code to test
import torch
from diffusers.pipelines.ltx.pipeline_ltx_video2video import LTXVideoToVideoPipeline
from diffusers.utils import export_to_video, load_video

pipe = LTXVideoToVideoPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

input_video = load_video(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hiker.mp4"
)
prompt = (
    "An astronaut stands triumphantly at the peak of a towering mountain. Panorama of rugged peaks and "
    "valleys. Very futuristic vibe and animated aesthetic. Highlights of purple and golden colors in "
    "the scene. The sky is looks like an animated/cartoonish dream of galaxies, nebulae, stars, planets, "
    "moons, but the remainder of the scene is mostly realistic."
)

video = pipe(
    video=input_video, prompt=prompt, guidance_scale=6, num_inference_steps=50
).frames[0]
export_to_video(video, "output_vid2vid.mp4", fps=24)

Input video:

hiker.mp4

Output video:

output_vid2vid.mp4

@a-r-r-o-w I am going to mention you in a couple of places where I am unsure about the implementation details. LMK.

latents = latents * latents_std / scaling_factor + latents_mean
return latents

def prepare_latents(
sayakpaul (Member Author):
@a-r-r-o-w please take a look at this implementation.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

a-r-r-o-w (Member) left a comment:
Hope this answers some of your questions! Happy to help with anything else 🤗

self.transformer_temporal_patch_size,
)

noise_pred = noise_pred[:, :, 1:]
a-r-r-o-w (Member):

This was only needed for image-to-video because we don't want to denoise the first frame (it is the actual encoded image latent itself).
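
In other words, the conditioning frame is carried through the denoising loop unchanged while the remaining frames are denoised. A simplified sketch of the idea (assuming an unpacked [batch, channels, frames, height, width] layout; the actual pipeline packs/unpacks latents around this):

import torch

def step_keep_first_frame(scheduler, noise_pred, t, latents):
    # Frame 0 holds the clean encoded conditioning image, so it is never
    # denoised: drop its prediction and step only over the other frames.
    noise_pred = noise_pred[:, :, 1:]
    denoised = scheduler.step(noise_pred, t, latents[:, :, 1:]).prev_sample
    # Re-attach the untouched conditioning frame.
    return torch.cat([latents[:, :, :1], denoised], dim=2)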


@torch.no_grad()
@replace_example_docstring(EXAMPLE_DOC_STRING)
def __call__(
a-r-r-o-w (Member):

I don't see a strength parameter here. Without strength, it is not possible to control how much of the original video carries over into the output: low strength -> fewer denoising steps -> output more similar to the input; high strength -> more denoising steps -> less similar. This is the naive approach we use for vid2vid, but with techniques like RF-Inversion and FlowEdit (which we'll add directly in modular diffusers instead of pipelines), the quality and possibilities are endless!
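
For reference, existing img2img pipelines in diffusers usually wire strength into the schedule roughly like this (a sketch of that common pattern, not code from this PR):

def get_timesteps(scheduler, num_inference_steps, strength):
    # Keep only the final `strength` fraction of the schedule, so denoising
    # starts partway in: strength=1.0 runs every step and fully re-generates,
    # while e.g. strength=0.3 runs ~30% of the steps and stays close to the input.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    timesteps = scheduler.timesteps[t_start * scheduler.order :]
    return timesteps, num_inference_steps - t_start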

sayakpaul (Member Author):

@a-r-r-o-w thanks for your comments. Didn't know about scale_noise() :D
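
(For reference: flow-matching schedulers such as FlowMatchEulerDiscreteScheduler expose scale_noise() instead of add_noise(). A minimal sketch of the call, assuming init_latents and the starting timestep have already been prepared:)

import torch

def noise_to_timestep(scheduler, init_latents, timestep):
    # scale_noise() is the flow-matching analogue of add_noise(); it
    # interpolates sigma * noise + (1 - sigma) * sample.
    noise = torch.randn_like(init_latents)
    return scheduler.scale_noise(init_latents, timestep, noise)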

I have applied the changes you proposed and I am running a sweep over params. Results: https://wandb.ai/sayakpaul/ltx_video2video/runs/pc4lfivy

Sweep
import torch
from diffusers.pipelines.ltx.pipeline_ltx_video2video import LTXVideoToVideoPipeline
from diffusers.utils import export_to_video, load_video
import wandb

pipe = LTXVideoToVideoPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16).to("cuda")

input_video = load_video(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hiker.mp4"
)
prompt = (
    "An astronaut stands triumphantly at the peak of a towering mountain. Panorama of rugged peaks and "
    "valleys. Very futuristic vibe and animated aesthetic. Highlights of purple and golden colors in "
    "the scene. The sky is looks like an animated/cartoonish dream of galaxies, nebulae, stars, planets, "
    "moons, but the remainder of the scene is mostly realistic."
)
wandb.init(project="ltx_video2video")

filenames = []
for s in [1.0, 0.8, 0.7, 0.9]:
    for steps in [50, 60, 70]:
        for cfg in [5, 6, 7, 8]:
            video_name = f"strength@{s}_steps@{steps}_cfg@{cfg}.mp4"
            video = pipe(
                video=input_video, prompt=prompt, guidance_scale=cfg, num_inference_steps=steps, strength=s
            ).frames[0]
            export_to_video(video, video_name, fps=24)
            wandb.log(
                {"videos": wandb.Video(video_name, caption=video_name.replace(".mp4", ""), fps=24)}
            )

LMK if you have any concerns about the implementation.

sayakpaul requested a review from a-r-r-o-w on December 19, 2024 at 10:04.