
Inference speed #9

Open
resurgo97 opened this issue Sep 15, 2024 · 2 comments

Comments

@resurgo97

Hi, I appreciate your work so much; it would be very useful for my personal research.

I tested your code on my 60-frame videos, and it took about 3-5 minutes to complete.
I guess this makes sense since DepthCrafter is based on a (possibly multi-step) denoising technique,
but I want to confirm whether this falls in the expected range of runtime.

Thanks!

@wbhu
Collaborator

wbhu commented Sep 16, 2024

What's the GPU type? We reported the inference speed on an A100 in the README; you may refer to those numbers.

@resurgo97
Author

Thanks for your reply!

I ran it on a single RTX 3090.
Throughput is around 0.3 FPS, about an order of magnitude slower than yours.
I suspect the slowdown is severe because of my GPU's limited VRAM, since the README mentions it requires at least 26 GB.
Still, I'd love to hear other people's experiences to confirm whether that's really the cause.
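For anyone wanting to reproduce a comparable FPS figure, here is a minimal sketch of how inference throughput can be timed. `run_inference` is a hypothetical stand-in for the actual DepthCrafter pipeline call on a clip; the warmup run is there so one-time setup costs don't skew the average:

```python
import time

def measure_fps(run_inference, num_frames, warmup=1, repeats=3):
    """Rough throughput estimate: frames processed per wall-clock second."""
    for _ in range(warmup):
        run_inference()  # discard timing of the first (warmup) run
    start = time.perf_counter()
    for _ in range(repeats):
        run_inference()
    elapsed = time.perf_counter() - start
    return num_frames * repeats / elapsed

# Example with a dummy workload standing in for the real pipeline:
fps = measure_fps(lambda: time.sleep(0.01), num_frames=60)
print(f"{fps:.1f} FPS")
```

On a real run you would pass a closure that invokes the depth-estimation pipeline on your 60-frame video and divide by the clip's frame count, as above.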

[result]
(screenshot of inference output)

[output from nvidia-smi]
(screenshot of GPU utilization)
