
Inference is not using GPU #108

Open
smartinezbragado opened this issue Feb 15, 2024 · 1 comment

Comments

@smartinezbragado

Hello,

I deployed the inference pipeline on a GPU provider. However, song generation takes too long (5 minutes for a 4-minute song), which is much longer than what I am reading in the threads. I found that the GPU is barely used during generation, which is probably the issue.

Do you know what I might be missing?

Thanks
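A quick first step when diagnosing this is to confirm whether the GPU is actually being exercised during generation. A minimal sketch, assuming an NVIDIA GPU with `nvidia-smi` available on the host (the sample string and function name are illustrative, not part of this project):

```python
# Hedged sketch: read GPU utilization percentages via `nvidia-smi` to check
# whether inference is actually hitting the GPU or silently running on CPU.
import re
import subprocess
from typing import List, Optional


def gpu_utilization(sample: Optional[str] = None) -> List[int]:
    """Return GPU utilization percentages, one entry per GPU.

    Pass `sample` text for offline parsing; with no argument, query the
    real driver on the deployment host.
    """
    if sample is None:
        # Query the driver directly (requires the NVIDIA driver installed).
        sample = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    # Each line looks like "87" (or "87 %" in the default table view).
    return [int(m) for m in re.findall(r"\d+", sample)]
```

Running this in a loop while a song generates should show sustained high utilization; values near zero during the slow phases point to CPU-bound preprocessing rather than slow GPU inference.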

@JackismyShephard

What I have noticed is that most of the time is spent preprocessing the song, i.e. vocal/instrumental separation, main-vocal/backing-vocal separation, and vocal denoising. Under the hood this is done with MDX-Net models. I tried performing the same separations manually in the UVR app and they are noticeably faster, so there may be room for improvement here. I am working on it myself (but progress is slow).
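One common cause of slow MDX-Net preprocessing is that the models run through ONNX Runtime, which silently falls back to CPU when the CUDA execution provider is unavailable (e.g. when the CPU-only `onnxruntime` package is installed instead of `onnxruntime-gpu`). A hedged sketch of a provider check, assuming the pipeline uses `onnxruntime` (the helper name is illustrative; on the real machine you would pass the result of `onnxruntime.get_available_providers()`):

```python
# Hedged sketch: choose the best available ONNX Runtime execution provider,
# preferring CUDA over CPU. A missing "CUDAExecutionProvider" in the
# available list usually means the CPU-only onnxruntime package is installed.
from typing import List


def pick_providers(available: List[str]) -> List[str]:
    """Return an ordered provider list for onnxruntime.InferenceSession."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    # Fall back to whatever is available if none of the preferred match.
    return chosen or list(available)
```

On the deployment host, if `"CUDAExecutionProvider"` never appears in `onnxruntime.get_available_providers()`, replacing `onnxruntime` with `onnxruntime-gpu` (matching the installed CUDA version) is the usual fix.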
