I have a 2-GPU system: a 3060 (12 GB VRAM) and a 3070 Ti (8 GB). I've read that torch supports parallelism that can split large models across both GPUs; it'd be great to have something like that here to run the big models for high accuracy on multiple GPUs.
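For reference, a minimal sketch of what naive model parallelism looks like in plain torch: each stage of the model lives on a different GPU, and activations are moved between devices in `forward`. The model and device assignments here are hypothetical (assuming the 3060 is `cuda:0` and the 3070 Ti is `cuda:1`), not anything from this project; the sketch falls back to CPU when two GPUs aren't available.

```python
import torch
import torch.nn as nn

# Hypothetical example: put each stage on its own device and move
# activations between them. Assumes cuda:0 = 3060, cuda:1 = 3070 Ti;
# falls back to CPU if fewer than two GPUs are present.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 2 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(64, 128).to(dev0)  # first half on GPU 0
        self.stage2 = nn.Linear(128, 10).to(dev1)  # second half on GPU 1

    def forward(self, x):
        h = self.stage1(x.to(dev0))      # compute on device 0
        return self.stage2(h.to(dev1))   # move activations, compute on device 1

model = SplitModel()
out = model(torch.randn(4, 64))
print(tuple(out.shape))  # (4, 10)
```

Note this kind of split only helps with memory, not speed: the GPUs run one after the other, not in parallel.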
Hi. As far as I know, when you set up multiple GPUs of different types, several problems can occur (CUDA version mismatch, etc.).
For now, I'm listing possible solutions to refer to later:
Thank you! It'd be great if I could pass a parameter to app.py to select which CUDA device on my system runs Whisper, something like --device cuda:0.
I think that would let me allocate the large model on my 3060 (12 GB).
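A minimal sketch of such a flag, assuming app.py uses argparse (the flag name and default are assumptions, not the project's actual CLI); torch spells device strings with a colon, e.g. `cuda:0` or `cuda:1`:

```python
import argparse

# Hypothetical sketch of a --device flag for app.py.
def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--device",
        default="cuda:0",
        help='torch device to run Whisper on, e.g. "cuda:0", "cuda:1", or "cpu"',
    )
    return parser

args = build_parser().parse_args(["--device", "cuda:1"])
print(args.device)  # cuda:1
# The chosen device could then be passed along when the model is loaded,
# e.g. something like: model = whisper.load_model("large", device=args.device)
```

As a workaround that needs no code change, launching with `CUDA_VISIBLE_DEVICES=0 python app.py` hides all but one GPU from torch, so the model lands on the 12 GB 3060.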