Slower Training Time with Dual GPUs Compared to Single GPU (please help) #1231
TickToTock asked this question in Q&A (unanswered)
I am experiencing slower training times when using two 2080 Ti GPUs compared to a single 2080 Ti GPU for the same Sovits dataset. Here are the details:
Setup: Training on a single 2080 Ti GPU vs. dual 2080 Ti GPUs.
Batch Size and Training Parameters: Identical for both setups.
Observation: With dual 2080 Ti GPUs, CPU RAM usage doubles and each GPU shows roughly the same memory usage as in the single-GPU run, yet training speed has decreased.
Single GPU: Approximately 290 seconds per epoch.
Dual GPUs: Approximately 373 seconds per epoch, despite roughly double the resource and power consumption (see the rough throughput comparison below).
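
Put differently, in terms of throughput, and assuming each epoch covers the whole dataset split across both GPUs (a DistributedSampler-style setup), the dual-GPU run is slower in absolute terms, not just failing to scale linearly. The dataset size `N` below is a placeholder used only to make the ratio concrete:

```python
# Rough throughput comparison. N is a hypothetical dataset size; only the
# ratio matters. Under data parallelism that splits the dataset per epoch,
# a lower epoch time should mean higher throughput.
N = 10_000

single_gpu = N / 290   # samples per second on one 2080 Ti
dual_gpu = N / 373     # samples per second on two 2080 Tis

print(f"dual / single throughput: {dual_gpu / single_gpu:.2f}x")  # ~0.78x
```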
Questions:
What are the main advantages of multi-GPU training?
Are there any potential issues with my training parameters or setup that might be causing the slower performance?
I would appreciate any help or insights on this issue. Thank you!
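
If it helps, here is a minimal standalone DDP timing sketch (a dummy model, not so-vits-svc code) that should show whether gradient synchronization between the two cards alone already costs this much on my setup. The script name `ddp_bench.py` is just whatever the file is saved as:

```python
# Minimal DDP micro-benchmark (a sketch, not so-vits-svc code) to check
# whether the slowdown comes from gradient synchronization rather than
# the model or the data pipeline. Run once as a single process:
#   python ddp_bench.py
# and once with two processes:
#   torchrun --nproc_per_node=2 ddp_bench.py
import os
import time

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK in the environment; plain `python` does not.
    distributed = "RANK" in os.environ
    if distributed:
        dist.init_process_group("nccl")
        rank = dist.get_rank()
    else:
        rank = 0
    torch.cuda.set_device(rank)
    device = torch.device("cuda", rank)

    # A dummy model large enough that all-reduce traffic is noticeable.
    model = nn.Sequential(
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 4096),
    ).to(device)
    if distributed:
        model = DDP(model, device_ids=[rank])

    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    x = torch.randn(32, 4096, device=device)
    y = torch.randn(32, 4096, device=device)

    # Warm-up steps so CUDA/NCCL initialization is not timed.
    for _ in range(10):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    torch.cuda.synchronize()

    steps = 100
    start = time.time()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    torch.cuda.synchronize()

    if rank == 0:
        print(f"{steps} steps in {time.time() - start:.2f}s")

    if distributed:
        dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Comparing the per-step times of the two runs should indicate how much of the slowdown is pure synchronization overhead (for example, NCCL all-reduce over PCIe when the two 2080 Tis have no NVLink bridge) versus something in the training pipeline itself.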