Training works well when I use one or two GPUs with batch_size = 1 or 2. However, the process gets killed when I use three or four GPUs with batch_size = 3 or 4, even though each GPU has around 12 GB of memory.
I'm not sure whether I forgot to set some parameter.
Has anyone run into this before? Any help would be appreciated.
Thanks!
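For reference, here is a minimal sketch of the kind of multi-GPU setup I have in mind, assuming PyTorch with DistributedDataParallel (the tiny model and dataset are placeholders, not this project's code). With DDP, batch_size is per GPU, so going from one GPU to four should not increase per-GPU memory; a plain "Killed" message with no CUDA OOM traceback usually means the OS ran out of host RAM, often from too many DataLoader workers per process.

```python
# Minimal DDP sketch -- an assumption, not this repo's actual training code.
# Launch with: torchrun --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    dist.init_process_group("nccl")              # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    # Placeholder data; a real dataset goes here.
    dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
    sampler = DistributedSampler(dataset)        # shards data across ranks
    loader = DataLoader(dataset,
                        batch_size=1,            # per-GPU batch size stays fixed
                        sampler=sampler,
                        num_workers=2,           # keep host-RAM use per process low
                        pin_memory=True)

    # Placeholder model; each replica lives on its own GPU.
    model = DDP(torch.nn.Linear(8, 1).cuda(local_rank),
                device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                 # reshuffle across ranks each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()      # DDP averages grads across GPUs
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```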