Replies: 1 comment
-
Hi, I also ran into this problem! Have you solved it?
-
Hi, I am working on the multi-GPU training example here with NVIDIA's PyTorch docker 23.07 (without changing anything) on a quad-GPU machine. However, I got the error below. Is there a way to resolve it?
My launch command:
My script (I only changed a few size parameters to accommodate the flash attention requirements):
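Since the launch command and script are not reproduced above, here is a minimal sketch of what a torchrun-based multi-GPU (DDP) training script with flash-attention-friendly sizes typically looks like. It assumes a torchrun launch with the NCCL backend; the model, sizes, and launch command are illustrative only, not the poster's actual values.

```python
# Minimal DDP sketch (illustrative, not the poster's actual script).
# Sizes are chosen so PyTorch's scaled_dot_product_attention can select the
# flash-attention kernel (fp16/bf16 inputs, head_dim a multiple of 8).
# Hypothetical launch command on a quad-GPU machine:
#   torchrun --standalone --nproc_per_node=4 train.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Illustrative sizes: d_model=512, nhead=8 -> head_dim=64, which satisfies
    # the usual flash-attention head-dimension constraint.
    model = nn.TransformerEncoderLayer(
        d_model=512, nhead=8, batch_first=True
    ).to(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        # Dummy batch; autocast casts to fp16 so the flash kernel is eligible.
        x = torch.randn(8, 128, 512, device=local_rank)
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            out = model(x)
            loss = out.float().pow(2).mean()
        optimizer.zero_grad(set_to_none=True)
        loss.backward()
        optimizer.step()
        if dist.get_rank() == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```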