Multi-node multi-gpu training #4
Original issue (@gxglxy):
Could you provide instructions on how to run experiments in the multi-node, multi-GPU setting without using submitit? For example, I have 2 nodes, each with 16 GPUs. How should I modify the provided scripts to reproduce the reported results?
Thanks!

Comment (maintainer):
Hi @gxglxy,
You can take a look here: https://github.com/atomicarchitects/equiformer_v2/blob/main/oc20/trainer/dist_setup.py#L14-L89
This is the part where we set up distributed training. Depending on how you launch multi-node training, you might need to use this part or this part.
Best
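For reference, below is a minimal sketch of how a two-node, 16-GPU-per-node run could be launched with torchrun instead of submitit. The entry-point name (main_oc20.py), the port, and the flag values are placeholders and assumptions, not the repo's documented interface; the distributed setup actually used for the reported results is the dist_setup.py code linked above.

```python
# Minimal sketch (assumption): launching multi-node training with torchrun
# rather than submitit. torchrun sets RANK, LOCAL_RANK, WORLD_SIZE,
# MASTER_ADDR, and MASTER_PORT for every process, so the training script
# only needs to read them and call init_process_group.
#
# Example launch commands (run one per node; script name, IP, and port are placeholders):
#   node 0: torchrun --nnodes=2 --nproc_per_node=16 --node_rank=0 \
#               --master_addr=<node0-ip> --master_port=29500 main_oc20.py ...
#   node 1: torchrun --nnodes=2 --nproc_per_node=16 --node_rank=1 \
#               --master_addr=<node0-ip> --master_port=29500 main_oc20.py ...

import os

import torch
import torch.distributed as dist


def setup_distributed():
    """Read the environment variables torchrun provides and join the process group."""
    rank = int(os.environ["RANK"])
    local_rank = int(os.environ["LOCAL_RANK"])
    world_size = int(os.environ["WORLD_SIZE"])

    torch.cuda.set_device(local_rank)  # pin this process to its own GPU
    dist.init_process_group(
        backend="nccl",  # NCCL backend for multi-GPU / multi-node training
        rank=rank,
        world_size=world_size,
    )
    return rank, local_rank, world_size


if __name__ == "__main__":
    rank, local_rank, world_size = setup_distributed()
    print(f"rank {rank}/{world_size} running on cuda:{local_rank}")
    dist.destroy_process_group()
```

With this kind of environment-variable-based initialization, the same script can also run under a scheduler such as SLURM, as long as the equivalent variables are exported before the training process starts.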