I'm confused about the MPI Allreduce operator.
The paper says that MaTEx-TensorFlow uses allreduce ops to synchronize each layer across ranks.
My understanding is that one AllReduce op reduces all the gradients generated on the same layer (and returns at the same time on every rank).
Given only the graph defined by the user, I can't tell which layer a node belongs to,
so how do I know which gradients belong to the same layer?
In other words, how do I get the ordered list of reduction ops needed to do the layer-wise all-to-all reduction?
I haven't found code in this repository that does this work.
MaTEx-TensorFlow handles all of this automatically on the back end, at the time the gradients are computed. There is no need for sample code to use the all-to-all reduction, as the user does not need to make any code changes when setting up the TensorFlow graph.
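If it helps to see the general pattern outside of MaTEx, here is a minimal sketch of a layer-wise all-reduce done by hand with mpi4py. This is not the MaTEx back end (which does the reduction inside the graph when gradients are computed); the helper name `average_gradients_across_ranks`, the `session`/`feed_dict` arguments, and the use of TensorFlow 1.x `optimizer.compute_gradients()` are assumptions made purely for illustration, and dense gradients are assumed throughout.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def average_gradients_across_ranks(grads_and_vars, session, feed_dict=None):
    """Sum each per-variable gradient across all MPI ranks, then average.

    `grads_and_vars` is the list returned by optimizer.compute_gradients();
    each entry corresponds to one trainable variable (e.g. the weights or
    biases of one layer), so reducing the entries one by one is effectively
    a layer-wise reduction.
    """
    # Evaluate all local gradients in one pass on this rank.
    local_grads = session.run([g for g, _ in grads_and_vars], feed_dict=feed_dict)

    averaged = []
    for local_grad, (_, var) in zip(local_grads, grads_and_vars):
        reduced = np.empty_like(local_grad)
        comm.Allreduce(local_grad, reduced, op=MPI.SUM)    # sum over all ranks
        averaged.append((reduced / comm.Get_size(), var))  # average
    return averaged
```

Because each entry of `grads_and_vars` already maps to exactly one trainable variable, there is no need to recover "which layer a node belongs to" from the graph: iterating over that list in order gives the ordered, per-variable (and hence per-layer) reductions.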
"Example of a MaTEx TensorFlow executing on four MPI ranks. Each rank will run a model replica and communicate
at each of the reduction points (i.e. the orange bars). Each model is initialized identically due to the broadcast operator at the
beginning (i.e the blue bar)."
https://www.researchgate.net/profile/Abhinav_Vishnu/publication/316184213/figure/fig2/AS:484334135189506@1492485664993/Fig-2-Example-of-a-MaTEx-TensorFlow-executing-on-four-MPI-ranks-Each-rank-will-run-a.png
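For the broadcast at the beginning (the blue bar in the figure), the pattern looks roughly like the sketch below. This uses plain mpi4py and TensorFlow 1.x APIs, not the MaTEx broadcast operator; `broadcast_initial_variables` is a hypothetical helper shown only to illustrate how every replica can be made to start from identical weights.

```python
import tensorflow as tf
from mpi4py import MPI

comm = MPI.COMM_WORLD

def broadcast_initial_variables(session):
    """Overwrite every trainable variable on ranks > 0 with rank 0's values,
    so that all model replicas start training from identical parameters."""
    for var in tf.trainable_variables():
        value = session.run(var)           # this rank's current value
        value = comm.bcast(value, root=0)  # rank 0's value wins everywhere
        session.run(var.assign(value))     # write it back into the graph
```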