How do I use TiedLayerSpec? I want to fine-tune Whisper large-v2 on multiple GPUs (single node). The embedding layer is used both before the transformer decoder and again after the decoder's final layer. According to the documentation, such a shared layer should be wrapped in TiedLayerSpec, but I don't understand how TiedLayerSpec works. After wrapping the embedding layer in a TiedLayerSpec, how does DeepSpeed reuse that layer at the end of the transformer decoder, and what do I need to implement to make it do so? There is very little documentation or explanation of TiedLayerSpec, so I hope someone can help. Thank you!
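For context, here is a minimal plain-PyTorch sketch of the weight sharing I'm after (the pattern that TiedLayerSpec is supposed to automate across pipeline stages): one embedding matrix maps token ids to vectors at the decoder input and is reused to project hidden states back to vocabulary logits at the output. The class and dimension names are purely illustrative, not from Whisper or DeepSpeed.

```python
import torch
import torch.nn as nn

class TinyTiedDecoder(nn.Module):
    """Toy decoder illustrating input/output embedding weight tying."""

    def __init__(self, vocab_size=100, d_model=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layer = nn.Linear(d_model, d_model)  # stand-in for the decoder blocks

    def forward(self, token_ids):
        h = self.layer(self.embed(token_ids))
        # The output projection reuses the embedding weight (tied LM head),
        # so there is only one parameter matrix to train and synchronize.
        return h @ self.embed.weight.t()

model = TinyTiedDecoder()
logits = model(torch.tensor([[1, 2, 3]]))
print(logits.shape)  # one logit vector over the vocab per input token
```

In a DeepSpeed pipeline, my understanding is that the two uses of the embedding can land on different pipeline stages, which is why the tie has to be declared via TiedLayerSpec (two specs sharing the same key) rather than by simply reusing one module as above; what I'm missing is how DeepSpeed then wires the second occurrence to act as the output projection.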