
blip2 stage1 losses
Summary: as title

Reviewed By: ankitade

Differential Revision: D48103991
Peng Chen authored and facebook-github-bot committed Sep 5, 2023
1 parent 1b4f79f commit c59e2d7
Showing 1 changed file with 7 additions and 0 deletions.
7 changes: 7 additions & 0 deletions torchmultimodal/utils/distributed.py
@@ -39,3 +39,10 @@ def gather_tensor(tensor: Tensor, backprop_in_gather: bool = True) -> List[Tensor]:
     all_gather_no_backprop(tensor_all_gpus, tensor)
     tensor_all_gpus[torch.distributed.get_rank()] = tensor
     return tensor_all_gpus
+
+
+def get_rank():
+    """get rank util for distributed training"""
+    if torch.distributed.is_available() and torch.distributed.is_initialized():
+        return torch.distributed.get_rank()
+    return 0

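The added helper follows a common guard pattern: return the process's distributed rank when a process group is up, and fall back to rank 0 so the same code path works in single-process runs. A minimal sketch of that logic, using a hypothetical stand-in object in place of `torch.distributed` so it runs without a distributed launch:

```python
# Hypothetical stand-in mimicking torch.distributed's availability checks;
# the real commit calls torch.distributed directly.
class FakeDist:
    def __init__(self, available=False, initialized=False, rank=0):
        self._available = available
        self._initialized = initialized
        self._rank = rank

    def is_available(self):
        return self._available

    def is_initialized(self):
        return self._initialized

    def get_rank(self):
        return self._rank


def get_rank(dist) -> int:
    """Same guard logic as the committed helper: rank when the
    process group is initialized, else 0 for single-process runs."""
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank()
    return 0


print(get_rank(FakeDist()))                                          # 0
print(get_rank(FakeDist(available=True, initialized=True, rank=3)))  # 3
```

In the actual library code the fallback matters because utilities like this are often called unconditionally (e.g. for logging or seeding), including in CPU or single-GPU runs where no process group was ever initialized.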