Merge pull request #593 from mlcommons/dev
Dev -> main
priyakasimbeg authored Nov 28, 2023
2 parents 7598e02 + 8e21b4e commit ca87833
Showing 2 changed files with 1 addition and 4 deletions.
CHANGELOG.md: 2 changes (1 addition & 1 deletion)
@@ -1,5 +1,5 @@
# Change Log

-## [0.1.0] - 2023-11-21
+## algoperf-benchmark-0.1.0 (2023-11-28)

First release of the AlgoPerf: Training algorithms benchmarking code.
DOCUMENTATION.md: 3 changes (0 additions & 3 deletions)
@@ -577,6 +577,3 @@ The JAX and PyTorch versions of the Criteo, FastMRI, Librispeech, OGBG, and WMT
Since we use PyTorch's [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel) implementation, there is one Python process for each device. Depending on the hardware and the settings of the cluster, running a TensorFlow input pipeline in each Python process can lead to errors, since too many threads are created in each process. See [this PR thread](https://github.com/mlcommons/algorithmic-efficiency/pull/85) for more details.
While this issue might not affect all setups, we currently implement a different strategy: we only run the TensorFlow input pipeline in one Python process (with `rank == 0`), and [broadcast](https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast) the batches to all other devices. This introduces an additional communication overhead for each batch. See the [implementation for the WMT workload](https://github.com/mlcommons/algorithmic-efficiency/blob/main/algorithmic_efficiency/workloads/wmt/wmt_pytorch/workload.py#L215-L288) as an example.

-### Pytorch Conformer CUDA OOM
-
-The Conformer PyTorch workload may run out of memory in the current state. Please set the `submission_runner.py` flag `reduce_pytorch_max_split_size` to `True` as a temporary workaround if you encounter this issue. This will set `max_split_size_mb:256`. Note that this will adversely impact the performance of the submission on this workload. See [tracking issue](https://github.com/mlcommons/algorithmic-efficiency/issues/497).
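For reference, the workaround described in the deleted note above boils down to configuring PyTorch's CUDA caching allocator before any GPU memory is allocated. A minimal sketch of that general mechanism, using the standard `PYTORCH_CUDA_ALLOC_CONF` environment variable rather than the repository's `reduce_pytorch_max_split_size` flag, might look like this:

```python
import os

# The allocator reads this setting when CUDA is first initialized, so it must
# be set before the first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:256"

import torch  # imported after setting the environment variable on purpose

if torch.cuda.is_available():
    # Blocks larger than 256 MB will no longer be split by the caching
    # allocator, which can reduce fragmentation at some cost in performance.
    x = torch.zeros(1, device="cuda")
```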
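Similarly, the rank-0 input pipeline plus broadcast strategy quoted further up in this diff could be sketched roughly as follows. This is a simplified illustration, not the repository's `wmt_pytorch` implementation; `get_batch` and `BATCH_SHAPE` are hypothetical names, and it assumes `torch.distributed.init_process_group` has already been called with one process per device:

```python
import torch
import torch.distributed as dist

BATCH_SHAPE = (128, 3, 224, 224)  # hypothetical global batch shape

def get_batch(rank: int, device: torch.device) -> torch.Tensor:
    """Run the input pipeline on rank 0 only and broadcast the batch to all ranks."""
    if rank == 0:
        # In the real workloads this tensor comes out of a tf.data pipeline;
        # random data stands in for it here.
        batch = torch.randn(BATCH_SHAPE, device=device)
    else:
        # Every other rank allocates an empty buffer of the agreed-upon shape.
        batch = torch.empty(BATCH_SHAPE, device=device)
    # Copy the batch from rank 0 to every other process in the default group.
    dist.broadcast(batch, src=0)
    return batch
```

The per-step broadcast in this pattern is the additional communication overhead the documentation excerpt mentions.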
