
Commit

docs fix
Signed-off-by: Pawel Gadzinski <pgadzinski@nvidia.com>
Pawel Gadzinski committed Sep 26, 2024
1 parent 535000f commit f72c79f
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion transformer_engine/pytorch/attention.py
@@ -7853,7 +7853,7 @@ class MultiheadAttention(torch.nn.Module):
bias : bool, default = `True`
if set to `False`, the transformer layer will not learn any additive biases.
device : Union[torch.device, str], default = "cuda"
- The device on which the parameters of the model will allocated. It is the user's
+ The device on which the parameters of the model will be allocated. It is the user's
responsibility to ensure all parameters are moved to the GPU before running the
forward pass.
qkv_format: str, default = `sbhd`
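The corrected `device` docstring describes where `MultiheadAttention` allocates its freshly created parameters. As a minimal sketch of that behavior (the sizes and input shape below are illustrative assumptions, not part of this commit):

```python
import torch
import transformer_engine.pytorch as te

# Illustrative sizes only; hidden_size must be divisible by num_heads.
hidden_size, num_heads, seq_len, batch = 1024, 16, 128, 2

# Per the docstring, parameters are allocated on `device` ("cuda" by default);
# the user is responsible for keeping everything on the GPU for the forward pass.
mha = te.MultiheadAttention(hidden_size, num_heads, bias=True, device="cuda")

# Default qkv_format is "sbhd": input shaped [seq_len, batch, hidden_size].
x = torch.randn(seq_len, batch, hidden_size, device="cuda")
out = mha(x)
```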
2 changes: 1 addition & 1 deletion transformer_engine/pytorch/transformer.py
@@ -173,7 +173,7 @@ class TransformerLayer(torch.nn.Module):
Type of activation used in MLP block.
Options are: 'gelu', 'relu', 'reglu', 'geglu', 'swiglu', 'qgelu' and 'srelu'.
device : Union[torch.device, str], default = "cuda"
- The device on which the parameters of the model will allocated. It is the user's
+ The device on which the parameters of the model will be allocated. It is the user's
responsibility to ensure all parameters are moved to the GPU before running the
forward pass.
attn_input_format: {'sbhd', 'bshd'}, default = 'sbhd'
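The same `device` semantics apply to `TransformerLayer`, alongside the `activation` and `attn_input_format` options listed in the docstring. A minimal sketch, again with assumed illustrative sizes:

```python
import torch
import transformer_engine.pytorch as te

# Illustrative sizes only.
hidden_size, ffn_hidden_size, num_heads = 1024, 4096, 16

layer = te.TransformerLayer(
    hidden_size,
    ffn_hidden_size,
    num_heads,
    activation="gelu",          # one of the options listed in the docstring
    attn_input_format="sbhd",   # expects [seq_len, batch, hidden_size] inputs
    device="cuda",              # parameters allocated directly on the GPU
)

x = torch.randn(128, 2, hidden_size, device="cuda")
out = layer(x)
```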
