create configuration file for MaMMUT training (#521)
Summary:
Pull Request resolved: #521

Mostly based on the original CoCa configuration and https://github.com/lucidrains/MaMMUT-pytorch.

Update the logic for loading checkpoints into the MaMMUT text decoder as well.

Differential Revision: D52891614

Privacy Context Container: 303860477774201

fbshipit-source-id: 192a1826fd59a80bf99e8545408e19938069a599
zhangtemplar authored and facebook-github-bot committed Feb 16, 2024
1 parent 314d3de commit 2cbab1f
Showing 1 changed file with 1 addition and 4 deletions: torchmultimodal/modules/layers/transformer.py
@@ -412,10 +412,7 @@ def _forward_prenorm(
         self_attn_output = attn_output + hidden_states

         # Optional cross-attention
-        if self.use_cross_attention:
-            assert (
-                encoder_hidden_states is not None
-            ), "encoder_hidden_states must be provided for cross attention"
+        if self.use_cross_attention and encoder_hidden_states is not None:
             assert hasattr(
                 self, "cross_attention_layernorm"
             ), "Cross-attention layernorm not initialized"
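For orientation, here is a minimal, self-contained sketch of what the new guard means in practice. The `ToyDecoderLayer` below is hypothetical (it is not TorchMultimodal's actual layer class); it only mirrors the `use_cross_attention and encoder_hidden_states is not None` check from the diff to show how a single layer can now serve both a multimodal pass and a text-only decoder pass, which is what the MaMMUT text decoder needs.

```python
import torch
from torch import nn


class ToyDecoderLayer(nn.Module):
    """Hypothetical stand-in for a pre-norm decoder layer with optional cross-attention."""

    def __init__(self, dim: int = 64, use_cross_attention: bool = True) -> None:
        super().__init__()
        self.use_cross_attention = use_cross_attention
        self.self_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cross_attention_layernorm = nn.LayerNorm(dim)

    def forward(self, hidden_states, encoder_hidden_states=None):
        # Self-attention with a residual connection.
        attn_output, _ = self.self_attn(hidden_states, hidden_states, hidden_states)
        hidden_states = hidden_states + attn_output
        # Same guard as in the diff: cross-attention runs only when the flag is set
        # AND encoder states are passed; otherwise it is skipped instead of
        # asserting that encoder_hidden_states must be provided.
        if self.use_cross_attention and encoder_hidden_states is not None:
            normed = self.cross_attention_layernorm(hidden_states)
            cross_output, _ = self.cross_attn(
                normed, encoder_hidden_states, encoder_hidden_states
            )
            hidden_states = hidden_states + cross_output
        return hidden_states


layer = ToyDecoderLayer()
text_tokens = torch.randn(2, 16, 64)   # (batch, seq_len, dim)
image_tokens = torch.randn(2, 49, 64)  # e.g. vision encoder output

multimodal_out = layer(text_tokens, encoder_hidden_states=image_tokens)  # cross-attention runs
text_only_out = layer(text_tokens)  # cross-attention skipped; pure self-attention path
```

Before this change, calling such a layer without `encoder_hidden_states` while `use_cross_attention=True` would trip the assertion; now the cross-attention block is simply bypassed.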
