create configuration file for MaMMUT training (#521)
Summary:

Mostly based on the original CoCa implementation and https://github.com/lucidrains/MaMMUT-pytorch

Also updates the checkpoint-loading logic for the MaMMUT text decoder.
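(The checkpoint-loading change itself is not shown in the diff below. As a rough, hypothetical illustration only — `load_text_decoder_checkpoint` and the `text_decoder.` key prefix are assumptions, not the repo's API — loading a shared checkpoint into a text decoder that may be missing cross-attention weights is commonly done by remapping keys and using non-strict loading:)

```python
# Hypothetical sketch of checkpoint loading with key remapping; the actual
# MaMMUT loading logic in this commit is not shown in the diff below.
import torch
from torch import nn


def load_text_decoder_checkpoint(decoder: nn.Module, ckpt_path: str) -> None:
    state_dict = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints nest decoder weights under a prefix; strip it if present
    # (the "text_decoder." prefix here is an assumption for illustration).
    state_dict = {k.removeprefix("text_decoder."): v for k, v in state_dict.items()}
    # strict=False tolerates keys present in only one of the two models,
    # e.g. cross-attention weights absent from a text-only checkpoint.
    missing, unexpected = decoder.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {missing}")
    print(f"unexpected keys: {unexpected}")
```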

Differential Revision: D52891614

Privacy Context Container: 303860477774201
zhangtemplar authored and facebook-github-bot committed Feb 8, 2024
1 parent 226778e commit e41ac03
Showing 1 changed file with 1 addition and 4 deletions.
torchmultimodal/modules/layers/transformer.py (1 addition, 4 deletions)
@@ -413,10 +413,7 @@ def _forward_prenorm(
         self_attn_output = attn_output + hidden_states
 
         # Optional cross-attention
-        if self.use_cross_attention:
-            assert (
-                encoder_hidden_states is not None
-            ), "encoder_hidden_states must be provided for cross attention"
+        if self.use_cross_attention and encoder_hidden_states is not None:
             assert hasattr(
                 self, "cross_attention_layernorm"
             ), "Cross-attention layernorm not initialized"
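For context, a minimal, hypothetical sketch (not the actual TorchMultiModal layer; `DecoderLayerSketch` and its members are made up for illustration) of the pattern the new condition enables: with `if self.use_cross_attention and encoder_hidden_states is not None:`, the same decoder layer can be called with or without encoder states instead of asserting that they are always present, which fits MaMMUT's reuse of one text decoder for both a text-only pass and a cross-attending multimodal pass.

```python
# Minimal sketch illustrating the behavioral change in the diff above:
# cross-attention is applied only when the layer is configured for it AND
# encoder states are actually passed in.
from typing import Optional

import torch
from torch import nn


class DecoderLayerSketch(nn.Module):
    def __init__(self, dim: int, num_heads: int, use_cross_attention: bool = True):
        super().__init__()
        self.use_cross_attention = use_cross_attention
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn_layernorm = nn.LayerNorm(dim)
        if use_cross_attention:
            self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.cross_attention_layernorm = nn.LayerNorm(dim)

    def forward(
        self,
        hidden_states: torch.Tensor,
        encoder_hidden_states: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        # Pre-norm self-attention with residual connection
        normed = self.self_attn_layernorm(hidden_states)
        attn_output, _ = self.self_attn(normed, normed, normed)
        self_attn_output = attn_output + hidden_states

        # Optional cross-attention: silently skipped when no encoder states
        # are given, so the same layer can run a text-only pass.
        if self.use_cross_attention and encoder_hidden_states is not None:
            normed = self.cross_attention_layernorm(self_attn_output)
            cross_output, _ = self.cross_attn(
                normed, encoder_hidden_states, encoder_hidden_states
            )
            return cross_output + self_attn_output
        return self_attn_output


# Same layer used for a text-only pass and a multimodal pass:
layer = DecoderLayerSketch(dim=64, num_heads=4)
text = torch.randn(2, 8, 64)
image = torch.randn(2, 16, 64)
unimodal_out = layer(text)                                  # cross-attention skipped
multimodal_out = layer(text, encoder_hidden_states=image)   # cross-attention applied
```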
