create configuration file for MaMMUT training (#521)
Summary:

Mostly based on the original CoCa implementation and https://github.com/lucidrains/MaMMUT-pytorch

Also updates the checkpoint-loading logic for the MaMMUT text decoder.

Differential Revision: D52891614

Privacy Context Container: 303860477774201
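
The summary mentions updating checkpoint loading for the MaMMUT text decoder but does not spell out the change. Purely as a hedged illustration (names and structure here are hypothetical, not taken from this commit), the sketch below shows one common pattern for loading a text-decoder checkpoint whose parameter set may not match the module exactly, e.g. when cross-attention weights are absent or differently prefixed.

# Hypothetical sketch, not code from this commit: tolerant checkpoint loading
# for a text decoder whose keys may not match the checkpoint exactly.
import torch
import torch.nn as nn

def load_text_decoder_checkpoint(
    decoder: nn.Module, ckpt_path: str, prefix: str = "text_decoder."
) -> None:
    state = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints nest the weights under a "model" key; adjust as needed.
    state = state.get("model", state)
    # Keep only keys belonging to the text decoder and strip the assumed prefix.
    filtered = {k[len(prefix):]: v for k, v in state.items() if k.startswith(prefix)}
    # strict=False tolerates parameters (e.g. cross-attention weights) that are
    # present in the module but missing from the checkpoint, or vice versa.
    missing, unexpected = decoder.load_state_dict(filtered, strict=False)
    print(f"missing keys: {missing}")
    print(f"unexpected keys: {unexpected}")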
zhangtemplar authored and facebook-github-bot committed Feb 13, 2024
1 parent 7a2a4bd commit 3b6375f
Showing 1 changed file with 1 addition and 4 deletions: torchmultimodal/modules/layers/transformer.py
@@ -413,10 +413,7 @@ def _forward_prenorm(
         self_attn_output = attn_output + hidden_states

         # Optional cross-attention
-        if self.use_cross_attention:
-            assert (
-                encoder_hidden_states is not None
-            ), "encoder_hidden_states must be provided for cross attention"
+        if self.use_cross_attention and encoder_hidden_states is not None:
             assert hasattr(
                 self, "cross_attention_layernorm"
             ), "Cross-attention layernorm not initialized"
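
To make the effect of the new condition concrete, here is a minimal, self-contained sketch (assumed names, not the actual torchmultimodal layer) of the same pattern: cross-attention runs only when encoder states are passed, so a text-only call no longer trips an assertion and the same decoder can serve both unimodal and multimodal passes.

# Standalone sketch of the control-flow change; not the torchmultimodal API.
from typing import Optional
import torch
import torch.nn as nn

class TinyDecoderLayer(nn.Module):
    def __init__(self, dim: int, use_cross_attention: bool = True) -> None:
        super().__init__()
        self.use_cross_attention = use_cross_attention
        self.self_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        if use_cross_attention:
            self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(
        self,
        hidden_states: torch.Tensor,
        encoder_hidden_states: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        attn_out, _ = self.self_attn(hidden_states, hidden_states, hidden_states)
        x = attn_out + hidden_states
        # New behavior: cross-attention is applied only when encoder states are given;
        # previously a missing encoder_hidden_states raised an AssertionError.
        if self.use_cross_attention and encoder_hidden_states is not None:
            cross_out, _ = self.cross_attn(x, encoder_hidden_states, encoder_hidden_states)
            x = cross_out + x
        return x

layer = TinyDecoderLayer(dim=32)
text = torch.randn(2, 5, 32)
image = torch.randn(2, 7, 32)
print(layer(text).shape)         # text-only pass: cross-attention skipped
print(layer(text, image).shape)  # multimodal pass: cross-attention applied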
