Commit 629c862

Updates usage of get_specs API to accept transformer_config instead of transformer_config.num_moe_experts

 NVIDIA/NeMo#9035
terrykong committed Sep 16, 2024
1 parent 745d148 commit 629c862
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion nemo_aligner/models/nlp/gpt/megatron_gpt_reward_model.py
@@ -92,7 +92,7 @@ def model_provider_func(self, pre_process, post_process):
         model = GPTRewardModel(
             config=self.transformer_config,
-            transformer_layer_spec=get_specs(self.spec_name, self.transformer_config.num_moe_experts),
+            transformer_layer_spec=get_specs(self.spec_name, self.transformer_config),
             vocab_size=self.cfg.get("override_vocab_size", self.padded_vocab_size),
             max_sequence_length=self.cfg.get("encoder_seq_length", 512),
             pre_process=pre_process,
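
For context, a minimal sketch of the call-contract change this commit adapts to. The get_specs body below is hypothetical (the real function lives in NVIDIA/NeMo, see NVIDIA/NeMo#9035, and returns a transformer layer spec); it only illustrates that the callee now derives num_moe_experts from the full transformer_config rather than receiving the value pre-extracted, so future config fields can flow through without touching call sites like the one patched here.

# Hypothetical sketch of the updated get_specs contract; not the actual
# NeMo implementation (see NVIDIA/NeMo#9035 for the real one).
from dataclasses import dataclass
from typing import Optional


@dataclass
class TransformerConfig:  # stand-in for megatron.core's TransformerConfig
    num_moe_experts: Optional[int] = None


def get_specs(spec_name: str, transformer_config: Optional[TransformerConfig] = None):
    """Assumed shape of the new API: whole config in, layer spec out."""
    # The callee unpacks num_moe_experts itself, instead of the caller doing it.
    num_moe_experts = (
        transformer_config.num_moe_experts if transformer_config else None
    )
    if num_moe_experts:
        return f"{spec_name} (MoE, {num_moe_experts} experts)"  # placeholder spec
    return f"{spec_name} (dense)"  # placeholder spec


# Old call site: get_specs(spec_name, transformer_config.num_moe_experts)
# New call site: get_specs(spec_name, transformer_config)
cfg = TransformerConfig(num_moe_experts=8)
print(get_specs("megatron_gpt_local_spec", cfg))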
