Replies: 2 comments
- I'm running into the same issue.
- I failed to start training because the Llama3Tokenizer seems to conflict with what Megatron core defines...
Your question
The error message during execution of the llama3 training .sh script is as follows:
It seems there is an issue with the Megatron-format checkpoint. The checkpoint_args are:
checkpoint_args:Namespace(num_layers=32, encoder_num_layers=32, decoder_num_layers=None, hidden_size=4096, ffn_hidden_size=14336, num_attention_heads=32, kv_channels=128, group_query_attention=True, num_query_groups=8, max_position_embeddings=8192, position_embedding_type='rope', use_rotary_position_embeddings=True, rotary_base=10000, rotary_percent=1.0, rotary_interleaved=False, rotary_seq_len_interpolation_factor=None, add_position_embedding=False, make_vocab_size_divisible_by=16032, normalization='RMSNorm', norm_epsilon=1e-05, apply_layernorm_1p=False, apply_residual_connection_post_layernorm=False, openai_gelu=False, squared_relu=False, swiglu=True, onnx_safe=None, bert_binary_head=True, untie_embeddings_and_output_weights=True, attention_dropout=0.1, hidden_dropout=0.1, weight_decay=0.01, start_weight_decay=0.01, end_weight_decay=0.01, weight_decay_incr_style='constant', clip_grad=1.0, adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-08, sgd_momentum=0.9, micro_batch_size=1, global_batch_size=1024, rampup_batch_size=None, recompute_granularity=None, check_for_nan_in_loss_and_grad=True, distribute_saved_activations=False, recompute_method=None, recompute_num_layers=None, clone_scatter_output_in_embedding=True, profile=False, profile_step_start=10, profile_step_end=12, profile_ranks=[0], tp_comm_overlap=False, tp_comm_overlap_cfg=None, tp_comm_overlap_ag=True, tp_comm_overlap_rs=True, tp_comm_overlap_rs_dgrad=False, tp_comm_bulk_dgrad=True, tp_comm_bulk_wgrad=True, use_cpu_initialization=True, empty_unused_memory_level=0, deterministic_mode=False, check_weight_hash_across_dp_replicas_interval=None, calculate_per_token_loss=False, train_iters=None, train_samples=None, log_interval=100, exit_interval=None, exit_duration_in_mins=None, exit_signal_handler=False, tensorboard_dir=None, masked_softmax_fusion=False, bias_gelu_fusion=False, bias_swiglu_fusion=True, bias_dropout_fusion=False, apply_rope_fusion=True, cross_entropy_loss_fusion=False, use_flash_attn=False, add_bias_linear=False, add_qkv_bias=False, optimizer='adam', dataloader_type='single', async_tensor_model_parallel_allreduce=False, no_persist_layer_norm=False, sequence_parallel=False, gradient_accumulation_fusion=True, deprecated_use_mcore_models=False, use_legacy_models=False, manual_gc=False, manual_gc_interval=0, manual_gc_eval=True, tp_comm_split_ag=True, tp_comm_split_rs=True, seed=1234, data_parallel_random_init=False, init_method_std=0.02, init_method_xavier_uniform=False, lr=None, lr_decay_style='linear', lr_wsd_decay_style='exponential', lr_decay_iters=None, lr_decay_samples=None, lr_wsd_decay_samples=None, lr_wsd_decay_iters=None, lr_warmup_fraction=None, lr_warmup_iters=0, lr_warmup_samples=0, lr_warmup_init=0.0, min_lr=0.0, override_opt_param_scheduler=False, use_checkpoint_opt_param_scheduler=False, decoupled_lr=None, decoupled_min_lr=None, save='/workspace/wangws/Megatron-LM/models/llama-3-8b-hf-v0.3-tp8-pp1/', save_interval=1, no_save_optim=True, no_save_rng=True, load=None, no_load_optim=True, no_load_rng=True, finetune=False, pretrained_checkpoint=None, ckpt_step=None, perform_initialization=False, use_checkpoint_args=False, exit_on_missing_checkpoint=False, use_dist_ckpt=False, auto_detect_ckpt_format=False, dist_ckpt_format='torch_dist', ckpt_fully_parallel_save_deprecated=False, ckpt_fully_parallel_save=True, async_save=None, ckpt_fully_parallel_load=False, ckpt_assume_constant_structure=False, fp16=False, bf16=False, loss_scale=None, initial_loss_scale=4294967296, min_loss_scale=1.0, loss_scale_window=1000, 
hysteresis=2, fp32_residual_connection=False, apply_query_key_layer_scaling=False, attention_softmax_in_fp32=False, accumulate_allreduce_grads_in_fp32=False, fp16_lm_cross_entropy=False, tensor_model_parallel_size=8, pipeline_model_parallel_size=1, pipeline_model_parallel_split_rank=None, num_layers_per_virtual_pipeline_stage=None, overlap_p2p_comm=False, distributed_backend='nccl', distributed_timeout_minutes=10, overlap_grad_reduce=False, delay_grad_reduce=True, ddp_bucket_size=None, ddp_average_in_collective=False, overlap_param_gather=False, delay_param_gather=False, scatter_gather_tensors_in_pipeline=True, use_ring_exchange_p2p=False, local_rank=None, lazy_mpu_init=None, standalone_embedding_stage=False, use_distributed_optimizer=False, context_parallel_size=1, nccl_communicator_config_path=None, use_tp_pp_dp_mapping=False, eval_iters=100, eval_interval=1000, test_mode=False, skip_train=False, data_path=[], split=None, train_data_path=None, valid_data_path=None, test_data_path=None, data_cache_path=None, mmap_bin_files=True, mock_data=False, vocab_size=128256, vocab_file=None, merge_file=None, vocab_extra_ids=0, seq_length=4096, encoder_seq_length=4096, decoder_seq_length=None, retriever_seq_length=256, sample_rate=1.0, mask_prob=0.15, short_seq_prob=0.1, num_workers=2, tokenizer_type='Llama3Tokenizer', tokenizer_model=None, reset_position_ids=False, reset_attention_mask=False, eod_mask_loss=False, create_attention_mask_in_dataloader=True, num_dataset_builder_threads=1, adlr_autoresume=False, adlr_autoresume_interval=1000, ict_head_size=None, biencoder_projection_dim=0, biencoder_shared_query_context_model=False, ict_load=None, bert_load=None, titles_data_path=None, query_in_block_prob=0.1, use_one_sent_docs=False, evidence_data_path=None, retriever_report_topk_accuracies=[], retriever_score_scaling=False, block_data_path=None, embedding_path=None, indexer_batch_size=128, indexer_log_interval=1000, num_classes=1000, img_h=224, img_w=224, num_channels=3, patch_dim=16, classes_fraction=1.0, data_per_class_fraction=1.0, data_sharding=True, head_lr_mult=1.0, vision_pretraining=False, vision_pretraining_type='classify', vision_backbone_type='vit', swin_backbone_type='tiny', mask_type='random', mask_factor=1.0, iter_per_epoch=1250, dino_local_img_size=96, dino_local_crops_number=10, dino_head_hidden_size=2048, dino_bottleneck_size=256, dino_freeze_last_layer=1, dino_norm_last_layer=False, dino_warmup_teacher_temp=0.04, dino_teacher_temp=0.07, dino_warmup_teacher_temp_epochs=30, qk_layernorm=False, expert_model_parallel_size=1, num_experts=None, moe_router_load_balancing_type='aux_loss', moe_router_topk=2, moe_grouped_gemm=False, moe_aux_loss_coeff=0.0, moe_z_loss_coeff=None, moe_input_jitter_eps=None, moe_token_dispatcher_type='allgather', moe_per_layer_logging=False, moe_expert_capacity_factor=None, moe_pad_expert_input_to_capacity=False, moe_token_drop_policy='probs', moe_layer_recompute=False, moe_extended_tp=False, log_params_norm=False, log_num_zeros_in_grad=False, log_throughput=False, log_progress=False, timing_log_level=0, barrier_with_L1_time=True, timing_log_option='minmax', tensorboard_log_interval=1, tensorboard_queue_size=1000, log_timers_to_tensorboard=False, log_batch_size_to_tensorboard=False, log_learning_rate_to_tensorboard=True, log_loss_scale_to_tensorboard=True, log_validation_ppl_to_tensorboard=False, log_memory_to_tensorboard=False, log_world_size_to_tensorboard=False, wandb_project='', wandb_exp_name='', wandb_save_dir='', enable_one_logger=False, 
one_logger_project='e2e-tracking', one_logger_entity='hwinf_dcm', one_logger_run_name=None, logging_level=None, log_straggler=False, disable_straggler_on_startup=False, straggler_ctrlr_port=65535, straggler_minmax_count=1, inference_batch_times_seqlen_threshold=512, max_tokens_to_oom=12000, output_bert_embeddings=False, bert_embedder_type='megatron', fp8=None, fp8_margin=0, fp8_interval=1, fp8_amax_history_len=1, fp8_amax_compute_algo='most_recent', fp8_wgrad=True, transformer_impl='transformer_engine', retro_project_dir=None, retro_add_retriever=False, retro_cyclic_train_iters=None, retro_encoder_layers=2, retro_encoder_hidden_dropout=0.1, retro_encoder_attention_dropout=0.1, retro_num_neighbors=2, retro_num_retrieved_chunks=2, retro_attention_gate=1, retro_verify_neighbor_count=True, spec=None, hybrid_attention_ratio=0.0, hybrid_mlp_ratio=0.0, hybrid_override_pattern=None, yaml_cfg=None, rank=0, world_size=8, transformer_pipeline_model_parallel_size=1, data_parallel_size=1, virtual_pipeline_model_parallel_size=None, params_dtype=torch.float32, consumed_train_samples=0, consumed_valid_samples=0, variable_seq_lengths=False, blendable_index_path=None, model_type=<ModelType.encoder_or_decoder: 1>, padded_vocab_size=128256)
The error comes from line 78 of Megatron-LM/megatron/training/checkpointing.py, which compares the input args against the args stored in the checkpoint:
After commenting out that check, the training script runs, but the train_loss at the beginning is too high, as follows:
[2024-07-17 07:05:35] iteration 2/ 2000 | consumed samples: 16 | elapsed time per iteration (ms): 17138.0 | learning rate: 6.250000E-07 | global batch size: 8 | lm loss: 8.240298E+00 | loss scale: 1.0 | grad norm: 105.105 | number of skipped iterations: 0 | number of nan iterations: 0 |
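For context, the check near that line compares a handful of fields between the launch arguments and the args stored in the checkpoint, and raises on the first mismatch. Below is a minimal sketch of that kind of comparison, assuming a helper along these lines (the function name and field list are illustrative, not the actual Megatron-LM source); commenting the call out only silences the disagreement between the converted checkpoint and the launch flags, it does not resolve it.

```python
from argparse import Namespace

# Minimal sketch (illustrative, not the actual Megatron-LM source) of the kind
# of consistency check performed in checkpointing.py: each field recorded in
# the converted checkpoint's args must match the training command-line args.
def check_checkpoint_args(checkpoint_args, args):
    def _compare(name):
        old = getattr(checkpoint_args, name)
        new = getattr(args, name)
        assert old == new, (
            f"{name} mismatch: checkpoint has {old!r}, command line has {new!r}"
        )

    for field in ("num_layers", "hidden_size", "num_attention_heads",
                  "position_embedding_type", "add_position_embedding",
                  "max_position_embeddings", "padded_vocab_size"):
        _compare(field)

# Example: a single disagreeing field reproduces this class of failure.
ckpt = Namespace(num_layers=32, hidden_size=4096, num_attention_heads=32,
                 position_embedding_type='rope', add_position_embedding=False,
                 max_position_embeddings=8192, padded_vocab_size=128256)
cli = Namespace(**{**vars(ckpt), 'max_position_embeddings': 4096})
try:
    check_checkpoint_args(ckpt, cli)
except AssertionError as e:
    print(e)  # max_position_embeddings mismatch: checkpoint has 8192, command line has 4096
```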
The checkpoint conversion .sh script is as follows:
I think the cause is a missing position_embedding; there may have been an issue in my checkpoint conversion.
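One way to see whether the conversion actually recorded the expected position-embedding settings is to inspect the args saved inside the converted checkpoint. A rough sketch, assuming the default torch checkpoint layout; the iter_*/mp_rank_00/model_optim_rng.pt path and the "args" key are assumptions about my setup, so adjust them to whatever the converter actually wrote:

```python
import torch

# Assumed location of the converted checkpoint; adjust the iteration folder
# and rank directory to match what the converter produced.
ckpt_path = ("/workspace/wangws/Megatron-LM/models/llama-3-8b-hf-v0.3-tp8-pp1/"
             "iter_0000001/mp_rank_00/model_optim_rng.pt")

state = torch.load(ckpt_path, map_location="cpu")
saved_args = state["args"]  # argparse Namespace written at save time (assumed key)

# For Llama 3 8B these should come out as 'rope' / False / 8192 / 128256.
print("position_embedding_type:", saved_args.position_embedding_type)
print("add_position_embedding: ", saved_args.add_position_embedding)
print("max_position_embeddings:", saved_args.max_position_embeddings)
print("padded_vocab_size:      ", saved_args.padded_vocab_size)
```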
I use 8 RTX 4090 GPUs. The Docker image is nvcr.io/nvidia/pytorch:24.06-py3.