Reminder
System Info
llamafactory version: 0.9.1.dev0

Reproduction
llamafactory-cli train
The configuration file used with llamafactory-cli train is as follows:

### model
model_name_or_path: meta-llama/Llama-2-7b-hf

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### dataset
template: llama2
dataset: **
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/**
logging_steps: 10
save_steps: 1000
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 4
gradient_accumulation_steps: 4
learning_rate: 5.0e-5
num_train_epochs: 100.0
lr_scheduler_type: cosine
warmup_ratio: 0.0
ddp_timeout: 180000000
fp16: true
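For reference, the CLI run above is assumed to have been launched along these lines; the YAML file name and the GPU selection are guesses, since the exact invocation is not included in the report:

# Assumed invocation of the YAML config above (llama2_lora_sft.yaml is a hypothetical file name).
# If CUDA_VISIBLE_DEVICES was not set here, llamafactory-cli may have seen a different
# set of GPUs than the torchrun command below.
CUDA_VISIBLE_DEVICES=3,4,5,6 llamafactory-cli train llama2_lora_sft.yaml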
The time taken for training was:
torchrun
The parameters used with torchrun were set as follows:

CUDA_VISIBLE_DEVICES=3,4,5,6 torchrun --standalone --nnodes=1 --nproc-per-node=4 new_LLMs/LLaMA/src/train_bash.py \
    --stage sft \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --do_train \
    --dataset_dir LLMs/data \
    --dataset ** \
    --template llama2 \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir **/checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 100.0 \
    --plot_loss \
    --fp16
The running time was:
Expected behavior
Why do the two runs differ in both training time and results? Is there a problem with the settings somewhere?
Thanks!
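For what it's worth, one possible source of the gap, under the assumption (not confirmed above) that the llamafactory-cli run did not see the same four GPUs as the torchrun run: the effective batch size is per_device_train_batch_size x gradient_accumulation_steps x num_gpus, i.e. 4 x 4 x 1 = 16 on a single GPU versus 4 x 4 x 4 = 64 on four GPUs, which changes both the number of optimizer steps per epoch (wall-clock time) and the optimization trajectory (final results).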
Others
No response