Mismatch with epoch when using gradient_accumulation #31677
Comments
These are the intermediate values that I logged.
Before change: total_epoch -> 22 (which is not expected!)
After change: total_epoch -> 24
@muellerzr Hi, what do you think about the suggested modification? Is there anything to be concerned about? If not, I can open a PR!
Gentle ping @muellerzr
@muellerzr If you don't have the bandwidth, I can try to open a PR and get a review from you!
Thanks for the report @SangbumChoi! I see your point.
@SunMarc Thanks for the reply.
This is indeed a bit confusing. Can you share a minimal reproducer @SangbumChoi? That would be very helpful in order to investigate this further!
System Info
transformers version: 4.43.0.dev0
Who can help?
@muellerzr @SunMarc
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
This issue is about a mismatch between the configured number of epochs and the number of epochs actually trained.
Even though I set 24 epochs in TrainingArguments and set gradient_accumulation_steps to 2, there is a mismatch in how max_steps is calculated when it is not set explicitly.
transformers/src/transformers/trainer.py
Line 1983 in 1c68f2c
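For illustration, here is a minimal, self-contained sketch of the arithmetic involved. The dataloader length below is hypothetical (chosen only to make the rounding effect visible), and the assumption that gradient accumulation carries over epoch boundaries is my reading of the training loop, not something stated above:

```python
import math

# Hypothetical numbers for illustration only (not taken from the report):
# 23 batches per epoch, gradient_accumulation_steps=2, num_train_epochs=24.
len_dataloader = 23
gradient_accumulation_steps = 2
num_train_epochs = 24

# Roughly what trainer.py does today when max_steps is not set (floor division):
num_update_steps_per_epoch = max(len_dataloader // gradient_accumulation_steps, 1)  # 11
max_steps = math.ceil(num_train_epochs * num_update_steps_per_epoch)                # 264

# If accumulation spans epoch boundaries, covering 24 full epochs needs about
# ceil(24 * 23 / 2) = 276 optimizer updates, so the loop hits max_steps after
# roughly 264 * 2 / 23 ~= 22.96 epochs instead of the configured 24.
updates_needed = math.ceil(num_train_epochs * len_dataloader / gradient_accumulation_steps)
epochs_reached = max_steps * gradient_accumulation_steps / len_dataloader
print(max_steps, updates_needed, round(epochs_reached, 2))  # 264 276 22.96
```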
Expected behavior
transformers/src/transformers/trainer.py
Line 1975 in 1c68f2c
If we just use normal division it solves the issue. Is there any specific reason that num_update_steps_per_epoch should remain an integer?
num_update_steps_per_epoch = len_dataloader / args.gradient_accumulation_steps
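A small sketch of what the proposed change would give for the same hypothetical numbers used above (this is just the arithmetic, not the exact code in trainer.py):

```python
import math

# Same hypothetical numbers as in the sketch above.
len_dataloader = 23
gradient_accumulation_steps = 2
num_train_epochs = 24

# Proposed: keep the per-epoch step count as a float and round only once,
# when computing max_steps.
num_update_steps_per_epoch = len_dataloader / gradient_accumulation_steps   # 11.5
max_steps = math.ceil(num_train_epochs * num_update_steps_per_epoch)        # 276
print(max_steps)  # enough optimizer updates to cover all 24 configured epochs
```

One thing a PR would presumably need to check is whether other places in the trainer rely on num_update_steps_per_epoch being an integer.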