If the final training iteration falls at the end of the data file, its batch may contain fewer samples than the expected batch size (16 or 32), so that iteration's time will be much shorter than a full-batch iteration (the batch may hold only half the expected samples, or fewer). As a result, this script reports incorrect performance numbers.
The script below uses the final training iteration's time to calculate training throughput:
https://github.com/IntelAI/models/blob/cdd842a33eb9d402ff18bfb79bd106ae132a8e99/quickstart/language_modeling/pytorch/bert_large/training/gpu/bf16_training_plain_format.sh#L57
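To illustrate how a partial final batch skews the result (the numbers below are hypothetical, not taken from the script), dividing the nominal batch size by a short partial-batch iteration time inflates the reported throughput:

```python
batch_size = 16

# Hypothetical timings: a full batch of 16 takes 0.50 s; the final
# partial batch holds only 8 samples and finishes in roughly half the time.
full_iter_time = 0.50
partial_iter_time = 0.25

# If throughput is computed as (expected batch size / final iteration time),
# timing the partial final iteration doubles the reported number.
reported = batch_size / partial_iter_time  # 64.0 samples/s (wrong)
actual = batch_size / full_iter_time       # 32.0 samples/s (true rate)
print(reported, actual)
```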
I suggest setting the `drop_last` parameter in the training code below, so that the partial final batch of each data file is dropped:
https://github.com/IntelAI/models/blob/cdd842a33eb9d402ff18bfb79bd106ae132a8e99/models/language_modeling/pytorch/bert_large/training/gpu/run_pretrain_mlperf.py#L904
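A minimal sketch of the fix, assuming the training loop builds a standard `torch.utils.data.DataLoader` (the toy dataset of 70 samples below is only for illustration): with `drop_last=True`, the loader discards the partial final batch, so every timed iteration processes a full batch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 70 samples with batch size 16 leaves a partial final batch of 6.
dataset = TensorDataset(torch.arange(70).unsqueeze(1))

# Default behavior: the last batch is smaller than batch_size.
loader = DataLoader(dataset, batch_size=16, drop_last=False)
print([len(b[0]) for b in loader])  # [16, 16, 16, 16, 6]

# With drop_last=True the partial final batch is discarded, so the timing
# of the last iteration reflects a full batch of 16 samples.
loader = DataLoader(dataset, batch_size=16, drop_last=True)
print([len(b[0]) for b in loader])  # [16, 16, 16, 16]
```

This keeps the per-iteration work uniform, which is what the throughput calculation in the benchmark script implicitly assumes.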