
Training is much slower than you described in the paper. #8

Open
zhaone opened this issue Jan 3, 2021 · 4 comments

@zhaone

zhaone commented Jan 3, 2021

Hi, I recently tried to reproduce your results. I can reach the metrics you described in the paper, but training takes much longer (almost 3 days) than you described in the paper (less than 12 hours).

Environment:

  • 4 * Titan X (same as paper)
  • batch size 128 (4*32, same as paper)
  • changed the distributed training framework from Horovod to PyTorch DDP, since Horovod is really hard to set up (even with the official Horovod Docker image I still got errors I couldn't resolve)

Did I do something wrong? I'm sure that I'm using DDP correctly, and I'm also sure that the training-speed bottleneck is the optimization step (not IO or anything else). Has anyone else met the same problem?
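For anyone who wants to run the same check, here is a rough, generic way (not this repo's code) to split an epoch's wall time into time spent waiting on the DataLoader vs. time spent in forward/backward/step; the dummy data and model below are placeholders for the real ones:

import time
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-ins for the real dataset/model, just to show the pattern.
loader = DataLoader(TensorDataset(torch.randn(4096, 64), torch.randn(4096, 2)),
                    batch_size=128, num_workers=4)
model = nn.Linear(64, 2).cuda()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

data_time, compute_time = 0.0, 0.0
end = time.time()
for x, y in loader:
    t0 = time.time()
    data_time += t0 - end                    # time spent waiting for the next batch

    x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    torch.cuda.synchronize()                 # wait for the GPU so the timing is real
    end = time.time()
    compute_time += end - t0

print(f"data-loading {data_time:.1f}s, compute {compute_time:.1f}s")

If compute time dominates, the slowdown is in the optimization itself; if data-loading time dominates, it is IO/preprocessing.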

@MasterIzumi

@zhaone Hi, have you found the reason?
I just tried to train the network with the default settings, and I also found that training is roughly twice as slow as the paper describes. It took 14 hours for 20 epochs (vs. 11.5 hours for 36 epochs in the paper).

Here's my environment: 4 * Titan RTX, batch size 128 (4*32), distributed training using Horovod.

Btw, one more thing I noticed: my log shows one epoch takes over 2440 seconds, while it is ~900 in the provided log file, and in #2 they report ~1200 (4 * RTX 2080 Ti). But the evaluation results are similar.

Here's my training log:

Epoch 20.000, lr 0.00100, time 2440.21 
loss 0.5037 0.2001 0.3036, ade1 1.6102, fde1 3.5928, ade 0.7662, fde 1.1754

Provided log file:

Epoch 20.000, lr 0.00100, time 872.52
loss 0.5018 0.2001 0.3016, ade1 1.5967, fde1 3.5560, ade 0.7638, fde 1.1651

@zhaone
Author

zhaone commented Mar 1, 2021

No, I have not solved this problem yet, but your speed is not as ridiculously slow as mine (mine is about 3 times slower than yours). Have you checked where the speed bottleneck is, for example IO?

@chenyuntc
Collaborator

  1. Make sure you use the preprocessed data. Otherwise, IO and preprocessing are a heavy load.
  2. Run watch nvidia-smi or watch gpustat to see the GPU utilization while the code is running. Utilization is usually above 80%.
  3. Use htop to check CPU utilization and make sure you have sufficient CPU resources (e.g. enough DataLoader workers; see the sketch after this list).
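A minimal sketch of the DataLoader settings that usually decide whether the GPUs stay fed; the dummy dataset and the exact numbers below are placeholders, not the repo's defaults:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the preprocessed dataset, just to show the knobs.
dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 2))

loader = DataLoader(
    dataset,
    batch_size=32,     # per-GPU batch size (4 GPUs x 32 = 128 total)
    shuffle=True,
    num_workers=8,     # enough CPU workers so loading overlaps with GPU compute
    pin_memory=True,   # page-locked host memory -> faster copies to the GPU
    drop_last=True,
)

With too few workers (or un-preprocessed data), the GPUs sit idle between batches, which shows up as low utilization in nvidia-smi.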

@wuhaoran111

@MasterIzumi I have the same question. And when I run free -h, I see that the memory is exhausted. Since I have 128G of memory with 4 Titan XP GPUs, I think the code may be using too much memory?
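For what it's worth, a generic way (not from this repo) to see how much of the 128G the training processes themselves hold, using psutil:

import psutil

# Sum resident memory over all python processes. If each per-GPU process
# loads its own copy of the preprocessed data into RAM, total usage scales
# with the number of GPUs (and with num_workers if the workers copy data).
total = sum(p.info["memory_info"].rss
            for p in psutil.process_iter(["name", "memory_info"])
            if p.info["name"] and "python" in p.info["name"].lower()
            and p.info["memory_info"] is not None)
print(f"total python RSS: {total / 2**30:.1f} GiB")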
