Added param loss_ota to hyp.yaml, to allow disabling OTA for faster training
AlexeyAB committed Aug 9, 2022
1 parent 469a4d0 commit 711a16b
Showing 5 changed files with 12 additions and 5 deletions.
3 changes: 2 additions & 1 deletion data/hyp.scratch.custom.yaml
@@ -27,4 +27,5 @@ fliplr: 0.5 # image flip left-right (probability)
 mosaic: 1.0 # image mosaic (probability)
 mixup: 0.0 # image mixup (probability)
 copy_paste: 0.0 # image copy paste (probability)
-paste_in: 0.0 # image copy paste (probability)
+paste_in: 0.0 # image copy paste (probability), use 0 for faster training
+loss_ota: 1 # use ComputeLossOTA, use 0 for faster training
3 changes: 2 additions & 1 deletion data/hyp.scratch.p5.yaml
@@ -27,4 +27,5 @@ fliplr: 0.5 # image flip left-right (probability)
 mosaic: 1.0 # image mosaic (probability)
 mixup: 0.15 # image mixup (probability)
 copy_paste: 0.0 # image copy paste (probability)
-paste_in: 0.15 # image copy paste (probability)
+paste_in: 0.15 # image copy paste (probability), use 0 for faster training
+loss_ota: 1 # use ComputeLossOTA, use 0 for faster training
3 changes: 2 additions & 1 deletion data/hyp.scratch.p6.yaml
@@ -27,4 +27,5 @@ fliplr: 0.5 # image flip left-right (probability)
 mosaic: 1.0 # image mosaic (probability)
 mixup: 0.15 # image mixup (probability)
 copy_paste: 0.0 # image copy paste (probability)
-paste_in: 0.15 # image copy paste (probability)
+paste_in: 0.15 # image copy paste (probability), use 0 for faster training
+loss_ota: 1 # use ComputeLossOTA, use 0 for faster training
3 changes: 2 additions & 1 deletion data/hyp.scratch.tiny.yaml
@@ -27,4 +27,5 @@ fliplr: 0.5 # image flip left-right (probability)
 mosaic: 1.0 # image mosaic (probability)
 mixup: 0.05 # image mixup (probability)
 copy_paste: 0.0 # image copy paste (probability)
-paste_in: 0.05 # image copy paste (probability)
+paste_in: 0.05 # image copy paste (probability), use 0 for faster training
+loss_ota: 1 # use ComputeLossOTA, use 0 for faster training
5 changes: 4 additions & 1 deletion train.py
@@ -359,7 +359,10 @@ def train(hyp, opt, device, tb_writer=None):
             # Forward
             with amp.autocast(enabled=cuda):
                 pred = model(imgs)  # forward
-                loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs)  # loss scaled by batch_size
+                if hyp['loss_ota'] == 1:
+                    loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs)  # loss scaled by batch_size
+                else:
+                    loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
                 if rank != -1:
                     loss *= opt.world_size  # gradient averaged between devices in DDP mode
                 if opt.quad:
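One practical note on the train.py branch above: a hyp file written before this commit has no loss_ota key, so hyp['loss_ota'] would raise a KeyError at the first training step. A minimal backward-compatible sketch (not the committed code) that defaults to OTA when the key is missing, reusing the hyp, pred, targets, imgs, device, compute_loss and compute_loss_ota objects already in scope at this point in train.py:

# Sketch only (not the committed code): read the flag with a default so hyp
# files that lack loss_ota keep the previous OTA behaviour instead of failing.
# hyp, pred, targets, imgs, device, compute_loss and compute_loss_ota are the
# names train.py already has in scope inside the training loop.
use_ota = hyp.get('loss_ota', 1) == 1  # missing key -> keep OTA (old default)
if use_ota:
    loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs)  # OTA assignment, slower per step
else:
    loss, loss_items = compute_loss(pred, targets.to(device))  # plain loss, faster per step

Either way, setting loss_ota: 0 in the chosen hyp.scratch.*.yaml is what switches training to the cheaper ComputeLoss path.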
