I created a simple config for RTMDet and log my results to wandb. It seems that training metrics like loss get logged every step and not per epoch. Similarly, my validation results get logged per epoch, but the x-axis on wandb still shows "Step". (Note: I use the newest version of MMDetection.)
My config is below:
########## My Additions Start ##########
#Modified by ConfigWriter to enable wandb logging
experiment_name = "default"
condition_name = "default"
dataset_name = "default"
model_name = "rtmdet-tiny"
fold = 1
#Automatically determined to enable wandb logging
group_name = f"{model_name}{dataset_name}{condition_name}"
tags = [condition_name, dataset_name, model_name, f"fold_{fold}"]  # wandb tags must be strings
#Modified by ConfigWriter to enable loading checkpoints
load_from = None
########## My Additions End ##########
# Inherit and overwrite part of the config based on this config
_base_ = '../mmdetection/configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py'
data_root = 'data/' # dataset root
batch_size_per_gpu = 32
num_workers = 8
max_epochs = 2
base_lr = 0.00008
metainfo = {
'classes': ("Bus", "Car", "Lamp", "Motorcycle", "People", "Truck"),
'palette': [
(220, 20, 60), # Vibrant Red
(210, 180, 140), # Warm Beige/Tan
(192, 192, 192), # Classic Silver/Grey
(0, 191, 255), # Bright Blue
(255, 223, 0), # Soft Yellow/Light Orange
(34, 139, 34) # Forest Green
]
}
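As an aside, the six palette entries must line up one-to-one with the six class names, and both must agree with the `num_classes=6` set on the bbox head further down. A minimal standalone sanity check (values copied from the metainfo block above):

```python
# Values copied from the metainfo block above.
classes = ("Bus", "Car", "Lamp", "Motorcycle", "People", "Truck")
palette = [(220, 20, 60), (210, 180, 140), (192, 192, 192),
           (0, 191, 255), (255, 223, 0), (34, 139, 34)]

# model = dict(bbox_head=dict(num_classes=6)) must agree with len(classes).
num_classes = 6
assert len(classes) == len(palette) == num_classes
print("metainfo is consistent")  # → metainfo is consistent
```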
train_dataloader = dict(
batch_size=batch_size_per_gpu,
num_workers=num_workers,
dataset=dict(
data_root=data_root,
metainfo=metainfo,
data_prefix=dict(img='train_images/'),
ann_file='train.json'))
val_dataloader = dict(
batch_size=batch_size_per_gpu,
num_workers=num_workers,
dataset=dict(
data_root=data_root,
metainfo=metainfo,
data_prefix=dict(img='validation_images/'),
ann_file='valid.json'))
test_dataloader = val_dataloader
val_evaluator = dict(ann_file=data_root + 'valid.json')
test_evaluator = val_evaluator
model = dict(bbox_head=dict(num_classes=6))
train_pipeline = [
dict(type='LoadImageFromFile', backend_args={{_base_.backend_args}}),
dict(type='Resize', scale=(640, 640), keep_ratio=True),
dict(type='Pad', size=(640, 640), pad_val=dict(img=(114, 114, 114))),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='PackDetInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile', backend_args={{_base_.backend_args}}),
dict(type='Resize', scale=(640, 640), keep_ratio=True),
dict(type='Pad', size=(640, 640), pad_val=dict(img=(114, 114, 114))),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='PackDetInputs',
meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
'scale_factor'))
]
# learning rate
param_scheduler = [
dict(
type='LinearLR',
start_factor=1.0e-5,
by_epoch=False,
begin=0,
end=10),
dict(
type='CosineAnnealingLR',
eta_min=base_lr * 0.05,
begin=max_epochs // 2,
end=max_epochs,
T_max=max_epochs // 2,
by_epoch=True,
convert_to_iter_based=True),
]
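Note that with `max_epochs = 2` the integer arithmetic above produces a very compressed schedule: the cosine phase runs only from epoch 1 to epoch 2, and the warm-up lasts just 10 iterations (because `by_epoch=False` makes `begin`/`end` count steps, not epochs). A quick standalone check of the derived values:

```python
# Mirror the config's expressions to see the effective schedule boundaries.
max_epochs = 2
base_lr = 0.00008

cosine_begin = max_epochs // 2   # epoch the cosine phase starts
cosine_end = max_epochs          # epoch the cosine phase ends
t_max = max_epochs // 2          # half-period of the cosine
eta_min = base_lr * 0.05         # floor learning rate

print(cosine_begin, cosine_end, t_max)  # → 1 2 1
```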
# optimizer
optim_wrapper = dict(
_delete_=True,
type='OptimWrapper',
optimizer=dict(type='AdamW', lr=base_lr, weight_decay=0.05),
paramwise_cfg=dict(
norm_decay_mult=0, bias_decay_mult=0, bypass_duplicate=True))
default_hooks = dict(
checkpoint=dict(
interval=1,
max_keep_ckpts=1,
save_best='auto'
),
logger=dict(type='LoggerHook', interval=1))
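On the per-step loss logging: `LoggerHook`'s `interval` counts training iterations, not epochs, so `interval=1` above emits a record every single step. Raising the interval thins out the training logs; the value below is illustrative, not from the original config:

```python
# interval counts iterations: interval=50 logs the running loss every
# 50 training steps instead of on every step.
default_hooks = dict(
    logger=dict(type='LoggerHook', interval=50))
```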
custom_hooks = [
]
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=max_epochs, val_interval=1)
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='WandbVisBackend',
init_kwargs={
'project': experiment_name,
"entity": "mm-gerster",
"name": f"fold{fold}",
"group": group_name,
"tags": [condition_name, dataset_name, model_name, f"fold_{fold}"],
"reinit": True,
})
]
visualizer = dict(
type='DetLocalVisualizer',
vis_backends=vis_backends,
name='visualizer')
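As for the x-axis showing "Step" on per-epoch validation metrics: wandb plots everything against its internal step counter unless told otherwise. One workaround, sketched with the plain `wandb` API and independent of MMDetection, is to declare an `epoch` field and bind validation metrics to it via `define_metric`. The metric name and the `"coco/*"` glob below are assumptions about how the metrics appear in your run; adjust them to match.

```python
import wandb

# Offline mode so this sketch runs without a wandb account.
run = wandb.init(project="rtmdet-demo", mode="offline")

# Declare "epoch" as an x-axis and bind validation metrics to it.
# "coco/*" is an assumed glob; change it to your run's metric names.
wandb.define_metric("epoch")
wandb.define_metric("coco/*", step_metric="epoch")

# Metrics logged together with an "epoch" value now plot against it.
wandb.log({"coco/bbox_mAP": 0.42, "epoch": 1})
run.finish()
```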