How can I train a gesture recognition model with a dynamic dataset? #213
Comments
Hello, have you trained gesture recognition with stgcn++? Can you share it?
I just trained it in a simple way; the config file content is below:

```python
graph = 'handmp'
modality = 'j'

model = dict(
    type='RecognizerGCN',
    backbone=dict(
        type='STGCN',
        in_channels=2,
        gcn_adaptive='init',
        gcn_with_res=True,
        tcn_type='mstcn',
        num_stages=6,
        down_stages=[6],
        inflate_stages=[6],
        graph_cfg=dict(layout=graph, mode='spatial')),
    cls_head=dict(type='GCNHead', num_classes=2, in_channels=128))

dataset_type = 'GestureDataset'
ann_file = 'data/labdata/smoke_output_final.pkl'
load_from = 'demo/hagrid.pth'

train_pipeline = [
    dict(type='PreNormalize2D', threshold=0, mode='auto'),
    dict(type='GenSkeFeat', dataset=graph, feats=[modality]),
    dict(type='UniformSample', clip_len=10, num_clips=1),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=1),
    dict(type='Collect', keys=['keypoint', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['keypoint'])
]
val_pipeline = [
    dict(type='PreNormalize2D', threshold=0, mode='auto'),
    dict(type='GenSkeFeat', dataset=graph, feats=[modality]),
    dict(type='UniformSample', clip_len=10, num_clips=1),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=1),
    dict(type='Collect', keys=['keypoint', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['keypoint'])
]
test_pipeline = [
    dict(type='PreNormalize2D', threshold=0, mode='auto'),
    dict(type='GenSkeFeat', dataset=graph, feats=[modality]),
    dict(type='UniformSample', clip_len=10, num_clips=1),
    dict(type='PoseDecode'),
    dict(type='FormatGCNInput', num_person=1),
    dict(type='Collect', keys=['keypoint', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['keypoint'])
]

data = dict(
    videos_per_gpu=16,
    workers_per_gpu=2,
    test_dataloader=dict(videos_per_gpu=1),
    train=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(type=dataset_type, ann_file=ann_file, pipeline=train_pipeline, split='labdata_train')),
    val=dict(type=dataset_type, ann_file=ann_file, pipeline=val_pipeline, split='labdata_val'),
    test=dict(type=dataset_type, ann_file=ann_file, pipeline=test_pipeline, split='labdata_val'))

# optimizer
optimizer = dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0005, nesterov=True)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(policy='CosineAnnealing', min_lr=0, by_epoch=False)
total_epochs = 25
checkpoint_config = dict(interval=1)
evaluation = dict(interval=1, metrics=['top_k_accuracy'])
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])

# runtime settings
log_level = 'INFO'
work_dir = './work_dirs/stgcn++/stgcn++_labdata_hrnet/hand_j'
```

Then just use the bash training script to train.
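For reference, the `ann_file` consumed by `GestureDataset` follows pyskl's pickle annotation layout: a dict with a `split` mapping and a list of per-sample `annotations`. Below is a minimal sketch of building such a file; the split names match the config's `labdata_train`/`labdata_val`, while the clip names, image shape, and random keypoints are placeholder values (21 joints assumed for the MediaPipe-style `handmp` hand layout).

```python
# Sketch of a pyskl-style annotation pickle for a 2-class gesture dataset.
# Field names follow pyskl's 2D skeleton convention; sample values are placeholders.
import pickle
import numpy as np

def make_annotation(frame_dir, label, num_frames, num_joints=21):
    """One sample: 2D hand keypoints for a single person over num_frames."""
    return dict(
        frame_dir=frame_dir,                 # unique sample identifier
        label=label,                         # integer class id (0 or 1 here)
        img_shape=(480, 640),                # (height, width) of the source video
        original_shape=(480, 640),
        total_frames=num_frames,
        # keypoint: (num_person, num_frames, num_joints, 2) float32 (x, y)
        keypoint=np.random.rand(1, num_frames, num_joints, 2).astype(np.float32),
        # keypoint_score: (num_person, num_frames, num_joints) confidences
        keypoint_score=np.ones((1, num_frames, num_joints), dtype=np.float32),
    )

annotations = [make_annotation(f'clip_{i:04d}', i % 2, 32) for i in range(4)]
data = dict(
    split=dict(
        labdata_train=[a['frame_dir'] for a in annotations[:3]],
        labdata_val=[a['frame_dir'] for a in annotations[3:]],
    ),
    annotations=annotations,
)
with open('smoke_output_final.pkl', 'wb') as f:
    pickle.dump(data, f)
```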
Hello, thank you for your reply. There is one more question: I have learned that the STGCN network classifies behaviors across consecutive frames. Did you add a tracking network? I didn't see one in the configuration file.
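(Context for this question: the config above has no tracking module; the `UniformSample` pipeline step is what turns a variable-length sequence of consecutive frames into a fixed-length clip for the GCN. A simplified sketch of that sampling idea, splitting the sequence into `clip_len` segments and drawing one frame index from each; this is an illustration, not pyskl's exact code.)

```python
import numpy as np

def uniform_sample(total_frames, clip_len, seed=None):
    """Pick clip_len frame indices spread uniformly across the sequence.

    Splits [0, total_frames) into clip_len equal segments and draws one
    random index from each, so the clip covers the whole gesture.
    """
    rng = np.random.default_rng(seed)
    edges = np.linspace(0, total_frames, clip_len + 1)
    return np.array([
        rng.integers(int(edges[i]), max(int(edges[i]) + 1, int(edges[i + 1])))
        for i in range(clip_len)
    ])

# Example: reduce a 95-frame sequence to a 10-frame clip.
indices = uniform_sample(total_frames=95, clip_len=10, seed=0)
```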
Sorry, I didn't. Maybe you need to try it on your own.
OK, thank you!
Hello, could you share the 40 categories of gesture recognition and the corresponding gesture images? Thank you.
Sorry, I use my own dataset, which has only 2 categories, so I can't provide the images.
OK, thank you.
I want to use stgcn++, so which config file can I use? Or do I need to create my own config file?
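(For anyone with the same question: pyskl ships stock STGCN++ configs under `configs/stgcn++/`, and a custom dataset is usually handled by copying one of them and editing `dataset_type`, `ann_file`, and the split names, as in the config shared above. Training is launched with the repository's `dist_train.sh`; the config path below is one of the stock examples, and the GPU count should be adjusted to your machine.)

```
# Launch training with an existing STGCN++ config (1 GPU);
# copy and edit the config file for a custom dataset.
bash tools/dist_train.sh configs/stgcn++/stgcn++_ntu60_xsub_hrnet/j.py 1 --validate --test-last --test-best
```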