[add] ConcatDataset.evaluate feature #1932
base: master
Conversation
@17862923747 Hi, thank you very much for your help! Could you please sign the CLA so we can accept your contribution? This PR will be reviewed ASAP.
elif len(set([type(ds) for ds in self.datasets])) != 1:
    raise NotImplementedError(
        'All the datasets should have same types')
Do we need to perform this check if separate_eval=False is not supported?
Indeed, it is not needed. I copied the code from mmdet without reading it carefully 😄
dataset if `self.separate_eval=True`.
"""
new_results, res_len = self.rebuild_results(results)
Could you please explain the reason for concatenating all elements into new_results rather than simply utilizing results?
results mixes the outputs of multiple val datasets together, and every batch_size outputs form one dict in results. I need to split the results belonging to each val dataset according to the indices recorded in cumulative_sizes, and then run each dataset's evaluate separately. I felt it was more convenient to first merge everything into a single results and then split it. I'm not sure whether my understanding is correct.
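Below is a minimal sketch of the merge-then-split flow described above, assuming each entry of results is a dict with the keys seen later in this thread ('preds', 'boxes', 'image_paths'); rebuild_results is named after the PR's helper, while split_by_cumulative_sizes is a hypothetical helper for illustration, not the PR's actual code.

import numpy as np


def rebuild_results(results):
    """Merge a list of per-batch result dicts into one flat dict."""
    merged = {
        'preds': np.concatenate([r['preds'] for r in results], axis=0),
        'boxes': np.concatenate([r['boxes'] for r in results], axis=0),
        'image_paths': sum([list(r['image_paths']) for r in results], []),
    }
    return merged, len(merged['image_paths'])


def split_by_cumulative_sizes(merged, cumulative_sizes):
    """Yield one result dict per sub-dataset, in concatenation order."""
    start_idx = 0
    for end_idx in cumulative_sizes:
        yield {k: v[start_idx:end_idx] for k, v in merged.items()}
        start_idx = end_idx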
Thanks. What I am wondering is whether it is possible to use start_idx and end_idx directly inside the clip_by_index function to extract each dataset's predictions from results, and then concat them. That way the rebuild_results step could be skipped.
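A hedged sketch of this alternative, assuming the per-batch layout described above (each dict in results holds batch_size samples); the name clip_by_index comes from the PR, but this body is only an illustration of the suggestion:

import numpy as np


def clip_by_index(results, start_idx, end_idx):
    """Collect samples [start_idx, end_idx) of one sub-dataset straight
    from the raw per-batch results, skipping rebuild_results."""
    preds, boxes, image_paths = [], [], []
    offset = 0  # running global sample index over all batches
    for batch in results:
        n = len(batch['image_paths'])
        lo = max(start_idx - offset, 0)
        hi = min(end_idx - offset, n)
        if lo < hi:  # this batch overlaps the requested range
            preds.append(batch['preds'][lo:hi])
            boxes.append(batch['boxes'][lo:hi])
            image_paths.extend(batch['image_paths'][lo:hi])
        offset += n
    return {
        'preds': np.concatenate(preds, axis=0),
        'boxes': np.concatenate(boxes, axis=0),
        'image_paths': image_paths,
    }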
clip_res['preds'] = results[0]['preds'][start_idx:end_idx, :, :]
clip_res['boxes'] = results[0]['boxes'][start_idx:end_idx, :]
clip_res['image_paths'] = results[0]['image_paths'][start_idx:end_idx]
clip_res['bbox_ids'] = results[0]['bbox_ids'][start_idx:end_idx]
Bottom-up models may not generate predictions with key bbox_ids, so a key check may be needed here.
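For illustration, the quoted snippet with the suggested guard added; the fallback behavior (simply skipping the key) is an assumption, not the PR's final code:

clip_res['preds'] = results[0]['preds'][start_idx:end_idx, :, :]
clip_res['boxes'] = results[0]['boxes'][start_idx:end_idx, :]
clip_res['image_paths'] = results[0]['image_paths'][start_idx:end_idx]
if 'bbox_ids' in results[0]:  # bottom-up models may omit this key
    clip_res['bbox_ids'] = results[0]['bbox_ids'][start_idx:end_idx]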
Got it.
@17862923747 Hello, do you still have plans to update this PR? If you run into any difficulties, we can also help you finish it.
Hi @17862923747! We are grateful for your efforts in helping improve this open-source project during your personal time. Welcome to join the OpenMMLab Special Interest Group (SIG) private channel on Discord, where you can share your experiences, ideas, and build connections with like-minded peers. To join the SIG channel, simply message the moderator, OpenMMLab, on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. Look forward to seeing you there! Join us: https://discord.gg/UjgXkPWNqA Thank you again for your contribution ❤
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##           master    #1932      +/-   ##
==========================================
- Coverage   84.03%   83.80%    -0.24%
==========================================
  Files         241      241
  Lines       20869    20928       +59
  Branches     3609     3619       +10
==========================================
+ Hits        17537    17538        +1
- Misses       2406     2461       +55
- Partials      926      929        +3
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Motivation
In multi-dataset scenarios, validation raises an error because ConcatDataset.evaluate is not implemented.
Modification
I added the missing code: ConcatDataset.evaluate.
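A hedged sketch of what the added method does, pieced together from the review thread above; the rebuild_results/clip_by_index signatures and the metric-prefixing scheme are assumptions, not the PR's final code.

from torch.utils.data import ConcatDataset as _TorchConcatDataset


class ConcatDataset(_TorchConcatDataset):

    def evaluate(self, results, *args, **kwargs):
        """Split the merged results by sub-dataset and evaluate each."""
        # merge per-batch dicts into one flat dict (helper from this PR)
        new_results, res_len = self.rebuild_results(results)
        assert res_len == self.cumulative_sizes[-1], \
            'Total number of results must match the concatenated dataset'
        total_eval_results = {}
        start_idx = 0
        for i, (dataset, end_idx) in enumerate(
                zip(self.datasets, self.cumulative_sizes)):
            # slice out this sub-dataset's predictions (helper from this PR)
            clip_res = self.clip_by_index(new_results, start_idx, end_idx)
            eval_results = dataset.evaluate(clip_res, *args, **kwargs)
            # prefix metric names so per-dataset scores stay distinguishable
            total_eval_results.update(
                {f'{i}_{k}': v for k, v in eval_results.items()})
            start_idx = end_idx
        return total_eval_results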
BC-breaking (Optional)
Compatible with existing code and configs.
Use cases (Optional)
Checklist
Before PR:
After PR: