Reproducing PSGTR and PSGFormer #51
Replies: 3 comments 7 replies
-
I'm still confused about it... It seems that both only affect the evaluation process rather than the training process. I cannot see how these explain the difficulty of reproducing the reported results.
-
Another question concerns the segmentation loss described in the paper: OpenPSG/configs/_base_/models/psgtr_r50.py, lines 52 to 67 in a0e5c8f, and OpenPSG/openpsg/models/relation_heads/psgtr_head.py, lines 410 to 440 in a0e5c8f.
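For context, a loss block in an mmdet-style config typically looks roughly like the sketch below. This is only an illustrative example with assumed keys and weights, not the actual contents of psgtr_r50.py:

```python
# Illustrative only: a typical mmdet-style loss block for a DETR-like head
# with mask supervision. The actual keys and weights in
# configs/_base_/models/psgtr_r50.py may differ.
loss_cfg = dict(
    loss_cls=dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
    loss_bbox=dict(type='L1Loss', loss_weight=5.0),
    loss_iou=dict(type='GIoULoss', loss_weight=2.0),
    # Mask supervision is usually a focal term plus a dice term.
    loss_focal=dict(type='FocalLoss', use_sigmoid=True, loss_weight=1.0),
    loss_dice=dict(type='DiceLoss', use_sigmoid=True, loss_weight=1.0),
)
```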
-
If we want to reproduce the results of the paper, we need to change OpenPSG/openpsg/models/frameworks/psgtr.py, line 16 in a0e5c8f.
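Concretely, reproducing the paper setting amounts to evaluating with `eval_pan_rels` set to `False`. A minimal sketch follows; the actual line in the repo may be written differently (for example, reading the value from an environment variable):

```python
# openpsg/models/frameworks/psgtr.py (sketch only; line 16 in a0e5c8f
# may look different in the actual file)

# Paper-style evaluation: each triplet's mask is scored independently.
eval_pan_rels = False

# Competition-style evaluation: triplet masks are taken from one fused
# panoptic segmentation map, as required for the submitted panseg PNG.
# eval_pan_rels = True
```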
The difference between the evaluation in the paper and in the competition is that in the paper the mask of each triplet can be generated independently, whereas the competition requires submitting a panoptic segmentation PNG file, so the triplet masks must come from a single unified panseg map; that is why we set `eval_pan_rels=True` there. We have provided the log file in issue #33 for your reference.
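To make the competition constraint concrete, here is a minimal sketch (assumed shapes and a simple confidence-ordered pasting rule, not the repo's actual fusion code) of how independent triplet masks could be fused into one panoptic map:

```python
import numpy as np

def fuse_masks_to_panseg(masks, scores):
    """Fuse independent per-triplet masks into one panoptic-style map.

    masks:  list of HxW boolean arrays, one per predicted segment
    scores: list of confidences, same length as masks

    Returns an HxW int map in which each pixel belongs to at most one
    segment (0 = unassigned), mimicking the constraint imposed by a
    unified panseg PNG: overlapping masks must compete for pixels.
    """
    h, w = masks[0].shape
    panseg = np.zeros((h, w), dtype=np.int32)
    # Paste higher-confidence masks first; later masks can only claim
    # pixels that are still free, so overlaps are resolved exclusively.
    order = np.argsort(scores)[::-1]
    for seg_id, idx in enumerate(order, start=1):
        free = (panseg == 0) & masks[idx]
        panseg[free] = seg_id
    return panseg
```

With `eval_pan_rels=False` each entry of `masks` is evaluated as-is, so one region can support several triplets; with `eval_pan_rels=True` only the segments that survive in the fused map count.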
We used the released PSGTR code to obtain the PSGTR results reported in the paper. Comparing our logs with those of other students who could not reproduce the results, the most likely cause is that the segmentation loss of PSGTR had not converged. It may be worth adding a focal loss on the segmentation, although we did not need it to reach the results in the paper.
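A minimal sketch of such an extra segmentation focal term is shown below. In practice one would plug an existing loss class into the head config rather than hand-roll it; the function name and tensor shapes here are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def mask_focal_loss(mask_logits, mask_targets, gamma=2.0, alpha=0.25):
    """Binary focal loss on predicted mask logits (illustrative sketch).

    mask_logits:  (N, H, W) raw predictions
    mask_targets: (N, H, W) binary ground-truth masks (0/1)
    """
    targets = mask_targets.float()
    prob = mask_logits.sigmoid()
    # Per-pixel BCE, then down-weight easy pixels via the focal factor.
    ce = F.binary_cross_entropy_with_logits(mask_logits, targets,
                                            reduction='none')
    p_t = prob * targets + (1.0 - prob) * (1.0 - targets)
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```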
The R@50 and R@100 of PSGTR are much higher than the corresponding R@20 values.
We think that in the case of `eval_pan_rels=False`, PSGTR has a higher degree of freedom and is not limited by the quality of the panoptic segmentation. For example, an advantage of PSGTR is that one mask can be tagged with many categories across different triplets, which makes it easier to guess correctly. This advantage is weakened in the competition, where evaluation is based on a unified panseg map after fusion.
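As a toy illustration of that extra freedom (made-up labels, ignoring mask matching entirely):

```python
# Toy example: with independent triplet masks, the same region can be
# hedged with several class guesses across triplets; after fusion into
# one panseg map, only one label per region survives.
gt = {('person', 'riding', 'horse')}

independent_preds = [
    ('person', 'riding', 'horse'),   # guess 1 for the same subject mask
    ('woman', 'riding', 'horse'),    # guess 2 for the same subject mask
]
print(any(p in gt for p in independent_preds))   # True: one guess hits

fused_preds = [('woman', 'riding', 'horse')]     # region kept one label
print(any(p in gt for p in fused_preds))         # False: the hedge is gone
```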