Hi @matt3o, this is great! Congratulations on this work! I think the community will benefit a lot from this.
This is strange. I've recently tested it and it is working as intended. Can you please be more specific about what's not working? With regards to your work, I'd suggest we include this model in the radiology app of MONAI Label - would you like to create a PR for this? You could include all custom transforms here temporarily until we make sure they all work. BTW, we don't need to replace DeepEdit; they both can be there. I quickly checked the monailabel code and I didn't see a trainer file. Is this only for inference? Have you tested on CT images? Please let us know. Great work!!
Hey guys!
We reworked the DeepEdit code as part of my Master's thesis, and I want to ask whether we should reintegrate that code into MONAI. Sadly, the DeepEdit code in the MONAI tutorials was completely broken and did not work, at least the last time I used it - so there is even more incentive to provide a fresh start for interactive segmentation.
This is the corresponding paper, which I will also present at the ISBI conference in May 2024: https://arxiv.org/pdf/2311.14482.pdf
The code can be found here: https://github.com/matt3o/AutoPET2-Submission
A few advantages: a fully GPU-based distance transform (now also in MONAI >= 1.3.0 itself), a sliding-window approach for lower resource usage and a higher Dice score, fixes for a lot of GPU memory leaks along the way (see my issues on that topic), and generally quite a lot of optimizations to make the code real-time fast. As a result, one iteration of SW-FastEdit on a full AutoPET volume takes 5 seconds on an A100 with 80 GB of GPU memory.
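For anyone who wants to get a feel for the sliding-window part without cloning the repository, here is a minimal sketch using MONAI's standard SlidingWindowInferer. The network, patch size, overlap, and channel layout are placeholder assumptions for illustration, not the configuration from the paper; the GPU-backed distance transform is the distance_transform_edt utility mentioned above, and its exact import path may differ between MONAI versions.

```python
import torch
from monai.inferers import SlidingWindowInferer
from monai.networks.nets import BasicUNet
from monai.transforms.utils import distance_transform_edt  # import path may vary per MONAI version

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder network: one image channel plus two guidance channels -> 2 classes.
# NOT the architecture or patch size from the paper, just an illustration.
net = BasicUNet(spatial_dims=3, in_channels=3, out_channels=2).to(device).eval()

# Sliding-window inference bounds peak GPU memory by the patch size
# instead of the full volume.
inferer = SlidingWindowInferer(roi_size=(128, 128, 128), sw_batch_size=1, overlap=0.25)

volume = torch.rand(1, 3, 192, 192, 192, device=device)  # dummy image + guidance channels
with torch.no_grad():
    logits = inferer(volume, net)

# GPU-based Euclidean distance transform: stays on the GPU when the input is a CUDA
# tensor (needs cuCIM for the GPU path, falls back to SciPy on CPU).
foreground = (logits.argmax(dim=1, keepdim=True) == 1).float()
edt = distance_transform_edt(foreground[0])  # channel-first input (C, H, W, D)
```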
Also, as the paper shows, we added several new click generation strategies and stopping criteria.
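As a rough, generic illustration of what a click-generation strategy and a stopping criterion can look like (a common baseline pattern, not necessarily one of the strategies evaluated in the paper):

```python
import numpy as np
from scipy import ndimage


def sample_correction_click(pred: np.ndarray, gt: np.ndarray):
    """Place the next click at the interior-most voxel of the largest error region.

    Returns None when no misclassified voxels remain, which doubles as a simple
    stopping criterion. Generic baseline for illustration only.
    """
    error = pred != gt
    labels, num = ndimage.label(error)
    if num == 0:
        return None  # stop: prediction already matches the ground truth
    sizes = ndimage.sum(error, labels, index=range(1, num + 1))
    largest = int(np.argmax(sizes)) + 1
    # Click deep inside the largest error component, far away from its border.
    depth = ndimage.distance_transform_edt(labels == largest)
    return np.unravel_index(int(np.argmax(depth)), pred.shape)
```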
(Bonus: with the interactive part disabled, this code performed well enough to earn us third place in this year's AutoPET II challenge on robustness across domains.)
Long story short, I would love to integrate the code into MONAI for others to use - if that is desired.
There would be more or less three steps, as I see it right now; however, different approaches may of course be chosen as well. One of these steps would concern the monailabel folder, which has been used for our user study.
Currently the code has one major limitation: it is fully NVIDIA-based, since I implemented a lot of the transforms in CuPy. I could look into porting it to CPU-based setups as well, if that is necessary for the integration into MONAI.
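Regarding the CuPy limitation, here is a minimal sketch of the kind of array-module dispatch a CPU port could use, so the same transform body runs on NumPy or CuPy arrays. The function names are hypothetical and not taken from the current repository.

```python
import numpy as np

try:
    import cupy as cp  # optional: only available on NVIDIA/CUDA machines
except ImportError:
    cp = None


def get_array_module(arr):
    """Return CuPy for CuPy arrays and NumPy otherwise, so transforms work on both backends."""
    if cp is not None and isinstance(arr, cp.ndarray):
        return cp
    return np


def normalize_guidance(guidance):
    """Hypothetical transform body written once against the NumPy API, usable on CPU and GPU."""
    xp = get_array_module(guidance)
    return guidance / xp.maximum(guidance.max(), 1e-8)
```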
What are your thoughts on that?
Cheers,
Matthias (and @Zrrr1997)
PS: Especially @wyli and @diazandr3s, since I was in contact with you on this topic previously.