
3D U-net segmentation #42

Open
weng-joy opened this issue Jun 14, 2019 · 5 comments
Labels: active (This ticket has pending action), help wanted (Extra attention is needed)

weng-joy commented Jun 14, 2019

Hi,
Thanks for the cool tool! For my dataset I got very nice 2D segmentation with the U-Net plugin, and now I am using the plugin for 3D segmentation. But it always fails.

Q1: No matter how I set the parameters and the annotations, I never get IoU and F1 curves.

[screenshot: Untitled-github0 — training curves]

I also fine-tuned your example pre-trained 3D model "3d_cell_net_v1_models". The result still looked strange.
[screenshot: Untitled-github3 — segmentation result]

My 3D data is anisotropic. The input patch size is 802 × 802 × 100 (1 channel); the x/y resolution is 1.02 μm/px and the z resolution is 5 μm/px.
As described in the supplementary information of the Nature Methods paper, I annotated selected slices in the stack. In one test I used ROI sets on the 1st, 25th, 49th, 75th and 99th slices and marked the others as ignore; in another I annotated only the top, central and bottom slices with the rest set to ignore. Whatever annotation scheme I tried, the results were as unsatisfying as shown above. The ROI names and class names also follow the paper.
In "Create a new model" and "Finetune", I used parameters similar to those that performed well in my 2D segmentation. Just for a test, I used 2 stacks for training and 1 stack for validation.

How should I correctly build and train my 3D model?

Q2: How is "Extract Mask Annotations" used? Could you give me a clue?

Thanks a lot in advance!

ThorstenFalk (Collaborator) commented:

A sample image (at best an orthoview) might help. From the curves I can only say that the training loss oscillates a lot. That indicates quite diverse and complicated training data. The anisotropy factor of 5 is quite high, but this is not necessarily a problem, it only means that the model might not be ideal for these data. Usually I perform only 2D convolutions/pooling until resolution is isotropic and then continue in 3D. For your data this means that you would perform 2D operations at two resolution levels, our models usually only do this for the coarsest resolution.
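This pooling scheme can be sketched as follows (a minimal illustration; the function and the 1.5× isotropy threshold are assumptions for the sketch, not the plugin's actual logic):

```python
# Sketch: choose per-level pooling factors for anisotropic data.
# Pool only in x/y while z spacing is still coarser than in-plane
# spacing; switch to 3D pooling once voxels are roughly isotropic.

def pooling_schedule(vx_xy, vx_z, n_levels):
    """Return a list of (x, y, z) pooling factors per resolution level."""
    schedule = []
    for _ in range(n_levels):
        if vx_z > 1.5 * vx_xy:       # still anisotropic: 2D pooling only
            schedule.append((2, 2, 1))
            vx_xy *= 2
        else:                         # near-isotropic: 3D pooling
            schedule.append((2, 2, 2))
            vx_xy *= 2
            vx_z *= 2
    return schedule

# For 1.02 um/px in-plane and 5 um/px in z (anisotropy factor ~4.9):
print(pooling_schedule(1.02, 5.0, 4))
# → [(2, 2, 1), (2, 2, 1), (2, 2, 2), (2, 2, 2)]
```

Consistent with the comment above, this yields 2D pooling at the first two resolution levels before the network continues in 3D.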

Only IoU is of relevance in this mode of training. The F1 measures rely on fully annotated objects, so I would ignore them. And an IoU of 0.6 is already better than nothing. What do the segmentations look like?


weng-joy commented Jun 14, 2019

Thanks for the quick reply.

My data looks like this:
[screenshot: Untitled-4 — raw data]
The glomeruli are my target objects.
[screenshot: Untitled-5 — glomeruli (target objects)]

The segmentation output from your pre-trained model (the one with an IoU of 0.6) was still not good.
You are right that "the model might not be ideal for these data".
Therefore I wanted to build my own model via "U-Net → Utilities → Create a new model". Then the problems came: I never got an IoU, and the segmentation was terrible. That's why I need help. Based on your experience with 3D U-Net, how should I solve these problems?

By the way, to handle the anisotropy I also tried first converting the stack to isotropic resolution with CSBDeep (CARE, http://csbdeep.bioimagecomputing.com/) and then applying U-Net. No improvement at all.
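For reference, a plain interpolation-based alternative for making the stack isotropic can be sketched with scipy (this is not CARE; the (z, y, x) axis order and the placeholder array are assumptions):

```python
# Sketch: resample an anisotropic stack to isotropic voxels by linear
# interpolation along z. Assumes the stack is laid out as (z, y, x).
import numpy as np
from scipy.ndimage import zoom

vx_xy, vx_z = 1.02, 5.0                # micrometres per pixel
stack = np.random.rand(20, 64, 64)     # placeholder for the real data

# Stretch z by the anisotropy factor (~4.9) so all axes match vx_xy.
iso = zoom(stack, (vx_z / vx_xy, 1, 1), order=1)
print(iso.shape)  # z dimension grows from 20 to ~98 slices
```

Unlike CARE, plain interpolation cannot recover detail between slices; it only makes the sampling grid isotropic.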

Thanks once more!!

ThorstenFalk (Collaborator) commented:

When training 3D segmentation models from scratch, my experience is that you can expect first non-empty segmentations after approximately 10,000 iterations, then they rapidly get better until you get a quite stable model after around 30,000 iterations. It of course depends on data and annotations, but your problem looks solvable and your annotations clearly indicate the expected outcome. Your glomeruli are embedded in tissue. This is something the model needs to learn because it was only trained for isolated cells on dark background.

What's the main problem with the outputs? Insufficient separation, inaccurate boundaries, both?

ThorstenFalk self-assigned this Jun 18, 2019
ThorstenFalk added the active (This ticket has pending action) and help wanted (Extra attention is needed) labels Jun 18, 2019
weng-joy (Author) commented:

Thanks for the nice answer!
The problems with the output are both: insufficient separation and inaccurate boundaries.

To distinguish the embedded glomeruli when creating my own new 2D U-Net model, I increased the background/foreground ratio and adjusted other parameters, training for 20,000 iterations, and got very nice results. Originally I wanted to build my own 3D U-Net model the same way, but since that always failed, I decided to fine-tune your example pre-trained model instead.
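For context, the effect of increasing such a background/foreground weighting can be sketched as a class-weighted cross-entropy (a generic illustration with made-up weight values, not the plugin's actual loss):

```python
import numpy as np

def weighted_cross_entropy(prob_fg, labels, w_fg=5.0, w_bg=1.0):
    """Per-pixel binary cross-entropy with class-specific weights.

    prob_fg: predicted foreground probabilities in (0, 1)
    labels:  1 for foreground, 0 for background
    Raising w_fg relative to w_bg makes missed foreground pixels
    (e.g. embedded glomeruli) more costly during training.
    """
    eps = 1e-7
    p = np.clip(prob_fg, eps, 1 - eps)
    loss = -(w_fg * labels * np.log(p) + w_bg * (1 - labels) * np.log(1 - p))
    return loss.mean()

probs = np.array([0.9, 0.2, 0.7, 0.1])
labels = np.array([1, 1, 0, 0])
print(weighted_cross_entropy(probs, labels))
```

With these weights, the mispredicted foreground pixel (probability 0.2 on a label-1 pixel) dominates the loss, which is the intended effect of the re-weighting.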

Actually, I have already tested this many times with different input stacks. Usually after 3,000 iterations, even though the F1 and IoU curves fluctuate, they tend toward a constant level; there is no increasing trend.

I also wonder how long the training took in the paper's experiments with input patch size 236 × 236 × 100, for 10,000 iterations. In my case, 3D U-Net training on an 802 × 802 × 20 stack for 5,000 iterations takes 7 days on an Nvidia GeForce RTX 2080 Ti GPU.

Thanks for your attention, and I am looking forward to your help!

ThorstenFalk (Collaborator) commented:

Hmm, 5,000 iterations in 7 days is far too slow. How often do you evaluate on the validation set? For 3D models I evaluate only every 1,000 iterations, because the evaluation is a serious overhead. Without validation you should reach at least 20k iterations a day, so with validation every 500 iterations maybe 18k a day.
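That back-of-the-envelope throughput can be sketched as follows (the per-iteration and per-validation timings are illustrative assumptions, not measurements):

```python
# Sketch: estimate training iterations per day given a validation interval.
def iterations_per_day(sec_per_iter, sec_per_eval, eval_interval):
    """Iterations completed in 24 h when a full validation pass taking
    sec_per_eval seconds runs every eval_interval iterations."""
    sec_per_iter_eff = sec_per_iter + sec_per_eval / eval_interval
    return int(86400 / sec_per_iter_eff)

# Assumed timings: ~4 s per training iteration, ~10 min per validation pass.
print(iterations_per_day(4.0, 600.0, 1000))  # sparse validation: small overhead
print(iterations_per_day(4.0, 600.0, 10))    # validating every 10 iters is costly
```

The point of the sketch: amortized over 1,000 iterations a 10-minute validation pass barely dents throughput, while validating every few iterations can easily explain a 10× slowdown.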
