
output very weird #82

Open
Leo-o333 opened this issue Oct 26, 2021 · 5 comments

Comments

@Leo-o333
Hi everyone,
I set up the u-net on aws ec2 and its running now. However, my result looked very off:
image
and on the result the output is not the name that I annotated my images. Does any of you guys know what happened and how to resolve?
Thank you kindly!

@ThorstenFalk
Collaborator

Modern art... puh, good question. When zooming in a little, what are these structures? Are they boxes around each pixel? Did you train the model yourself? What was the input?

@Leo-o333
Author

Thank you for your reply, and sorry for the late response! This is an optic nerve section with a nuclear marker. I checked my output and there weren't boxes around each pixel. I trained the model based on the pre-trained detection model provided on the website. The crosses are targeting the background of the tissue instead of the actual signal, so I think I might have labelled my images incorrectly. Do you have more detailed instructions on how to annotate images in ImageJ, including dos and don'ts? That would be extremely helpful!
Thank you so much for everything! If this works it will save us tons of time!

@Leo-o333
Author

@ThorstenFalk Sorry, I forgot to mention you in my reply!

@ThorstenFalk
Collaborator

Ah, it's a detection output, so those are probably crosses on every other background pixel. That happens if the network, for whatever reason, predicts a checkerboard pattern. Nevertheless, the output you get is very strange and looks more like a problem with the input data. What's the element size of your images? Is it properly set in ImageJ? Did you try segmentation instead of detection, and what was the output? You can also enable intermediate network outputs (i.e. the heatmaps) for further debugging. If you see massive checkerboarding there, it indicates that your model is effectively untrained or very badly trained.
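
For reference, a minimal sketch of how you could check outside of ImageJ whether your TIFFs actually carry pixel-size metadata (the "element size" the plugin relies on). This is just an illustration in Python using the tifffile package, not part of the plugin, and the file name is a placeholder:

```python
# Sketch: report the pixel size stored in a TIFF's resolution tags.
# Assumes `tifffile` is installed; "my_stack.tif" is a hypothetical file.
import tifffile

with tifffile.TiffFile("my_stack.tif") as tif:
    tags = tif.pages[0].tags
    x_res = tags.get("XResolution")  # stored as (numerator, denominator)
    y_res = tags.get("YResolution")
    unit = tags.get("ResolutionUnit")
    if x_res is None or y_res is None:
        # Without these tags ImageJ falls back to an element size of 1x1,
        # which can badly mislead a model trained on calibrated data.
        print("No resolution tags found - set Image > Properties in ImageJ.")
    else:
        num, den = x_res.value
        print(f"X element size: {den / num} units per pixel")
        num, den = y_res.value
        print(f"Y element size: {den / num} units per pixel")
        print(f"ResolutionUnit tag: {unit.value if unit else 'unset'}")
```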

Actually, there is no specifically pre-trained detection model. However, when you use a segmentation model in a detection job, it should report the centers of gravity of the segmented regions, which is pretty much what you want. Without finetuning, though, the results will probably be poor.
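
To illustrate what "centers of gravity of the segmented regions" means, here is a minimal sketch in Python using scipy; it is not the plugin's actual code, and the output file name and threshold are assumptions:

```python
# Sketch: turn a binary segmentation mask into one detection point per
# region by taking each connected component's center of mass.
from scipy import ndimage
import tifffile

mask = tifffile.imread("segmentation_output.tif") > 0  # hypothetical output file
labeled, n_regions = ndimage.label(mask)               # connected components
centers = ndimage.center_of_mass(mask, labeled, range(1, n_regions + 1))
for y, x in centers:
    print(f"detection at x={x:.1f}, y={y:.1f}")        # one cross per region
```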

@Leo-o333
Author

Hi Thorsten! Thank you so much for your suggestions! I will try segmentation instead and let you know once I have tried it.
Merry Christmas! @ThorstenFalk
