-
Hi, as a first debugging step I would downsample your images and lesions to 64x64 (you can do this by adding a resampling transform to your list; make sure you use nearest/binary interpolation for the labels so they stay binary) and see if that works better. If you find that it does, I'd suggest trying a DiffusionModelUNet with more levels to see if that makes a difference.
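To make that concrete, here is a minimal sketch of such a preprocessing list, assuming MONAI's dictionary transforms and the 'PT'/'GT' keys from your post (illustrative only, not your actual pipeline):

```python
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, Resized, ScaleIntensityd

transform = Compose(
    [
        LoadImaged(keys=["PT", "GT"]),
        EnsureChannelFirstd(keys=["PT", "GT"]),
        # Resample the 200x200 slices down to 64x64: bilinear for the image,
        # nearest for the mask so the labels stay binary.
        Resized(keys=["PT", "GT"], spatial_size=(64, 64), mode=("bilinear", "nearest")),
        ScaleIntensityd(keys=["PT"]),
    ]
)
```

And if that helps, a DiffusionModelUNet with one extra level might look like the sketch below. The argument names follow the GenerativeModels package, so double-check them against the version you have installed:

```python
from generative.networks.nets import DiffusionModelUNet

model = DiffusionModelUNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=1,
    num_res_blocks=1,
    num_channels=(64, 128, 256, 256),            # four levels instead of three
    attention_levels=(False, False, True, True),
    num_head_channels=64,
    with_conditioning=True,   # keep whatever conditioning setup the tutorial's
    cross_attention_dim=1,    # model uses for the slice-level labels
)
```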
-
Hi community,
I am new to MONAI GenerativeModels (and diffusion models in general) and recently started using this tutorial on anomaly detection in 2D slices via implicit guidance (via slice-level labels). I adapted the code for my use case, i.e., PET images from lung cancer patients. I pre-saved the axial PET slices containing lesions and the slices not containing lesions as 2D NIfTI files and load the data from those files. Since, in my dataset, the number of non-lesion slices far exceeds the number of lesion slices, I took all the lesion slices from an image and randomly sampled an equal number of non-lesion slices, just to create a balanced dataset for my initial experiments (a minimal sketch of this sampling follows below).

My 'transform' variable (which I use for both training and validation images) uses 'PT' as the key for the PET slice and 'GT' as the key for the ground-truth lesion mask slice. My input image slices are much bigger (200x200) than the size used in the tutorial above (64x64), although I use the exact same model as given in the tutorial. The lesions on my slices are also comparatively smaller than the lesions in the MRI dataset used in the tutorial.
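For reference, the balanced sampling described above boils down to something like this (the directory layout and file names are hypothetical):

```python
import random
from glob import glob

# Hypothetical paths to the pre-saved 2D NIfTI slices; adjust to your layout.
lesion_slices = sorted(glob("slices/lesion/*.nii.gz"))
non_lesion_slices = sorted(glob("slices/non_lesion/*.nii.gz"))

random.seed(0)  # make the subsample reproducible
balanced = lesion_slices + random.sample(non_lesion_slices, k=len(lesion_slices))
random.shuffle(balanced)
```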
I tried running the training with all the same hyperparameters as in the tutorial and then, for every lesion-containing slice in my validation set, plotted the original image, the latent image, the reconstructed image (the healthy counterpart of the original image generated by the model), the anomaly map (original image minus reconstructed image), and the ground-truth lesion mask for the slice. For most of the slices (see images below), the original and reconstructed images look almost identical, so the anomaly map has almost no bright spot at the location where the lesion is supposed to be.
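For completeness, the anomaly map here is just the voxel-wise difference between the two images; a minimal sketch with stand-in tensors:

```python
import torch

# Stand-in tensors; in practice these are the input PET slice and the
# model-generated healthy reconstruction, shape (1, H, W).
original = torch.rand(1, 64, 64)
reconstruction = torch.rand(1, 64, 64)

anomaly_map = (original - reconstruction).abs()

# Optional min-max normalisation so faint differences show up when plotted.
anomaly_map = (anomaly_map - anomaly_map.min()) / (anomaly_map.max() - anomaly_map.min() + 1e-8)
```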
Can someone help me figure out what might be going wrong? Should my images be smaller and cropped around the lesions? Should I downsample them so that I work with something smaller like 64x64 instead of 200x200? Moreover, which hyperparameters is this training most sensitive to (meaning, which ones should I start playing with)?
Any feedback is highly appreciated. Thanks in advance.
@SANCHES-Pedro