Diffusion with spade for image to image translation #8084
Unanswered · OdedRotem314 asked this question in Q&A
Hi @OdedRotem314, looks like you are interested in this tutorial: https://github.com/Project-MONAI/tutorials/blob/main/generation/spade_ldm/spade_ldm_brats.ipynb. Hope it helps, thanks.
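For readers unfamiliar with the mechanism the tutorial relies on: SPADE replaces a plain normalization layer with one whose per-voxel scale and shift are predicted from the conditioning image, which is how imgs1 reaches the middle layers of the U-Net. A minimal pure-PyTorch sketch of such a block for volumetric data (the class name `SPADE3D` and layer sizes are illustrative, not MONAI's actual implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE3D(nn.Module):
    """Sketch of a SPADE normalization block for 3D feature maps.

    The conditioning volume `cond` is resized to the feature map's spatial
    size and passed through a small conv net that predicts spatially varying
    scale (gamma) and shift (beta) maps applied after normalization.
    """

    def __init__(self, norm_channels: int, cond_channels: int, hidden: int = 32):
        super().__init__()
        # Parameter-free normalization; SPADE supplies the affine part.
        self.norm = nn.InstanceNorm3d(norm_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv3d(cond_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.gamma = nn.Conv3d(hidden, norm_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv3d(hidden, norm_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Resize the conditioning volume so the block can sit at any U-Net level.
        cond = F.interpolate(cond, size=x.shape[2:], mode="nearest")
        h = self.shared(cond)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```

Because the conditioning map is resampled inside the block, the same input volume can modulate every resolution level of the network, which is the behaviour SPADEDiffusionModelUNet exposes through its `seg` argument.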
-
Hi all,
I want to train a diffusion model (preferably not an LDM) to translate between two image modalities (imgs1 + noise ==> imgs2).
I want to use SPADE so that imgs1 is also injected into the middle layers of the network.
The images are volumetric, by the way.
I trained successfully with DiffusionModelUNet, but I want to see whether SPADEDiffusionModelUNet can improve the details.
Could anyone write down the main training code? The code below did not seem to train anything.
# assumes: import torch.nn.functional as F
noise = torch.randn_like(images2).to(device)
timesteps = torch.randint(0, 1000, (len(images1),), device=device).long()
noisy_imgs2 = scheduler.add_noise(original_samples=images2, noise=noise, timesteps=timesteps)
combined = torch.cat((noisy_imgs2, images1), dim=1)
# Pass the conditioning volume directly via `seg`; rebinding the model with
# functools.partial shadowed the original module and is not needed.
noise_prediction = model(x=combined, timesteps=timesteps, seg=images1)
loss = F.mse_loss(noise_prediction.float(), noise.float())
Thanks
Oded