
Confused about the my_layer_norm function and the GAN loss function #2

Open
SeeU1119 opened this issue Apr 7, 2021 · 4 comments

SeeU1119 commented Apr 7, 2021

Hi, thank you for your excellent work. I got a good result when running the demo, but while reading the source code I ran into some questions about the GAN loss function.
[screenshot of the GAN loss code]
In the paper, the discriminator loss for fake_img should be self.loss_fn(d_fake, gauss(1 - mask)), but I find you just use gauss(mask). Is there something wrong with my understanding?
What's more, the discriminator loss for real_img should be self.loss_fn(d_real, d_real_label) where d_real_label is torch.ones(...), but you set it to torch.zeros(...).
By the way, could you explain what my_layer_norm does in the AOT block?
Thanks.
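
To make the two label conventions at issue concrete, here is a minimal sketch in PyTorch. The tensors are illustrative stand-ins, and the Gaussian blur is omitted for brevity (the repo blurs the mask before using it as a target):

    import torch

    masks = torch.zeros(1, 1, 8, 8)
    masks[..., 2:6, 2:6] = 1.0          # 1 inside the hole, 0 in valid pixels
    d_out = torch.rand(1, 1, 8, 8)      # stand-in for a patch-wise D output map

    # Paper, as the question reads it: real -> 1 everywhere,
    # fakes -> gauss(1 - mask), i.e. 0 in holes, 1 in valid regions.
    paper_real_label = torch.ones_like(d_out)
    paper_fake_label = 1.0 - masks

    # Repository code: real -> 0 everywhere,
    # fakes -> gauss(mask), i.e. 1 in holes, 0 in valid regions.
    code_real_label = torch.zeros_like(d_out)
    code_fake_label = masks

Taken on their own, these are flipped but internally consistent conventions for the discriminator; the real asymmetry shows up in the generator label discussed below.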

zjlinkin commented Apr 9, 2021

I have the same confusion about the GAN loss part... the GAN loss in the code does not seem to perform adversarial training.

ewrfcas commented Jun 25, 2021

Same question here.

@964728623
g_fake_label = torch.ones_like(g_fake).cuda() is wrong for a GAN; g_fake_label = torch.zeros_like(g_fake).cuda() is right. The author papers over the bug with parser.add_argument('--adv_weight', type=float, default=0.01, help='loss weight for adversarial loss'), so the adversarial loss has essentially no effect when training netG.
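
A quick toy check of this point (illustrative only, assuming MSE for loss_fn): with the original labels, G and D push the discriminator's output on fakes toward the same target inside holes, so there is no adversarial game.

    import torch
    import torch.nn.functional as F

    d_out_fake = torch.full((1, 1, 4, 4), 0.5)    # D's output on a fake image
    masks = torch.ones(1, 1, 4, 4)                # everything is hole, for simplicity

    d_target = masks                              # D: fakes -> 1 in holes
    g_target_orig = torch.ones_like(d_out_fake)   # original code: G also wants 1
    g_target_fixed = torch.zeros_like(d_out_fake) # fix: G wants 0 ("real")

    print(F.mse_loss(d_out_fake, d_target))       # D's objective on fakes
    print(F.mse_loss(d_out_fake, g_target_orig))  # identical target -> cooperative
    print(F.mse_loss(d_out_fake, g_target_fixed)) # opposite target -> adversarial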

@964728623

    # Soft label for fakes: the blurred mask marks hole regions as "fake".
    d_fake_label = gaussian_blur(masks, (self.ksize, self.ksize), (10, 10)).detach().cuda()
    # Under this convention, real images are pushed toward 0 everywhere.
    d_real_label = torch.zeros_like(d_real).cuda()
    # Original (buggy) generator label, identical to D's own target:
    # g_fake_label = torch.ones_like(g_fake).cuda()
    # Fixed: the generator pushes its fakes toward the "real" label (0).
    g_fake_label = torch.zeros_like(g_fake).cuda()

    # Restrict the fake-side terms to hole pixels so valid regions are not penalized.
    dis_loss = self.loss_fn(d_fake[masks > 0.5], d_fake_label[masks > 0.5]) + self.loss_fn(d_real, d_real_label)
    gen_loss = self.loss_fn(g_fake[masks > 0.5], g_fake_label[masks > 0.5])
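
For anyone who wants to try this outside the loss class, here is a self-contained sketch of the proposed fix. The hand-rolled gaussian_blur stands in for the repo's helper, MSE stands in for self.loss_fn, and the discriminator outputs are assumed to have the same spatial size as masks:

    import torch
    import torch.nn.functional as F

    def gaussian_blur(x, ksize=71, sigma=10.0):
        # Hand-rolled stand-in for the repo's gaussian_blur helper.
        ax = torch.arange(ksize, dtype=x.dtype, device=x.device) - (ksize - 1) / 2
        g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
        g = g / g.sum()
        kernel = torch.outer(g, g).expand(x.size(1), 1, ksize, ksize).contiguous()
        return F.conv2d(x, kernel, padding=ksize // 2, groups=x.size(1))

    def smgan_losses_fixed(d_real, d_fake, g_fake, masks):
        # real -> 0 everywhere; hole regions of fakes -> soft 1 (blurred mask);
        # the generator pushes its fakes toward the "real" label (0) in the holes.
        d_fake_label = gaussian_blur(masks).detach()
        hole = masks > 0.5
        dis_loss = (F.mse_loss(d_fake[hole], d_fake_label[hole])
                    + F.mse_loss(d_real, torch.zeros_like(d_real)))
        gen_loss = F.mse_loss(g_fake[hole], torch.zeros_like(g_fake[hole]))
        return dis_loss, gen_loss

Note that, as in the snippet above, d_real is supervised over the whole output map while the fake-side terms are restricted to hole pixels.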
