
Error in dual task loss? #64

Open
wangq95 opened this issue Mar 27, 2020 · 1 comment

wangq95 commented Mar 27, 2020

Hi, @tovacinni
Thanks for your work. I have a question about the implementation of the dual task loss in loss.py. I think the dual task loss should take the estimation from the classical stream (N, K, H, W) and the edge prediction from the shape stream (N, 1, H, W) as inputs, but in loss.py the dual task loss actually takes the segmentation estimation and the segmentation label as inputs:
losses['dual_loss'] = self.dual_weight * self.dual_task(segin, segmask)
Is this a bug or what?
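
For reference, a minimal sketch of the tensor shapes being discussed, with hypothetical sizes; segin and segmask are the argument names used in the call above, and edgein here stands for the shape-stream edge prediction mentioned in the question:

import torch

N, K, H, W = 2, 19, 64, 64                # hypothetical batch, class count and spatial size
segin = torch.randn(N, K, H, W)           # segmentation estimation from the classical stream
edgein = torch.randn(N, 1, H, W)          # edge prediction from the shape stream
segmask = torch.randint(0, K, (N, H, W))  # ground-truth segmentation label map

# Call the question expects:      dual_task(segin, edgein)
# Call actually made in loss.py:  dual_task(segin, segmask)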

WaterKnight1998 commented:

The code is not working for me. I tried to fix the code inside DualTaskLoss as follows:

N, C, H, W = input_logits.shape
th = 1e-8  # 1e-10
eps = 1e-10

# Zero out the logits at ignore pixels and build the one-hot ground-truth masks.
ignore_mask = (gts == ignore_pixel).detach()
input_logits = torch.where(ignore_mask.view(N, 1, H, W).expand(N, C, H, W),
                           torch.zeros(N, C, H, W).cuda(),
                           input_logits)
gt_semantic_masks = gts.detach()
gt_semantic_masks = torch.where(ignore_mask, torch.zeros(N, H, W).long().cuda(), gt_semantic_masks)
gt_semantic_masks = _one_hot_embedding(gt_semantic_masks, 19).detach()

# Gumbel-softmax sample of the predicted segmentation, then gradient magnitudes
# of the prediction (g) and of the one-hot ground truth (g_hat).
g = _gumbel_softmax_sample(input_logits.view(N, C, -1), tau=0.5)
g = g.reshape((N, C, H, W))
g = compute_grad_mag(g, cuda=self._cuda)

g_hat = compute_grad_mag(gt_semantic_masks, cuda=self._cuda)

g = g.view(N, -1)
g_hat = g_hat.reshape(N, -1)
# Element-wise L1 between the two gradient magnitude maps; this is the line that
# fails for me (see below), since it mixes reduction='none' with the legacy
# reduce=False argument.
loss_ewise = F.l1_loss(g, g_hat, reduction='none', reduce=False)

# Average the element-wise loss over the pixels where the predicted or the
# ground-truth boundary response is active.
p_plus_g_mask = (g >= th).detach().float()
loss_p_plus_g = torch.sum(loss_ewise * p_plus_g_mask) / (torch.sum(p_plus_g_mask) + eps)

p_plus_g_hat_mask = (g_hat >= th).detach().float()
loss_p_plus_g_hat = torch.sum(loss_ewise * p_plus_g_hat_mask) / (torch.sum(p_plus_g_hat_mask) + eps)

total_loss = 0.5 * loss_p_plus_g + 0.5 * loss_p_plus_g_hat

However, the line loss_ewise = F.l1_loss(g, g_hat, reduction='none', reduce=False) is not working either. What suggestions do you have to solve it?
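
A possible fix, assuming the failure comes from passing the legacy reduce=False argument alongside reduction='none' (reduce and size_average are deprecated in favour of reduction in current PyTorch), is to drop reduce=False; reduction='none' by itself already returns the element-wise loss. A minimal, self-contained sketch with hypothetical stand-ins for g and g_hat:

import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the flattened gradient magnitudes g and g_hat
# from the snippet above, shape (N, H*W).
g = torch.rand(2, 64 * 64)
g_hat = torch.rand(2, 64 * 64)

# reduction='none' already yields the per-element L1 loss, so the deprecated
# reduce=False argument can simply be dropped.
loss_ewise = F.l1_loss(g, g_hat, reduction='none')
print(loss_ewise.shape)  # torch.Size([2, 4096])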
