
Questions about the detail of ESB. #6

Open
wjjlisa opened this issue Aug 25, 2021 · 3 comments

Comments

@wjjlisa

wjjlisa commented Aug 25, 2021

Hi,
Thanks for your great work. I have been reading your paper and the released code these days, and I have a question about the Edge-Supervised Branch.
In your paper, the predicted manipulation edge map, denoted as {Gedge(xi)}, is obtained by transforming the output of the last ERB with a sigmoid layer.
But in your code I don't see the sigmoid layer; the last ERB is taken directly as the output:
if self.sobel:
    res1 = self.erb_db_1(run_sobel(self.sobel_x1, self.sobel_y1, c1))
    res1 = self.erb_trans_1(res1 + self.upsample(self.erb_db_2(run_sobel(self.sobel_x2, self.sobel_y2, c2))))
    res1 = self.erb_trans_2(res1 + self.upsample_4(self.erb_db_3(run_sobel(self.sobel_x3, self.sobel_y3, c3))))
    res1 = self.erb_trans_3(res1 + self.upsample_4(self.erb_db_4(run_sobel(self.sobel_x4, self.sobel_y4, c4))), relu=False)
else:
    res1 = self.erb_db_1(c1)
    res1 = self.erb_trans_1(res1 + self.upsample(self.erb_db_2(c2)))
    res1 = self.erb_trans_2(res1 + self.upsample_4(self.erb_db_3(c3)))
    res1 = self.erb_trans_3(res1 + self.upsample_4(self.erb_db_4(c4)), relu=False)

if self.constrain:
    x = rgb2gray(x)
    x = self.constrain_conv(x)
    constrain_features, _ = self.noise_extractor.base_forward(x)
    constrain_feature = constrain_features[-1]
    c4 = torch.cat([c4, constrain_feature], dim=1)

outputs = []

x = self.head(c4)
x0 = F.interpolate(x[0], size, mode='bilinear', align_corners=True)
outputs.append(x0)

if self.aux:
    x1 = F.interpolate(x[1], size, mode='bilinear', align_corners=True)
    x2 = F.interpolate(x[2], size, mode='bilinear', align_corners=True)
    outputs.append(x1)
    outputs.append(x2)

return res1, x0

I think 'res1' is the edge output, with no sigmoid layer applied.
Did I miss something? Could you help me? Thank you very much.
Best regards.

@dong03
Owner

dong03 commented Aug 25, 2021

Indeed it is.

We tried cross-entropy-like losses (BCE, Focal, ...) in early experiments, and the sigmoid is normally built into the loss implementation (e.g. PyTorch's BCEWithLogitsLoss). So there is no final sigmoid in the model, and this implementation is kept for convenience when the loss function changes.

Since the edge branch is used for training supervision rather than to refine the output, its sigmoid does not appear in the testing code. The sigmoid applied to the output can still be found here:

seg = torch.sigmoid(seg).detach().cpu()

Best regards.
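To make this concrete: a logits-based BCE loss (such as PyTorch's BCEWithLogitsLoss) computes the same value as applying a sigmoid and then plain BCE, which is why the model can safely end in raw logits. A minimal pure-Python sketch of that equivalence (toy scalar values, not the repo's code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, y):
    # Plain binary cross-entropy on a probability p in (0, 1).
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def bce_with_logits(z, y):
    # Numerically stable form that takes the raw logit z directly,
    # folding the sigmoid into the loss (as BCEWithLogitsLoss does).
    return max(z, 0) - z * y + math.log(1 + math.exp(-abs(z)))

# The two formulations agree, so no sigmoid layer is needed in the model.
logit, target = 1.5, 1.0
print(bce(sigmoid(logit), target), bce_with_logits(logit, target))
```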

@wjjlisa
Author

wjjlisa commented Aug 26, 2021

Thanks. But I have another question. In

_, seg = run_model(model, img)
seg = torch.sigmoid(seg).detach().cpu()

I think the first returned value '_' is {Gedge(xi)} and the second returned value 'seg' is the predicted mask, so the first returned value should also be put through a sigmoid layer, that is to say, torch.sigmoid(_).detach().cpu().
It really confuses me.

Best regards.

@dong03
Owner

dong03 commented Aug 26, 2021

Indeed the first "_" is the predicted edge map, but the edge map is no longer needed during inference (as I said, it is only used for supervision during training), so we use the variable name "_" to indicate it's just a discarded placeholder.
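In other words, if the edge map were ever wanted at inference, the same sigmoid could be applied to that first return value. A small stand-alone sketch (toy lists standing in for the logit tensors that run_model would return):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stand-in for "_, seg = run_model(model, img)": the model returns raw
# logits for both the edge map and the segmentation mask (toy values here).
edge_logits, seg_logits = [1.5, -0.5], [2.0, 0.0]

# Only the segmentation head is used at test time:
seg = [sigmoid(z) for z in seg_logits]
# ...but {Gedge(xi)} could be recovered the same way if it were ever needed:
edge = [sigmoid(z) for z in edge_logits]
```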
