
Something about loss function. #11

Open
kerryhhh opened this issue May 18, 2022 · 2 comments
Comments

@kerryhhh

Thank you for sharing your code! However, I noticed that the loss-function hyperparameters (lamda_reconstruction and lamda_low_frequency) in your code differ from those in the paper. Which ones should I use?

@TomTomTommi
Owner

Hi, thanks for your interest. The hyperparameters depend on the tradeoff between concealing quality and recovery quality. In the first stage, all hyperparameters can be set to 1 until the network converges. Then you can finetune the network with different lambdas according to the performance you want; for example, if you prefer higher reconstruction quality, set lamda_reconstruction higher.
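
For concreteness, here is a minimal sketch of that two-stage weighting. The lamda_* names follow the identifiers mentioned above; lamda_guide is an assumed name for the concealing-loss weight, and the loss terms themselves are placeholders, not the repo's actual code:

```python
def total_loss(guide_loss, reconstruction_loss, low_frequency_loss,
               lamda_guide=1.0, lamda_reconstruction=1.0,
               lamda_low_frequency=1.0):
    """Weighted sum of the three loss terms.

    Stage 1: leave every lambda at 1.0 until the network converges.
    Stage 2: finetune with different weights, e.g. raise
    lamda_reconstruction to favour recovery quality over concealing.
    """
    return (lamda_guide * guide_loss
            + lamda_reconstruction * reconstruction_loss
            + lamda_low_frequency * low_frequency_loss)
```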

@fkeufss

fkeufss commented Jul 30, 2022

Thank you for sharing your code. I am trying it out and I do run into the loss-explosion problem. Do you know its underlying cause? Is there a better solution than manually restarting training with a lower learning rate every time?
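
Not an answer to the root cause, but the manual restart can be automated. Below is a hypothetical sketch (none of these names come from this repo, and the threshold values are illustrative) that reloads the last good checkpoint and halves the learning rate whenever the loss becomes non-finite or spikes far above its running average:

```python
import math
import torch

class LossSpikeGuard:
    """Automates the manual workaround: when the loss becomes non-finite
    or jumps far above its running average, reload the last good
    checkpoint, halve the learning rate, and continue training."""

    def __init__(self, model, optimizer, ckpt_path="last_good.pt",
                 spike_factor=10.0, ema=0.9):
        self.model = model
        self.optimizer = optimizer
        self.ckpt_path = ckpt_path
        self.spike_factor = spike_factor  # illustrative spike threshold
        self.ema = ema                    # smoothing for the running average
        self.avg = None

    def step(self, loss_value):
        """Call once per iteration with loss.item(). Returns False if a
        spike was detected and the state was rolled back."""
        spiked = (not math.isfinite(loss_value)
                  or (self.avg is not None
                      and loss_value > self.spike_factor * self.avg))
        if spiked:
            # Explosion detected: restore weights and optimizer state,
            # then resume from the good state with a lower learning rate.
            state = torch.load(self.ckpt_path)
            self.model.load_state_dict(state["model"])
            self.optimizer.load_state_dict(state["optimizer"])
            for g in self.optimizer.param_groups:
                g["lr"] *= 0.5
            return False
        # Update the running average and remember this state as "good".
        self.avg = (loss_value if self.avg is None
                    else self.ema * self.avg + (1 - self.ema) * loss_value)
        torch.save({"model": self.model.state_dict(),
                    "optimizer": self.optimizer.state_dict()},
                   self.ckpt_path)
        return True
```

Checkpointing every step is only for illustration; in practice you would save every N iterations. Clipping gradients with torch.nn.utils.clip_grad_norm_ before optimizer.step() is another common mitigation for loss spikes.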
