
The loss of the encoder and decoder is very high #7

Open
zhaotianzi opened this issue Sep 22, 2021 · 3 comments

Comments

@zhaotianzi

I have used the model on my own dataset, but after a long time of training the loss is still very high. Can you tell me how to reduce my loss?

DEBUG:root:critic x loss -30.026 critic z loss 0.416
encoder loss 1265.846 decoder loss 1235.256

@arunppsg
Owner

arunppsg commented Sep 24, 2021

Two pointers:

  • As training progresses, the encoder and decoder losses should start decreasing, because the generator learns a good mapping.
  • The critic loss should increase in magnitude. A high critic loss implies that the critic (discriminator) is able to distinguish fake samples from real samples well, so as it learns, the critic loss magnitude grows. See the sketch after this list.
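
To make those directions concrete, here is a minimal WGAN-style sketch in PyTorch. This is my own illustration, not this repository's training code; `critic_loss`, `generator_loss`, and `reconstruction_loss` are hypothetical helpers:

```python
import torch
import torch.nn as nn

# Illustration only (not this repository's code): WGAN-style losses.

def critic_loss(critic, real_x, fake_x):
    # The critic maximizes score(real) - score(fake), i.e. minimizes
    # the negative. As the critic improves, the gap (and hence the
    # loss magnitude) grows -- matching the second pointer above.
    return -(critic(real_x).mean() - critic(fake_x).mean())

def generator_loss(critic, fake_x):
    # The generator minimizes -score(fake); this falls as the
    # generator learns to produce samples the critic scores as real.
    return -critic(fake_x).mean()

def reconstruction_loss(x, x_hat):
    # Encoder/decoder losses also carry a reconstruction term, which
    # shrinks as the encoder and decoder learn a good mapping
    # x -> z -> x_hat.
    return nn.functional.mse_loss(x_hat, x)

# Toy usage with a linear critic over 10-dimensional samples:
critic = nn.Linear(10, 1)
real_x, fake_x = torch.randn(32, 10), torch.randn(32, 10)
print(critic_loss(critic, real_x, fake_x).item())
print(generator_loss(critic, fake_x).item())
```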

@jakcic

jakcic commented Dec 14, 2021

> Two pointers:
>
>   • As training progresses, the encoder and decoder losses should start decreasing, because the generator learns a good mapping.
>   • The critic loss should increase in magnitude. A high critic loss implies that the critic (discriminator) is able to distinguish fake samples from real samples well, so as it learns, the critic loss magnitude grows.

Hi Arun,
In my own datasets the experimental results are just the opposite: after training, the encoder and decoder losses increased and the critic loss decreased.
At the same time, the model performed badly when I used it to perform anomaly detection on anomalous datasets.

@arunppsg
Owner

The encoder and decoder losses should decrease as they learn a better mapping - I observed this in the training log file. I am not sure what is wrong in your case; maybe you are reading the encoder loss as the critic loss and vice versa? Training of GANs is highly unstable and requires a good amount of computation power. Try retraining it.
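
One quick way to rule out mixed-up loss names is to parse the training log and check each loss's trend. A minimal sketch, assuming the DEBUG line format shown at the top of this issue (the `loss_trends` helper is hypothetical, not part of this repo):

```python
import re

# Matches log fragments like "critic x loss -30.026" or
# "encoder loss 1265.846" from lines such as:
#   DEBUG:root:critic x loss -30.026 critic z loss 0.416
#   encoder loss 1265.846 decoder loss 1235.256
PATTERN = re.compile(r"(critic x|critic z|encoder|decoder) loss (-?\d+\.\d+)")

def loss_trends(log_path):
    history = {}
    with open(log_path) as f:
        for line in f:
            for name, value in PATTERN.findall(line):
                history.setdefault(name, []).append(float(value))
    # Compare the mean of the first and last quarter of each series
    # to see whether the loss is trending up or down.
    for name, values in history.items():
        k = max(1, len(values) // 4)
        start, end = sum(values[:k]) / k, sum(values[-k:]) / k
        print(f"{name}: {start:.3f} -> {end:.3f}")

# loss_trends("train.log")
```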
