Issue in TrainerD #1
Comments
Hi mate, it looks like you need to re-run all the cells from the earlier sections of the Jupyter Notebook, as it doesn't seem like the tensors are in the kernel's working memory.

That definitely fixes a lot of TF graph issues, but in this case I think it's an issue with how TF version 1.1 handles scoping. Go ahead and take a look at the last commit; that should fix it.
I have tf 1.1.0. I tried to run the latest version of the code and I get the following error:

```
---------------------------------------------------------------------------
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_impl.pyc in sigmoid_cross_entropy_with_logits(_sentinel, labels, logits, name)
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.pyc in _ensure_xent_args(name, sentinel, labels, logits)
ValueError: Only call `sigmoid_cross_entropy_with_logits` with named arguments (labels=..., logits=...)
```

With the previous commit (e2b9c7f), I got the error that was reported at the beginning of this post. I am new to tf, so I don't know whether I fixed the issues with the scopes, but I wrote the following script: https://drive.google.com/open?id=0B8gMQqp3oacBN25qMHpkbE1aam8

Now the problem is that no matter what values of the noise vector I feed into the generator, it always outputs the same image. Any suggestions/ideas about what could be wrong?
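For context on the ValueError: from TF 1.0 onward, `tf.nn.sigmoid_cross_entropy_with_logits` must be called with `labels=` and `logits=` as keyword arguments. The loss it computes can be sketched in plain NumPy using the numerically stable formula TF documents (`max(x, 0) - x*z + log(1 + exp(-|x|))`); this is a sketch of the math, not the repo's code:

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(labels, logits):
    """Numerically stable sigmoid cross-entropy per element,
    following the formula max(x, 0) - x*z + log(1 + exp(-|x|))."""
    x = np.asarray(logits, dtype=float)
    z = np.asarray(labels, dtype=float)
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

# A confident, correct logit gives a small loss for both a real (1)
# and a fake (0) label.
loss = sigmoid_cross_entropy_with_logits(labels=[1.0, 0.0], logits=[5.0, -5.0])
print(loss)  # both entries ~0.0067
```

Passing the tensors positionally is exactly what triggers the `_ensure_xent_args` check in the traceback above.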
Hopefully the latest commit fixes the labels and logits error. As for it outputting the same image, that is the well-known problem of mode collapse. The implementation of GANs in this repo isn't the most optimized code, per se; I emphasized simplicity over that. If you want to see repos with better performance in terms of image quality, be sure to check out https://github.com/carpedm20/DCGAN-tensorflow and https://github.com/soumith/ganhacks for GAN training tricks :)

@adeshpande3 thank you for your comment. The problem with the logits is solved now, but when I run it, I still have the problem that the optimizer does not find the variables:

I am not sure whether it is correct, but may I suggest wrapping the discriminator and generator functions in a tf.variable_scope? I did it this way, and it works:
and
Also later: But if I run the code, the generated images look strange, so I am not sure whether my solution is correct.
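The idea behind the suggestion above: wrapping each network in a `tf.variable_scope` prefixes its variables' names with the scope name, so each optimizer's `var_list` can be collected by that prefix (in TF 1.x, typically via `tf.get_collection` with a `scope` argument). The scope and variable names below are hypothetical, and the filtering is shown with plain strings so the idea is runnable without TF:

```python
# Hypothetical variable names as they would appear once the networks are
# wrapped in tf.variable_scope("discriminator") / tf.variable_scope("generator").
all_vars = [
    "discriminator/d_w1:0",
    "discriminator/d_b1:0",
    "generator/g_w1:0",
    "generator/g_b1:0",
]

# Each optimizer should only update its own network's variables, selected by
# scope prefix; this mirrors what the var_list arguments to the two
# AdamOptimizer().minimize(...) calls need to contain.
d_vars = [v for v in all_vars if v.startswith("discriminator/")]
g_vars = [v for v in all_vars if v.startswith("generator/")]

print(d_vars)  # ['discriminator/d_w1:0', 'discriminator/d_b1:0']
print(g_vars)  # ['generator/g_w1:0', 'generator/g_b1:0']
```

If the networks are not scoped, every variable lands in the same flat namespace and this kind of per-network selection (and variable reuse between the two discriminator calls) breaks, which matches the "optimizer does not find the variables" symptom.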
Thanks for the suggestion. I agree that that's the best way to fix the problem. As for the strange generated images, I'm not particularly sure what the problem might be. Like I said before, this isn't the most optimized and hyperparameter tuned code, so It could be anything from the structure of the network to length of training time to application of batch norm, etc. |
I just started working with GANs about a week ago, so I am not an expert. But it seems that one of the biggest problems comes from the discriminator function. It is very easy to end up in the part of the sigmoid curve where it is saturated, so the generated distribution cannot be moved toward the real one: https://drive.google.com/open?id=0B8gMQqp3oacBMWZvSURuQTJjY2M The image is from the Wasserstein GAN paper. In the image, the red curve is the decision boundary of a GAN based on a sigmoid discriminator, and in light blue is the Earth Mover (Wasserstein) distance of the critic. As you can see, it is not flat like the sigmoid, so the generated distribution can be moved toward the real one. I am going to try to implement it based on your code; let's see whether it is better. Thanks for your nice tutorial :)
Hi,
I am getting an error at the line `trainerD = tf.train.AdamOptimizer().minimze(d_loss, var_list=d_vars)`
Please look at the attached screenshot.
Thanks