Hello! I ran the code in pix2pix.ipynb on the facades data and noticed that my results were much worse than the pretrained model's. Has anybody been able to replicate the image quality of the pretrained models, and if so, how did they do it? (Ideally the authors will chime in, but I'd love to hear from anybody!) Thank you!
Answered by mccallion, Oct 31, 2021
Found the answer – I wasn't using the authors' suggested params after all! Instead of their batch size = 1, I was using batch size = 16 and getting worse results. Using all of their code out of the box, without modifications, worked.
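For reference, a minimal facades training run with the authors' defaults might look like the sketch below. This assumes the junyanz/pytorch-CycleGAN-and-pix2pix repository that pix2pix.ipynb wraps; flag names are that repo's and may differ in other pix2pix ports, so verify them against the repo's options before running.

```shell
# Sketch of a training run using the authors' default hyperparameters.
# Assumption: junyanz/pytorch-CycleGAN-and-pix2pix repo layout and flags.
python train.py \
    --dataroot ./datasets/facades \
    --name facades_pix2pix \
    --model pix2pix \
    --direction BtoA
# batch_size defaults to 1 in this repo; overriding it (e.g. --batch_size 16)
# changes the normalization statistics seen during training and, as noted
# above, can noticeably degrade result quality on facades.
```

The key point is simply not to override the default batch size: with batch size 1, batch normalization behaves more like instance normalization, which is what the published results were trained with.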