
can we calculate KL divergence of VampPrior and posterior without sampling? #5

ShellingFord221 opened this issue Aug 2, 2021 · 0 comments

Hi, in lines 78–80 of VAE.py, when calculating the KL divergence between the VampPrior p(z) and the posterior q(z|x), we evaluate z_q under both N(z_p_mean, z_p_logvar) and N(z_q_mean, z_q_logvar), then take the difference of the two log-densities:

```python
log_p_z = self.log_p_z(z_q)
log_q_z = log_Normal_diag(z_q, z_q_mean, z_q_logvar, dim=1)
KL = -(log_p_z - log_q_z)
```
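For context, this is a single-sample Monte Carlo estimate of the KL. A minimal sketch of what I understand the code to be doing (assuming PyTorch tensors and the log-variance parameterization; `log_normal_diag` and `kl_mc` are my own illustrative names, not the repo's):

```python
import torch

def log_normal_diag(x, mean, logvar, dim=None):
    # Log-density of a diagonal Gaussian (up to the constant -0.5*log(2*pi),
    # which cancels in the KL difference below).
    log_p = -0.5 * (logvar + (x - mean) ** 2 / logvar.exp())
    return log_p.sum(dim) if dim is not None else log_p

def kl_mc(z_q, z_q_mean, z_q_logvar, z_p_mean, z_p_logvar):
    # Single-sample Monte Carlo estimate:
    # KL(q || p) ≈ log q(z_q|x) - log p(z_q), with z_q ~ q(z|x).
    # (p is treated as a single Gaussian here for illustration; the actual
    # VampPrior log_p_z is a mixture over pseudo-inputs.)
    log_q_z = log_normal_diag(z_q, z_q_mean, z_q_logvar, dim=1)
    log_p_z = log_normal_diag(z_q, z_p_mean, z_p_logvar, dim=1)
    return -(log_p_z - log_q_z)
```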

Since we already have the means and variances of the prior and the posterior, can we directly calculate the KL divergence between the two Gaussian distributions in closed form? i.e.

$$\mathrm{KL}\big(q(z|x)\,\|\,p(z)\big) = \frac{1}{2}\sum_{j}\left(\log\frac{\sigma_{p,j}^{2}}{\sigma_{q,j}^{2}} + \frac{\sigma_{q,j}^{2} + (\mu_{q,j}-\mu_{p,j})^{2}}{\sigma_{p,j}^{2}} - 1\right)$$
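Concretely, something like this sketch (assuming both distributions are diagonal Gaussians given by mean and log-variance; `kl_closed_form` is a hypothetical helper, not code from this repo):

```python
import torch

def kl_closed_form(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal
    # Gaussians, summed over the latent dimension:
    # 0.5 * sum( log(var_p/var_q) + (var_q + (mu_q - mu_p)^2)/var_p - 1 )
    var_q = logvar_q.exp()
    var_p = logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=1)
```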

I ask because in line 226, z_q is simply drawn from N(z_q_mean, z_q_logvar):

```python
z_q = self.reparameterize(z_q_mean, z_q_logvar)
```
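i.e., roughly the usual reparameterization trick (a sketch; the repo's `reparameterize` may differ in details):

```python
import torch

def reparameterize(mean, logvar):
    # z = mean + sigma * eps with eps ~ N(0, I), where sigma = exp(0.5 * logvar)
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mean + std * eps
```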

So can we skip the sampling step? Thanks!
