
Input dimension of LSTM differs from the original paper #4

Open · BigWZhu opened this issue Feb 13, 2020 · 0 comments

BigWZhu commented Feb 13, 2020

As I read the original paper and the DeepMind repo, the LSTM optimizer should be coordinatewise: it takes a single scalar (one variable's gradient) as input and keeps a separate LSTM state for each variable. In other words, the same optimizer, with shared weights, is applied to each parameter one after another, so it works for an arbitrary number of parameters. In this implementation, however, the input dimension of the optimizer is fixed to the number of optimizee parameters.
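For reference, here is a minimal sketch of the coordinatewise scheme described in the paper (Andrychowicz et al., 2016), not the code in this repo: a single LSTM with `input_size=1` is shared across all parameters by treating each parameter as an element of the batch dimension, while the hidden state `(h, c)` stays separate per parameter. The class and method names are illustrative, not taken from either repo.

```python
import torch
import torch.nn as nn

class CoordinatewiseLSTMOptimizer(nn.Module):
    """Illustrative sketch: one shared LSTM with input size 1 is
    applied to every optimizee parameter independently, each
    parameter keeping its own hidden state."""

    def __init__(self, hidden_size=20):
        super().__init__()
        # input_size=1: the LSTM sees one scalar gradient at a time,
        # regardless of how many parameters the optimizee has.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def init_state(self, n_params):
        # One (h, c) pair per parameter: shape (num_layers, batch, hidden).
        h = torch.zeros(1, n_params, self.lstm.hidden_size)
        c = torch.zeros(1, n_params, self.lstm.hidden_size)
        return (h, c)

    def forward(self, grads, state):
        # grads: flattened gradients of the optimizee, shape (n_params,).
        # Each parameter becomes one batch element, so the same LSTM
        # weights process every coordinate while the state per
        # coordinate remains separate.
        x = grads.view(1, -1, 1)       # (seq=1, batch=n_params, feat=1)
        y, new_state = self.lstm(x, state)
        update = self.out(y).view(-1)  # one scalar update per parameter
        return update, new_state

opt = CoordinatewiseLSTMOptimizer()
grads = torch.randn(5)                 # works for any number of parameters
state = opt.init_state(grads.numel())
update, state = opt(grads, state)
print(update.shape)                    # torch.Size([5])
```

Because the LSTM's input dimension is 1, the same trained optimizer can be applied to optimizees of any size, which is the property the issue points out is lost when the input dimension is fixed to the number of optimizee parameters.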
