
ValueError: Trying to share variable rnnlm/multi_rnn_cell/cell_0/basic_lstm_cell/kernel, but specified shape (512, 1024) and found shape (259, 1024). #26

Open · lxtGH opened this issue Oct 26, 2017 · 3 comments



lxtGH commented Oct 26, 2017

When I run train.py, I get this error. I think it is some kind of version problem; my TF version is 1.3.

File "/home/lxt/tf_project/HyperNetwork/write-rnn-tensorflow/model.py", line 50, in init
outputs, state_out = tf.contrib.legacy_seq2seq.rnn_decoder(inputs, self.state_in, cell, loop_function=None, scope='rnnlm')
File "/home/lxt/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 152, in rnn_decoder
output, state = cell(inp, state)

@o0starshine0o

I have come across the same problem.


yaylinda commented Nov 6, 2017

I have the same error while running sample.py, also using TF 1.3.0.

@grisaitis

I got this error as well, but I think it was fixed by changing
cell = tf.contrib.rnn.MultiRNNCell([get_cell() for _ in range(args.rnn_size)])
to
cell = tf.contrib.rnn.MultiRNNCell([get_cell() for _ in range(args.num_layers)])
in model.py.

This was implemented in PR #28 or #30 and is now part of master.
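
For reference, here is a minimal sketch of the corrected construction (not the exact repository code; the `rnn_size` and `num_layers` values below are assumed defaults):

```python
import tensorflow as tf  # TensorFlow 1.x

rnn_size = 256    # hidden units per LSTM layer (assumed value)
num_layers = 2    # number of stacked LSTM layers (assumed value)

def get_cell():
    # One BasicLSTMCell per stacked layer, each with rnn_size units.
    return tf.contrib.rnn.BasicLSTMCell(rnn_size)

# Fixed: stack num_layers cells. The buggy version iterated over
# range(args.rnn_size), stacking one cell per hidden unit instead of
# one per layer, which produced the kernel shape mismatch above.
cell = tf.contrib.rnn.MultiRNNCell([get_cell() for _ in range(num_layers)])
```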
