
Keras fails to train on custom generator (Torch backend) #19929

Closed
DLumi opened this issue Jun 27, 2024 · 2 comments · Fixed by #19945
DLumi commented Jun 27, 2024

A Keras model throws an error during training when using the Torch backend with a custom (Python) data generator.

Steps to reproduce:

  1. Define a model
  2. Define a simple data generator that yields one batch of random data
  3. Start training with epochs=1 and steps_per_epoch=1
  4. Get an UnboundLocalError: local variable 'logs' referenced before assignment, suggesting that the training loop itself was skipped and that epoch_iterator.enumerate_epoch() has trouble wrapping the provided generator.

The same steps would work just fine with TF.
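The steps above can be sketched as follows (a minimal reproduction; the layer sizes, shapes, and loss are illustrative, not taken from the issue):

```python
# Minimal sketch of the reported reproduction. The backend must be
# selected via the environment before keras is imported.
import os
os.environ["KERAS_BACKEND"] = "torch"

import numpy as np
import keras

# 1. Define a model (architecture is illustrative).
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# 2. Define a simple data generator that yields one batch of random data.
def data_gen():
    while True:
        yield np.random.rand(16, 8), np.random.rand(16, 1)

# 3. Start training with epochs=1 and steps_per_epoch=1.
# 4. On the affected versions this raised:
#    UnboundLocalError: local variable 'logs' referenced before assignment
model.fit(data_gen(), epochs=1, steps_per_epoch=1)
```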

Colab

Edit: torch==2.3 seems to be incompatible with Keras, but the same failure occurs on torch==2.2, which should be supported.

DLumi (Author) commented Jun 27, 2024

This seems to be a data adapter issue. I can't make training work on the Torch backend regardless of what data I use: I tried feeding both torch DataLoaders and tf.data Datasets, and the results are the same.
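One of the alternatives tried can be sketched like this (tensor shapes are illustrative; `model` stands for a Keras model compiled with the Torch backend as in the original report):

```python
# Sketch of feeding a torch DataLoader instead of a Python generator.
import torch
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(torch.rand(16, 8), torch.rand(16, 1))
loader = DataLoader(dataset, batch_size=16)

# On the affected versions this failed the same way as the generator:
# model.fit(loader, epochs=1)
```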

Grvzard (Contributor) commented Jul 1, 2024

A temporary workaround is to specify the batch_size explicitly in Model.fit().
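The suggested workaround can be sketched as follows (the model, shapes, and batch size of 16 are illustrative assumptions, not taken from the issue):

```python
# Sketch of the workaround: pass batch_size explicitly to Model.fit()
# alongside the generator, per the comment above.
import os
os.environ["KERAS_BACKEND"] = "torch"

import numpy as np
import keras

model = keras.Sequential([keras.Input(shape=(8,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

def data_gen():
    while True:
        yield np.random.rand(16, 8), np.random.rand(16, 1)

# Specifying batch_size explicitly sidesteps the failing inference path.
history = model.fit(data_gen(), epochs=1, steps_per_epoch=1, batch_size=16)
```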
