Bayesian Opt Tuner does not show the same results for 'get_best_hyperparameters' and 'get_best_models' methods #919
After some checking, I am now thinking the problem might be in the hypermodel itself. I believe I am not understanding the intricacies of creating conditional hyperparameters through a for-loop (i.e. for the number of stacked layers).

The loop creates new hyperparameters, more specifically 3 cases: the first, when c==0, means "no LSTM layers"; the second has a single LSTM layer (c==1); and the last has 2 LSTM layers (c==2). Yet, after checking, it seems the mismatch described in this issue comes from this loop.
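A minimal sketch of the kind of conditional for-loop hypermodel being described, assuming a hypothetical input shape and hyperparameter names (`num_lstm_layers`, `lstm_units_{c}` are illustrative, not taken from the issue). The point is that each loop iteration registers its own per-layer hyperparameter, which is only active when the layer count reaches that depth:

```python
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    model = keras.Sequential()
    model.add(keras.Input(shape=(30, 8)))  # hypothetical (timesteps, features)

    # 0, 1, or 2 stacked LSTM layers -- the three cases described above
    num_lstm = hp.Int("num_lstm_layers", 0, 2)
    for c in range(num_lstm):
        # Each stacked layer gets its own units hyperparameter; it is
        # only *active* in trials where num_lstm_layers reaches depth c.
        model.add(layers.LSTM(
            units=hp.Int(f"lstm_units_{c}", 32, 128, step=32),
            return_sequences=(c < num_lstm - 1),
        ))
    if num_lstm == 0:
        model.add(layers.Flatten())  # the "no LSTM" branch still needs 2D output

    model.add(layers.Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model
```

Note that hyperparameters registered in one trial (e.g. `lstm_units_1` when c==2 was sampled) remain in the search space afterwards, even in trials where they are inactive, which can make the recorded hyperparameter values look inconsistent with the built architecture.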
I am tuning a hypermodel (HM) with the Keras Tuner Bayesian Optimization tuner. After searching the space, I realized that the 'tuner' object I used does not seem to return the same hypermodel when I call tuner.get_best_hyperparameters(1) compared with tuner.get_best_models(). Am I interpreting something wrong?

I would expect that, once the space has been searched with the Bayesian tuner, the variable containing the tuner would report the same hypermodel through both methods: 'get_best_hyperparameters' and 'get_best_models'.
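A hedged sketch of how the two methods can be compared side by side, assuming the `build_model` above and toy random data (shapes, trial counts, and epoch counts are all illustrative). One relevant distinction: `get_best_models` reloads the trained model from the best trial's checkpoint, whereas `get_best_hyperparameters` returns a `HyperParameters` object, so rebuilding a fresh model from it via `tuner.hypermodel.build` is the fairer architecture-to-architecture comparison:

```python
import numpy as np

# hypothetical toy data matching the (30, 8) input shape above
x_train = np.random.rand(64, 30, 8); y_train = np.random.rand(64, 1)
x_val = np.random.rand(16, 30, 8); y_val = np.random.rand(16, 1)

tuner = kt.BayesianOptimization(
    build_model,
    objective="val_loss",
    max_trials=5,
    overwrite=True,
)
tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=2)

best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
rebuilt = tuner.hypermodel.build(best_hp)            # fresh, untrained model from best HPs
best_model = tuner.get_best_models(num_models=1)[0]  # trained model from best checkpoint

print(best_hp.values)  # may include inactive params registered in other trials
rebuilt.summary()
best_model.summary()   # the two architectures should match if the HPs align
```

If the two summaries differ, comparing `best_hp.values` against the layers actually present in `best_model` can show whether inactive conditional hyperparameters (rather than a genuine tuner bug) explain the apparent mismatch.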