
Confusing example in documentation #19939

Closed
rishijain07 opened this issue Jun 30, 2024 · 2 comments
@rishijain07 commented Jun 30, 2024

[image: screenshot of the documentation example]
This example is quite confusing: it is more complex than traditional logistic regression. In a true logistic regression there would typically be a single output neuron with a sigmoid activation for binary classification, or multiple output neurons (and no hidden layer) for multi-class problems. It might be better to replace this example with something else, or to clarify the documentation by adding "input layer with 32 neurons connected with 16 fully connected neurons (dense layer)".
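For reference, logistic regression stripped of any framework is just one linear map followed by a sigmoid, with no hidden layer at all. A minimal sketch in plain NumPy (the weights and feature count below are made up purely for illustration):

```python
import numpy as np

def sigmoid(z):
    # logistic function: maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression_predict(x, w, b):
    # binary logistic regression: a single linear unit + sigmoid
    return sigmoid(np.dot(x, w) + b)

# toy example with 3 input features (hypothetical values)
w = np.array([0.5, -0.25, 0.1])
b = 0.0
x = np.zeros(3)

p = logistic_regression_predict(x, w, b)
print(p)  # sigmoid(0) == 0.5
```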

@heydaari commented Jul 1, 2024

Hello there,
Keras Input layers don't have any learnable parameters, so you cannot call them a "hidden layer". An Input layer is just the model's gate for receiving data: it declares the shape of whatever you feed the model, and it is also useful when a functional-API model has multiple inputs (see the documentation).

However, in this example you can simply use the Keras Sequential API with a single Dense layer, as you said, for logistic regression; you just have to specify the input shape with the input_shape parameter on that Dense layer.
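A minimal sketch of that suggestion, assuming TensorFlow's bundled Keras and an arbitrary feature count of 4 (the 32/16 shapes from the screenshot are not reproduced here, only the single-Dense-layer structure):

```python
import tensorflow as tf

# logistic regression as one Dense unit with a sigmoid activation;
# input_shape declares the input directly on the layer, so there is
# no separate Input layer and no hidden layer
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,)),
])

model.compile(optimizer="sgd", loss="binary_crossentropy")

# one weight per feature plus one bias: 4 * 1 + 1 = 5 parameters
print(model.count_params())  # 5
```

The parameter count makes the point concrete: a true logistic regression over 4 features has exactly 5 learnable parameters, all in the single output unit.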

@rishijain07 (Author)

Ohh got it thanks :)
