
Evaluation over training dataset (explanatory power) #341

Open
dinilu opened this issue Jul 8, 2023 · 0 comments
dinilu commented Jul 8, 2023

When fitting models, I generally like to compare predictive power (evaluation metrics over the testing dataset) against explanatory power (evaluation metrics over the training dataset); the gap between the two is also an indicator of overfitting. I don't see any function in the package that allows calculating explanatory power, just the predictive power stored in the model objects. Maybe I am missing something; if not, I would like to see this option in the package.
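For illustration, here is a minimal sketch of the comparison being requested, written in Python with scikit-learn as a stand-in (not this package's API, which per the issue currently lacks such a function): compute the same metric over both the training and testing splits and contrast the two.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical toy data standing in for a real training/testing split
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Explanatory power: the metric evaluated over the training dataset
auc_train = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])

# Predictive power: the same metric evaluated over the testing dataset
auc_test = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# A large gap (train >> test) is an indicator of overfitting
print(f"explanatory AUC = {auc_train:.3f}, predictive AUC = {auc_test:.3f}")
```

In the package, this could presumably be exposed as an option to evaluate a fitted model object against the data it was trained on, alongside the existing test-set evaluation.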
