New interpretable algorithm #398

Open
mathias-von-ottenbreit opened this issue Jun 13, 2024 · 0 comments

@mathias-von-ottenbreit

Hi.

I would like to contribute a section in Chapter 5 about Automatic Piecewise Linear Regression (APLR). It can be used for regression or classification. APLR is often able to compete with tree-based methods on predictive performance, and it is inherently interpretable. The algorithm automatically handles interactions and performs variable selection.

Below are some of APLR's interpretability capabilities (a rough usage sketch follows the list):

  • Get the shapes of main effects and interactions (for each effect, the relevant predictor values and their contributions to the linear predictor). Main effects and two-way interactions are easy to plot in charts.
  • See the regression coefficients and the estimated importance of each term in the model.
  • Compute local feature contributions to the linear predictor on arbitrary data (training data, test data, new data, etc.).
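
For illustration, here is a minimal Python sketch of how such an interface could be used. The package name `aplr`, the `APLRRegressor` class, and the inspection methods shown are assumptions made for this example, not a confirmed API; the real interface may differ.

```python
# Minimal usage sketch. The package name, class name, and the inspection
# methods below are assumptions for illustration; the real APLR API may differ.
import numpy as np
from aplr import APLRRegressor  # assumed package and class name

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic target with a linear and a piecewise-linear component.
y = 2.0 * X[:, 0] + np.maximum(X[:, 1], 0.0) + rng.normal(scale=0.1, size=500)

model = APLRRegressor()   # assumed scikit-learn-style constructor
model.fit(X, y)           # assumed fit(X, y) signature
preds = model.predict(X)  # assumed predict(X) signature

# Inspect the fitted model (accessor names are assumptions):
importance = model.get_feature_importance()          # estimated term/feature importance
local = model.calculate_local_feature_importance(X)  # local contributions on arbitrary data

print(importance)
print(local[:5])
```

The same pattern would apply to classification through an analogous classifier class.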

Is that ok?

Best regards,
Mathias
