Releases: interpretml/interpret-community
release v0.23.0
- fix sphinx doc build failures in interpret-community
- sort imports using isort
- add more dependencies to docs build to fix warnings
- remove numba dependency pin
- add more flake8 extensions
- fix bug when calling save() and load() multiple times
- add flake8-breakpoint to prevent checking in code with active breakpoints
release v0.22.0
- version bump for shap 0.40.0 and interpret-core 0.2.7
- removed the pin on the lightgbm package after the build break caused by the new lightgbm 3.3.1 release, and fixed the serialization logic to handle the new lightgbm model version
- upgraded tensorflow and xgboost test dependencies
- added support for the Keras scikit-learn classifier and regressor in the model wrapper
- fixed the nightly build break caused by the new scikit-learn release, which breaks model function serialization
release v0.21.0
- add an explanation adapter to integrate our explanations with other frameworks
- update interpret-community to interpret-core 0.2.6
- change which explainer TabularExplainer runs based on the GPU flag
- fix model wrapper to handle a pytorch binary classification model that only outputs probabilities for the positive class
- add test for old explanation dashboard and interpret dashboard
- fix the nightly build break caused by the new scikit-learn release, which breaks model function serialization, by always serializing the model directly instead of the prediction function whenever a model was passed in
BREAKING CHANGE: the function wrap_model now returns only the wrapped model (see the sketch below)
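A minimal sketch of the new wrap_model contract under this breaking change; the import paths, the ModelTask argument, and the data are assumptions for illustration, not confirmed API details:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
# Assumed import locations; adjust to your installed interpret-community version.
from interpret_community.common.model_wrapper import wrap_model
from interpret_community.common.constants import ModelTask

X = np.random.rand(20, 4)
y = np.random.randint(0, 2, 20)
model = LogisticRegression().fit(X, y)

# Previously wrap_model returned more than the wrapped model; after this
# change it returns only the wrapped model.
wrapped_model = wrap_model(model, X, ModelTask.Classification)
print(wrapped_model.predict_proba(X[:2]))
```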
release v0.20.0
- Removed the old ExplanationDashboard. The old namespace still exists, but the widget no longer displays anything and now prints a warning. Please use the ExplanationDashboard from the raiwidgets package instead.
  Install raiwidgets from PyPI by running: pip install --upgrade raiwidgets
  The dashboard can be run with the same parameters in the new namespace: from raiwidgets import ExplanationDashboard
  For more information on the new widget, see: https://github.com/microsoft/responsible-ai-widgets
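A minimal end-to-end sketch of the migration; the model and data below are hypothetical placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from interpret_community import TabularExplainer
from raiwidgets import ExplanationDashboard

X = np.random.rand(50, 4)
y = np.random.randint(0, 2, 50)
model = LogisticRegression().fit(X, y)

# Compute a global explanation with interpret-community as before.
global_explanation = TabularExplainer(model, X).explain_global(X)

# Same parameters as the old widget, new namespace.
ExplanationDashboard(global_explanation, model, dataset=X, true_y=y)
```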
- Fixed raw aggregated explanation failing to compute when transformations were passed to the mimic explainer with include_local=False (see the sketch below)
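A minimal sketch of the scenario covered by that fix, on a hypothetical two-column dataset; the ColumnTransformer-based transformations and LGBMExplainableModel surrogate are illustrative choices:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from interpret_community.mimic.mimic_explainer import MimicExplainer
from interpret_community.mimic.models import LGBMExplainableModel

df = pd.DataFrame({'age': [23, 45, 31, 52, 38, 27],
                   'city': ['NY', 'SF', 'NY', 'LA', 'SF', 'LA'],
                   'label': [0, 1, 0, 1, 1, 0]})
X, y = df[['age', 'city']], df['label']

# Fit the model on engineered features; the explainer receives the raw
# data plus the fitted transformations.
preproc = ColumnTransformer([('num', StandardScaler(), ['age']),
                             ('cat', OneHotEncoder(), ['city'])]).fit(X)
model = LogisticRegression().fit(preproc.transform(X), y)

explainer = MimicExplainer(model, X, LGBMExplainableModel,
                           transformations=preproc,
                           features=['age', 'city'], classes=[0, 1])

# This combination (transformations + include_local=False) previously
# failed while aggregating the raw explanation; it now computes correctly.
global_explanation = explainer.explain_global(X, include_local=False)
print(global_explanation.global_importance_values)
```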
release v0.19.3
- emergency hotfix to pin numba to less than 0.54.0 to fix shap failures
- update the check for the correct RAPIDS version
- update GPU SHAP to use the kmeans sampling code from cuML
- fix explanation dashboard failing to run on dataset with boolean target labels
release v0.19.2
- fix aggregation logic for raw global importance values to be consistent with local importance values
release v0.19.1
- fix error when creating a raw explanation from an engineered explanation that does not have a DatasetsMixin (see the sketch below)
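A minimal sketch of the engineered-to-raw conversion involved here, with a hypothetical feature map tying two raw features to three engineered columns:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from interpret_community import TabularExplainer

X_eng = np.random.rand(20, 3)   # engineered feature matrix (hypothetical)
y = np.random.randint(0, 2, 20)
model = LogisticRegression().fit(X_eng, y)

engineered_explanation = TabularExplainer(model, X_eng).explain_global(X_eng)

# feature_map[i, j] = 1 when raw feature i produced engineered feature j
feature_map = np.array([[1, 1, 0],
                        [0, 0, 1]])
raw_explanation = engineered_explanation.get_raw_explanation([feature_map])
print(raw_explanation.global_importance_values)
```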
release v0.19.0
- Update interpret-core to 0.2.5
- Add predicted values and predicted probabilities to raw explanations, so that the model performance tab and other tabs can show information related to y_pred/y_pred_proba for raw explanations
- Update cuML version for GPUKernelExplainer
- Remove shap DenseData from supported input data
- Use the scipy logit() function in the mimic explainer to fix a divide-by-zero error (see the sketch below)
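A minimal sketch of why scipy's logit helps here: unlike a hand-rolled log(p / (1 - p)), it handles the p = 0 and p = 1 endpoints without emitting divide-by-zero warnings (illustrative values only):

```python
import numpy as np
from scipy.special import logit

p = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(logit(p))   # [-inf -1.0986  0.  1.0986  inf]

# The naive formula triggers divide-by-zero warnings at the endpoints:
# np.log(p / (1 - p))
```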
release v0.18.1
- fix to implement the sparse case for the following methods (see the sketch at the end of this release's notes):
  - get_ranked_local_values
  - get_ranked_local_names
  - get_local_importance_rank
- compress local importance values to dense format when that is the more efficient storage option while converting an engineered explanation to a raw explanation
- remove another spurious cuML warning message on library import
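A minimal sketch of the ranked-local accessors mentioned above, run on hypothetical sparse input:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression
from interpret_community import TabularExplainer

X_sparse = csr_matrix(np.random.rand(20, 4))
y = np.random.randint(0, 2, 20)
model = LogisticRegression().fit(X_sparse, y)

explainer = TabularExplainer(model, X_sparse,
                             features=['f0', 'f1', 'f2', 'f3'])
local_explanation = explainer.explain_local(X_sparse[:5])

# Per-row feature importances, sorted from most to least important.
print(local_explanation.get_ranked_local_values())
print(local_explanation.get_ranked_local_names())
print(local_explanation.get_local_importance_rank())
```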
release v0.18.0
- upgrade shap to 0.39.0, which dropped python 3.5 support and added python 3.8 support
- Add cuML GPU SHAP KernelExplainer (see the sketch at the end of this release's notes)
- update abstract classes to use ABC instead of ABCMeta
- remove warning on mimic explainer relating to categorical features
- Add test case for serializing a pandas Timestamp
- add sparse feature importance support for lightgbm surrogate model
- add support for the drop parameter in scikit-learn's OneHotEncoder
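A minimal sketch of opting into the GPU KernelExplainer; this assumes a RAPIDS/cuML installation and that TabularExplainer exposes the GPU path through a use_gpu flag (an assumption based on the GPU-flag note in v0.21.0):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from interpret_community import TabularExplainer

X = np.random.rand(100, 5).astype(np.float32)
y = np.random.randint(0, 2, 100)
model = LogisticRegression().fit(X, y)

# With use_gpu=True the SHAP kernel evaluation runs through cuML on the GPU.
explainer = TabularExplainer(model, X, use_gpu=True)
global_explanation = explainer.explain_global(X[:10])
print(global_explanation.global_importance_values)
```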