We’re excited to announce the release of NannyML 0.8.4! This latest version comes packed with new features and many bug fixes!
Installing / upgrading
You can get this latest version by using pip:
pip install -U nannyml
Or conda:
conda install -c conda-forge nannyml
What’s new?
Custom thresholds
One of the exciting new features is custom thresholds. When the metric calculated by NannyML exceeds the upper threshold or drops below the lower threshold, NannyML will mark that value as an alert.
We’ve introduced two ways of defining thresholds:
Simply setting a constant value as your lower or upper threshold
Creating a confidence band using the standard deviation of the metric values for your reference data set. Here we calculate the lower and upper threshold values by subtracting or adding a multiple of the standard deviation to the mean of the reference metric values.
A quick code snippet to illustrate:
import nannyml as nml

# create some custom constant thresholds
constant_threshold = nml.thresholds.ConstantThreshold(lower=None, upper=0.93)

# override the default threshold for f1-score
estimator = nml.CBPE(
    y_pred_proba='y_pred_proba',
    y_pred='y_pred',
    y_true='repaid',
    timestamp_column_name='timestamp',
    metrics=['f1'],
    chunk_size=5000,
    problem_type='classification_binary',
    thresholds={
        'f1': constant_threshold
    }
)
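The second option, the standard-deviation band, boils down to a simple formula. The helper below is not NannyML code, just a plain-Python sketch of what the post describes: the lower and upper thresholds are the mean of the reference metric values minus or plus a multiple of their standard deviation (NannyML's exact standard-deviation convention may differ in detail).

```python
from statistics import mean, stdev

def std_dev_band(reference_values, multiplier=3):
    # Sketch of the standard-deviation threshold described above:
    # lower/upper = mean -/+ multiplier * std of the reference metric values.
    m = mean(reference_values)
    s = stdev(reference_values)
    return m - multiplier * s, m + multiplier * s

# reference F1 scores per chunk (made-up numbers)
lower, upper = std_dev_band([0.91, 0.92, 0.93, 0.92, 0.91])
# any later F1 value outside (lower, upper) would be flagged as an alert
```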
For a deeper dive into this feature, check our Custom Threshold Tutorial and “How it Works” page!

Confusion matrix estimation

Our Confusion Based Performance Estimator (CBPE) supports many metrics, all of them based on the confusion matrix. Until now, however, it was not possible to get the estimated confusion matrix itself, for example to calculate a custom metric. We’re happy to announce that you can now retrieve the estimated confusion matrix from the CBPE estimator for binary classification use cases.
import nannyml as nml

# adding 'confusion_matrix' as a metric to estimate.
# normalize by the total number of observations for each predicted class
estimator = nml.CBPE(
    y_pred_proba='y_pred_proba',
    y_pred='y_pred',
    y_true='repaid',
    timestamp_column_name='timestamp',
    metrics=['roc_auc', 'confusion_matrix'],
    chunk_size=5000,
    problem_type='classification_binary',
    normalize_confusion_matrix='pred',
)
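To make the normalize_confusion_matrix='pred' option concrete: each cell is divided by the total number of observations in its predicted class, so every column of the matrix sums to one. The helper below is only an illustration of that normalization, not NannyML’s implementation:

```python
def normalize_by_pred(cm):
    # cm[i][j] = count of observations with true class i predicted as class j.
    # Dividing each cell by its column total normalizes per predicted class.
    col_totals = [sum(row[j] for row in cm) for j in range(len(cm[0]))]
    return [
        [cell / col_totals[j] if col_totals[j] else 0.0
         for j, cell in enumerate(row)]
        for row in cm
    ]

# made-up binary confusion matrix counts: [[TN, FP], [FN, TP]]
normalized = normalize_by_pred([[90, 10], [5, 95]])
```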
If you want to learn more about CBPE or just try it out, check our Estimating Binary Classification Performance Tutorial or the “How it Works” page.

Comparison plots

We’ve introduced an easy way to compare NannyML results visually using comparison plots. Just take two calculation results and smash them together!
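In pseudocode, that comparison looks roughly like the following. The compare() call is our reading of the new API introduced in this release; treat the exact names as an assumption rather than gospel:

```
# assumes two fitted results, e.g. estimated vs. realized performance
comparison = estimated_results.compare(realized_results)
comparison.plot().show()
```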
What’s changed?

We have a ton of fixes in this release, thanks to our very own Nansters and multiple community members. Our heartfelt thanks to everybody chipping in!
You can browse through the lot in our changelog.
Some notable other changes:
We’ve improved the efficiency of our univariate drift detection methods. They no longer retain a copy of the reference data by default.
We’ve refreshed our included datasets and replaced the “work from home” set with the “car loan” dataset.
We improved the rendering of multi-index dataframes in the docs for better legibility!
What’s next?
It has been a while since our last release, but we promise the next ones are already right around the corner. We’ve been working on some “data quality” related features and some new metrics as well.
We hope you’ll enjoy the shiny new features we’ve introduced in NannyML 0.8.4. We welcome any feedback, good or bad! Reach out in our community Slack, log a bug or a feature request in our repository, or just leave us a star for good karma!