FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Updated Oct 20, 2021 - Jupyter Notebook
Introduction to trusted AI. Learn to use fairness algorithms to reduce and mitigate bias in data and models with aif360, and explain models with aix360.
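To give a sense of the kind of group-fairness metric these aif360-based notebooks compute, here is a hand-rolled sketch of statistical parity difference (the same quantity AIF360's `BinaryLabelDatasetMetric.statistical_parity_difference` reports). The function name and the toy data below are illustrative assumptions, not taken from any of the listed repositories:

```python
# Statistical parity difference:
#   P(y_hat = favorable | unprivileged) - P(y_hat = favorable | privileged)
# A value of 0 means both groups receive the favorable outcome at the same rate;
# negative values mean the unprivileged group is favored less often.
# Hand-rolled sketch with made-up data, not the AIF360 implementation itself.

def statistical_parity_difference(predictions, groups, privileged=1):
    """Favorable-outcome rate gap between unprivileged and privileged groups."""
    priv = [p for p, g in zip(predictions, groups) if g == privileged]
    unpriv = [p for p, g in zip(predictions, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

# Toy example: group 1 is privileged, outcome 1 is favorable.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
grps  = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(preds, grps))  # 0.25 - 0.75 = -0.5
```

A pre-processing mitigation such as AIF360's `Reweighing` aims to push this gap toward zero by reweighting training examples before the model is fit.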
Trying out things on Kaggle's Titanic dataset
Responsible AI Masterclass (June 2024 Run)
Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems from the ground up, by design. Both technical and business AI stakeholders are in constant pursuit of fairness to ensure they meaningfully address problems like AI bias. While accuracy is one metric for evaluating a machine le…
Fairness Analysis in US Mortgage Lending with Machine Learning Algorithms
Visualising Accuracy vs. Fairness in ML models using AIF360 tools and dataset
Building Fair AI models tutorial at PyData Berlin / REVISION 2018
Gender classification model that uses a CNN to classify images of faces as male or female. The notebook includes code for data preprocessing, model architecture, training, and evaluation, which are then used for algorithmic bias detection.
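One common bias-detection step for a classifier like the one above is a per-subgroup accuracy audit: score the model separately on each demographic group and inspect the gap. The sketch below uses a hypothetical helper and made-up labels; it is not code from the repository itself:

```python
# Per-group accuracy audit: a large accuracy gap between subgroups is a
# common signal of algorithmic bias in a classifier.
# Illustrative sketch with toy data.

def group_accuracies(y_true, y_pred, groups):
    """Return {group: accuracy} computed over each subgroup."""
    accs = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        accs[g] = sum(t == p for t, p in pairs) / len(pairs)
    return accs

y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
grps   = ['m', 'm', 'm', 'm', 'f', 'f', 'f', 'f']
accs = group_accuracies(y_true, y_pred, grps)
print(accs['m'], accs['f'])                      # 1.0 for 'm', 0.25 for 'f'
print(max(accs.values()) - min(accs.values()))   # accuracy gap: 0.75
```

Tools such as SHAP (used in the liveProject below) can then help explain which input features drive the disparity.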
This notebook represents my personal code, notes, and reflections for the Manning liveProject titled "Mitigate Machine Learning Bias: Shap and AIF360" by Michael McKenna. Any citations or references to original course material retain the original author copyright and ownership. Personal code is licensed under the MIT License.
Evaluating Fairness in Machine Learning: Comparative Analysis of Fairlearn and AIF360