Introduction to trusted AI. Learn to use fairness algorithms to mitigate bias in data and models with aif360 and to explain models with aix360; a minimal usage sketch follows below.
Updated Dec 14, 2020 - Jupyter Notebook
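The following is a minimal sketch of the aif360 workflow the entry above describes, not code from that repository: measure a group fairness metric on a dataset, then apply a pre-processing mitigation. The toy DataFrame and the choice of `sex` as the protected attribute are illustrative assumptions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' the favorable/unfavorable outcome. Purely illustrative.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.7, 0.8, 0.5, 0.4, 0.6, 0.3, 0.2],
    "label": [1, 1, 1, 1, 0, 1, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"]
)
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Statistical parity difference:
# P(label=1 | unprivileged) - P(label=1 | privileged); 0 is ideal.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unpriv, privileged_groups=priv
)
print("mean difference before:", metric.mean_difference())

# Reweighing assigns instance weights that balance outcome rates
# across groups before any model is trained.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
transformed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unpriv, privileged_groups=priv
)
print("mean difference after:", metric_after.mean_difference())
```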
GBDF: Gender Balanced DeepFake Dataset
Sample project using IBM's AI Fairness 360, an open-source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.
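Beyond dataset-level metrics, AI Fairness 360 can also audit a trained model's predictions. Here is a minimal, self-contained sketch (not the sample project's code) using `ClassificationMetric`; the toy labels and "predictions" are fabricated for illustration.

```python
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import ClassificationMetric

# Ground-truth dataset with 'sex' as the protected attribute (toy data).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 0, 1, 1, 0, 0, 0],
})
truth = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"]
)

# Stand-in for a model's output: copy the dataset, overwrite its labels
# with hypothetical predictions.
pred = truth.copy()
pred.labels = np.array([[1], [1], [1], [1], [0], [0], [1], [0]])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
m = ClassificationMetric(
    truth, pred, unprivileged_groups=unpriv, privileged_groups=priv
)

# Equal opportunity difference: TPR(unprivileged) - TPR(privileged); 0 is ideal.
print("equal opportunity difference:", m.equal_opportunity_difference())
# Average odds difference: mean of the TPR and FPR gaps between groups.
print("average odds difference:", m.average_odds_difference())
```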
Code and data accompanying the paper: "Model-Agnostic Bias Measurement in Link Prediction" published in the EACL Findings 2023