This repository is from the 2024 course "Deep Neural Network Analysis" at the University of Osnabrück, held by Lukas Niehaus. The topic of this repo is restricting discrimination and enhancing fairness in ML models. We present two post-processing methods, equalized odds and calibrated equalized odds, via their implementations in the AIF360 toolkit.
How the two methods work in detail is explained in the presentation PDF from the course.
- Clone the repository:
  git clone https://github.com/HenningSte/fairness_equal_odds.git
- Navigate to the project directory:
  cd fairness_equal_odds
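- Optionally, create and activate a virtual environment first (the dataset path example further below assumes one named .venv; the commands here are for Windows, adjust for your OS):
  python -m venv .venv
  .venv\Scripts\activate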
- Install the dependencies (we used Python 3.11.9; to be safe, use a version between 3.8 and 3.11):
  pip install -r requirements.txt
- Download the model checkpoints folder from github.com/lucasld/neural_network_analysis.
- Put the folder in the same directory as this repository:
  /parent_folder
      /neural_network_analysis
      /fairness_equal_odds
- AIF360 doesn't ship with the raw dataset files needed for its dataset-loading methods. As described in the notebook (or here), you have to download the files yourself and place them in the corresponding folder of your AIF360 installation after setting up your Python environment. Using, for example, a virtual environment and the German credit dataset, the file structure should look like this:
  \fairness_equal_odds\.venv\Lib\site-packages\aif360\data\raw\german\german.data
  The files can be found here:
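If you are unsure where that folder is located in your environment, a short Python snippet along these lines (our suggestion, not part of the notebooks) prints the expected location:

```python
import os

import aif360

# The raw dataset files belong under aif360/data/raw/<dataset> inside the
# installed package; print the expected folder for the German credit data.
print(os.path.join(os.path.dirname(aif360.__file__), "data", "raw", "german"))
```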
We have implemented both equalized odds and calibrated equalized odds in two notebooks, for different classification models. In the AIF360 demo notebook we present both methods' performance using a simple regression model trained on datasets well suited to fairness analysis, such as the German credit score data or the COMPAS recidivism risk score dataset.
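To give an idea of the workflow, here is a minimal sketch of equalized-odds post-processing on the German credit data with AIF360 (the protected attribute, classifier, and split below are illustrative assumptions, not necessarily the exact setup of the notebook):

```python
from sklearn.linear_model import LogisticRegression

from aif360.datasets import GermanDataset
from aif360.algorithms.postprocessing import EqOddsPostprocessing

# Load the German credit data with 'age' as the protected attribute
# (age >= 25 treated as the privileged group, as in the AIF360 docs).
dataset = GermanDataset(protected_attribute_names=['age'],
                        privileged_classes=[lambda x: x >= 25],
                        features_to_drop=['personal_status', 'sex'])
train, test = dataset.split([0.7], shuffle=True)
privileged, unprivileged = [{'age': 1}], [{'age': 0}]

# Train an ordinary classifier and copy its predictions into an AIF360 dataset.
clf = LogisticRegression(max_iter=1000).fit(train.features, train.labels.ravel())
test_pred = test.copy(deepcopy=True)
test_pred.labels = clf.predict(test.features).reshape(-1, 1)

# Fit the post-processor on true vs. predicted labels, then adjust predictions.
# CalibratedEqOddsPostprocessing from the same module is used analogously,
# but operates on prediction scores rather than hard labels.
eop = EqOddsPostprocessing(unprivileged_groups=unprivileged,
                           privileged_groups=privileged, seed=42)
eop.fit(test, test_pred)
test_fair = eop.predict(test_pred)
```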
In the second notebook we present the same methods for the wine quality dataset and model from this course, found in the neural_network_analysis repository linked below. Here, applying fairness-enhancement methods and comparing the outcomes makes less sense (even though both methods still perform rather well); instead, the notebook serves as a demonstration of how to integrate your own data and models with AIF360, as sketched below.
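The key step when bringing your own model and data is wrapping them in an AIF360 dataset object. A minimal sketch with made-up columns (the actual wine-quality preprocessing lives in the notebook):

```python
import pandas as pd

from aif360.datasets import BinaryLabelDataset

# Toy stand-in for your own data: numeric features, a binarized label, and a
# binary protected attribute (the column names here are purely illustrative).
df = pd.DataFrame({
    'alcohol':   [9.4, 10.2, 11.1, 9.8],
    'sulphates': [0.56, 0.62, 0.48, 0.51],
    'color':     [0, 1, 0, 1],   # protected attribute, must be binary
    'quality':   [0, 1, 1, 0],   # binary label, e.g. quality >= 6
})

dataset = BinaryLabelDataset(df=df,
                             label_names=['quality'],
                             protected_attribute_names=['color'],
                             favorable_label=1,
                             unfavorable_label=0)
# From here on, the post-processing workflow is the same as in the sketch above.
```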
Additionally, we provide a number of helper functions and handler classes for both methods in the utils.py, eop.py, and ceop.py files. These might be useful should you want to apply either method to your own use case.
Henning Stegemann henstegemann@uni-osnabrueck.de
Imogen Hüsing ihuesing@uni-osnabrueck.de
https://github.com/Trusted-AI/AIF360
https://github.com/lucasld/neural_network_analysis/tree/main/
https://github.com/madammann/DNNA-Blackbox-Interpretability---LIME/