Code for the course project of CMU 16-824: Visual Learning and Recognition. In this project, we study the relationship between pruning and explainability: we validate whether the explanations generated from a network pruned with the Lottery Ticket Hypothesis (LTH) remain consistent. Specifically, we prune a neural network using LTH. Next, we generate and compare local and global explanations using Grad-CAM and concept activations, respectively. An overview of our method is as follows:
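The LTH pruning loop at the heart of the method can be sketched as follows. This is a minimal NumPy sketch of iterative magnitude pruning with rewinding to the initial weights; the `train_fn` here is a toy stand-in, not the project's actual training code:

```python
import numpy as np

def magnitude_prune(weights, mask, prune_frac):
    """Zero out the smallest-magnitude weights that are still unmasked."""
    surviving = np.abs(weights[mask == 1])
    k = int(prune_frac * surviving.size)
    if k == 0:
        return mask
    threshold = np.sort(surviving)[k - 1]
    new_mask = mask.copy()
    new_mask[np.abs(weights) <= threshold] = 0
    return new_mask

def lottery_ticket(init_weights, train_fn, rounds=3, prune_frac=0.2):
    """Iterative magnitude pruning: train, prune, rewind, repeat."""
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        weights = train_fn(init_weights * mask) * mask   # train the sparse network
        mask = magnitude_prune(weights, mask, prune_frac)
        # rewind: the next round restarts from init_weights with the new mask
    return mask

# Toy usage: "training" just scales the weights, so pruning ranks by |w0|.
rng = np.random.default_rng(0)
w0 = rng.normal(size=100)
mask = lottery_ticket(w0, train_fn=lambda w: w * 2.0, rounds=3, prune_frac=0.2)
print(mask.mean())  # fraction of weights surviving after 3 rounds of 20% pruning
```

The surviving `mask` defines the "winning ticket": a sparse subnetwork that is retrained from the original initialization.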
Follow the link.
Create the environment from the `environment.yml` file:

```shell
conda env create -f environment.yml
```
- Download the CUB-200 dataset from the link
- Preprocess the noisy concepts in the dataset using the following commands:

```shell
cd scripts_data
python download_cub.py
```
- Download and create the MNIST Even/Odd dataset using the following commands:

```shell
cd scripts_data
python download_mnist_E_O.py
```
The train-test-val splits of all the datasets are given in the corresponding json files in the `scripts_data` directory.
- For CUB-200, check the `config/BB_cub.yml` file.
- For MNIST Even/Odd, check the `config/BB_mnist.yml` file.
- Before starting the training process, edit the `data_root`, `json_root`, and `logs` parameters in the config file `config/BB_cub.yaml` to set the paths of the images, the json files of the train-test-val splits, and the output directory, respectively.
- Optionally, refer to the `./iPython/Cub-Dataset-understanding.ipynb` notebook to understand the CUB-200 dataset.
- Preprocess the noisy concepts as described earlier.
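For example, the relevant entries in `config/BB_cub.yaml` might look like the fragment below. The field names come from the text above; the paths are placeholders for your own setup:

```yaml
data_root: /path/to/CUB_200_2011/images   # root directory of the dataset images
json_root: /path/to/scripts_data          # json files of the train-test-val splits
logs: /path/to/output                     # where checkpoints and results are saved
```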
- Follow the steps below for the CUB-200 dataset:

```shell
python main_lth_pruning.py --config "config/BB_cub.yaml"           # train and prune with LTH
python main_lth_test.py --config "config/BB_cub.yaml"              # evaluate the pruned models
python main_lth_save_activations.py --config "config/BB_cub.yaml"  # save layer activations
python main_lth_generate_cavs.py --config "config/BB_cub.yaml"     # generate concept activation vectors
```
Edit the `labels_for_tcav` and `concepts_for_tcav` parameters in the file `config/BB_cub.yaml` to select the class label and concept label to generate the TCAV score for:

```shell
python main_lth_tcav.py --config "config/BB_cub.yaml"
```
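Conceptually, a CAV is the normal of a linear boundary separating concept activations from random activations, and the TCAV score is the fraction of class examples whose directional derivative along the CAV is positive. A minimal NumPy sketch, with synthetic activations standing in for the network's layer outputs:

```python
import numpy as np

def fit_cav(concept_acts, random_acts, epochs=200, lr=0.1):
    """Fit a linear classifier (logistic regression via gradient descent);
    the unit weight vector pointing toward the concept class is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w / np.linalg.norm(w)

def tcav_score(class_gradients, cav):
    """Fraction of examples whose directional derivative along the CAV > 0."""
    return float((class_gradients @ cav > 0).mean())

rng = np.random.default_rng(0)
concept = rng.normal(loc=1.0, size=(50, 8))   # activations for concept images
random_ = rng.normal(loc=0.0, size=(50, 8))   # activations for random images
cav = fit_cav(concept, random_)
grads = rng.normal(loc=0.5, size=(200, 8))    # d(class logit)/d(activation)
print(tcav_score(grads, cav))                 # a score in [0, 1]
```

In the real pipeline, the activations and gradients come from the saved layer outputs of the (pruned) network rather than random draws.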
Edit the `labels_for_tcav` parameter in the file `config/BB_cub.yaml` to select the class label for which to generate the Grad-CAM saliency maps. By default, we generate the saliency map for the 2nd image of the desired class in the test set.

```shell
python main_heatmap_save.py --config "config/BB_cub.yaml"
```
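The Grad-CAM computation itself reduces to a gradient-weighted sum of feature maps. A minimal NumPy sketch; the feature maps and gradients here are synthetic stand-ins for a real backbone's last convolutional layer:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (C, H, W) arrays from the target conv layer.
    Returns an (H, W) saliency map normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))              # global-average-pool the gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
feats = rng.random((64, 7, 7))       # stand-in activations of the last conv layer
grads = rng.normal(size=(64, 7, 7))  # stand-in d(class score)/d(activation)
heatmap = grad_cam(feats, grads)
print(heatmap.shape)  # (7, 7)
```

The resulting low-resolution map is upsampled to the input image size and overlaid as a heatmap.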
Use the `./iPython/Analysis-CUB_Test-GradCAM.ipynb` notebook to analyze the generated Grad-CAM saliency maps.
All the bash scripts for the steps above are included in the `./bash_script` directory.
Get the pretrained models from the link. It contains all the checkpoints of the pre-trained pruned models for the CUB-200 dataset. Download them and set the path up to `results` in the `logs` parameter of the `./config/BB_cub.yaml` file.
Contact: shawn24@bu.edu (preferred), shg121@pitt.edu, beingshantanu2406@gmail.com (personal), shantan2@andrew.cmu.edu
Licensed under the MIT License
Copyright (c) Shantanu, 2022