This is the official implementation for Guided AbsoluteGrad. For more details, please refer to the paper Guided AbsoluteGrad: Magnitude of Gradients Matters to Explanation's Localization and Saliency.
- `cv_exp/gradient/_guided_absolute_grad.py`: implementation of Guided AbsoluteGrad;
- `cv_exp/eval/rcap_eval.py`: implementation of the RCAP evaluation for saliency maps.
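As the paper title suggests, the key idea is that the magnitude of the gradients carries the localization signal. The snippet below is only a minimal, hypothetical sketch of SmoothGrad-style aggregation of absolute gradients to convey that idea; it is not the repository's Guided AbsoluteGrad implementation (see `cv_exp/gradient/_guided_absolute_grad.py` for the actual method), and the function name, noise level, and sample count are illustrative assumptions.

```python
import torch


def absolute_grad_sketch(model, x, target_class, n_samples=16, sigma=0.15):
    """Hypothetical sketch: average the *absolute* input gradients over noisy
    copies of the input. NOT the repository's Guided AbsoluteGrad code."""
    model.eval()
    grads = []
    for _ in range(n_samples):
        # Perturb the input and track gradients w.r.t. the noisy copy.
        noisy = (x.detach() + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[:, target_class].sum()
        grads.append(torch.autograd.grad(score, noisy)[0].abs())  # magnitude only
    saliency = torch.stack(grads).mean(dim=0)  # aggregate over noisy samples
    return saliency.sum(dim=1)                 # collapse the channel dimension
```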
Please run `data_preparation.ipynb`.
If everything is correct, you should have the following structure in the data directory:
The `isic_exp/weights` folder should contain:

The `places365_exp/pretrained` folder should contain:
Please run `imagenet_demo.ipynb`, `isic_demo.ipynb`, and `places365_demo.ipynb`.
Figures in the paper can be partially reproduced from these demos.
To reproduce the experiment, run the command:
python cv_exp/exp.py -t sa -mk [ModelKey] -dk [DatasetKey] -m [Mode] -c [CaseName] [SettingKeys]
- `ModelKey`: the key of the model being explained. It must be a key defined in the `settings.py` of each case. For instance, `resnet50` is defined in the `get_model()` function of `cases/imagenet_exp/settings.py` (see the hypothetical sketch after this list).
- `DatasetKey`: similar to `ModelKey`; it must be a key defined in the `get_dataset()` function.
- `Mode`: one of:
  - `x`: run the XAI explanation to produce all saliency-map results;
  - `e`: run the RCAP evaluation;
  - `a`: run the AUC evaluation;
  - `s`: run `x` and `e` together;
  - `sn`: run the sanity-check evaluation.
- `CaseName`: the name of the case. It is only used to name the experiment result directory and can be anything.
- `SettingKeys`: experiment keys defined in each case's `settings.py` file. If omitted, all experiments are run.
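For orientation, here is a minimal, hypothetical sketch of how such a `settings.py` could map keys to objects. The dictionary layout and the `weights` argument are assumptions for illustration; consult the actual `cases/*/settings.py` files for the real keys and signatures.

```python
# Hypothetical sketch of a case's settings.py; the real
# cases/imagenet_exp/settings.py may be organized differently.
import torchvision.models as models


def get_model(model_key: str):
    # The -mk argument (ModelKey) is resolved here, e.g. "resnet50".
    model_zoo = {
        "resnet50": lambda: models.resnet50(weights="IMAGENET1K_V1"),
    }
    return model_zoo[model_key]()


def get_dataset(dataset_key: str):
    # The -dk argument (DatasetKey) is resolved the same way,
    # e.g. "imagenet_seg"; dataset construction is omitted here.
    ...
```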
Examples:
Run all experiments of the ImageNet case for XAI and the RCAP evaluation.
python main.py -mk resnet50 -dk imagenet_seg -t sa -m s -c imagenet
Run only the `2.a_places365` experiments of the Places365 case.
python main.py -mk densenet161 -dk places365 -t sa -m x -c places365 2.a_places365
The final RCAP score is computed by the `batch_rcap(...)` function in `cv_exp/eval/rcap_eval.py`. The `evaluation_result.ipynb` notebook runs this function and plots the results for all cases.
Once all experiments are reproduced, the results can be verified against `evaluation_result_author_side.ipynb`.
@article{Huang2024Guided,
author = {Huang, Jun and Liu, Yan},
journal = {Proceedings of the Canadian Conference on Artificial Intelligence},
year = {2024},
month = {may 27},
note = {https://caiac.pubpub.org/pub/8rzaolsn},
publisher = {Canadian Artificial Intelligence Association (CAIAC)},
title = {Guided {AbsoluteGrad}: Magnitude of {Gradients} {Matters} to {Explanation}\textquoteright{}s {Localization} and {Saliency}},
}