Project work for the MAI4CAREU organization, done in collaboration with University of Bologna and University of Cyprus.
Understanding the behaviour of a machine learning model is extremely useful for several reasons:
- it enhances users' trust and confidence in the model
- it allows us to identify issues in the model
- it helps us identify biases in the dataset
In this project, we focus on the first point, since user trust is essential for applying deep learning in the medical sector.
- Daniele Marini
- Luca Reggiani
- Asfa Jamil
The main purpose of the project is to explore different techniques for explaining black-box models used for medical diagnosis. We explored the following techniques:
- Model-agnostic techniques
  - LIME
  - SHAP
- Case-based reasoning techniques
  - Case-based Ensemble Learning System
  - Visual Case-Based Reasoning Approach
- Backpropagation-based techniques
  - Grad-CAM
  - CAM
  - Score-CAM
  - DeepLIFT
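To illustrate the model-agnostic idea behind LIME, the sketch below perturbs an input, queries a black-box model, and fits a locally weighted linear surrogate whose coefficients act as feature importances. This is a minimal illustration of the principle, not the `lime` library's implementation; the Gaussian perturbation scale and kernel width are assumptions chosen for demonstration.

```python
import numpy as np

def lime_explain(black_box, x, num_samples=500, kernel_width=0.75, seed=0):
    """Minimal LIME-style local surrogate for a tabular black-box model.

    Perturbs x with Gaussian noise, weights the samples by proximity
    to x, and fits a weighted linear model whose coefficients serve
    as per-feature importance scores.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # 1. Sample perturbations around the instance to explain.
    Z = x + rng.normal(scale=0.5, size=(num_samples, d))
    y = np.array([black_box(z) for z in Z])
    # 2. Proximity weights: closer perturbations matter more.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-dist**2 / kernel_width**2)
    # 3. Weighted least squares (scale rows by sqrt of the weights).
    Zb = np.hstack([Z, np.ones((num_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Zb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # drop the intercept
```

On a model that is already linear, the surrogate recovers the true coefficients exactly; on a deep model it only approximates the behaviour in the neighbourhood of `x`.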
In addition, we tested the localization ability of Grad-CAM on a model fine-tuned by us.
We conducted an experiment to assess the impact of using Grad-CAM in the context of our study. To this end, we applied Grad-CAM to ResNet50, a widely used convolutional neural network architecture, which we fine-tuned on the "Brain MRI images for brain tumor detection" dataset.
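The core of Grad-CAM is a small computation: the gradients of the target class score with respect to a convolutional layer's feature maps are global-average-pooled into per-channel weights, the feature maps are combined with those weights, and a ReLU keeps only positive evidence. A framework-agnostic numpy sketch of that step (the activation and gradient arrays are stand-ins for what a hook on ResNet50's last convolutional layer would capture):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from captured tensors.

    activations: (C, H, W) feature maps from the chosen conv layer.
    gradients:   (C, H, W) gradients of the target class score with
                 respect to those feature maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights alpha_k: global average pooling of the gradients.
    weights = gradients.mean(axis=(1, 2))                            # (C,)
    # Weighted combination of the feature maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize for visualization; guard against an all-zero map.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the MRI slice as a heatmap.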
The results of the experiment were highly satisfactory: the method accurately localized the tumor region, as shown in the figure below.