
Hybrid CNN-LSTM Model Training with SHAP Integration

Overview

This repository contains Python code that demonstrates the training of a hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model using PyTorch. The model is designed for sequence classification tasks and integrates SHAP (SHapley Additive exPlanations) for explainable AI (XAI).

To incorporate XAI into the training process, the code uses SHAP to interpret the model's predictions: SHAP values quantify the contribution of each feature to the model's output.

Key Features

  • Model Architecture:
    • The HybridCNNLSTM class combines CNN and LSTM layers for feature extraction and sequential learning (see the first sketch after this list).
    • The CNN layers apply 1D convolutions and max-pooling, followed by LSTM layers for temporal processing.
    • Final classification is performed by a fully connected layer.
  • Training Process:
    • Data Loading: Uses DataLoader to serve the training and testing datasets from CSV files (train_set.csv and test_set.csv).
    • Model Training (see the training-loop sketch after this list):
      • CrossEntropyLoss and the SGD optimizer are used for training.
      • The training loop iterates over epochs, computing the loss and updating model parameters.
      • Accuracy metrics are calculated for both the training and testing phases.
  • SHAP Integration:
    • SHAP's DeepExplainer is used to interpret model predictions.
    • SHAP values are computed during training, and feature contributions are visualized with shap.summary_plot.
  • Visualization:
    • Loss and accuracy metrics are plotted over epochs to monitor model performance.
    • Plots are saved as EPS files (train_accuracy_loss_hybrid_plot.eps).
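
The exact layer shapes live in the repository code; the following is a minimal sketch of what the HybridCNNLSTM class described above might look like, with placeholder channel counts, hidden size, and class count:

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """1D-CNN feature extractor followed by an LSTM and a linear classifier."""

    def __init__(self, input_channels=1, cnn_channels=16, lstm_hidden=32, num_classes=2):
        super().__init__()
        # 1D convolution + max-pooling extract local patterns from the sequence
        self.conv = nn.Conv1d(input_channels, cnn_channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(kernel_size=2)
        # The LSTM models temporal dependencies across the pooled feature maps
        self.lstm = nn.LSTM(cnn_channels, lstm_hidden, batch_first=True)
        # A fully connected layer maps the final hidden state to class logits
        self.fc = nn.Linear(lstm_hidden, num_classes)

    def forward(self, x):
        # x: (batch, channels, sequence_length)
        x = self.pool(torch.relu(self.conv(x)))
        # The LSTM expects (batch, seq, features), so swap the channel and time axes
        x = x.permute(0, 2, 1)
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])
```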
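
And a sketch of the data loading and training loop, assuming the class above and a hypothetical CSV layout in which every column except the last holds sequence values and the last column holds the integer label:

```python
import pandas as pd
import torch
from torch.utils.data import DataLoader, TensorDataset

def load_csv(path):
    # Hypothetical layout: features in all columns but the last, label in the last
    df = pd.read_csv(path)
    x = torch.tensor(df.iloc[:, :-1].values, dtype=torch.float32).unsqueeze(1)  # (N, 1, L)
    y = torch.tensor(df.iloc[:, -1].values, dtype=torch.long)
    return TensorDataset(x, y)

train_loader = DataLoader(load_csv("train_set.csv"), batch_size=32, shuffle=True)
test_loader = DataLoader(load_csv("test_set.csv"), batch_size=32)

model = HybridCNNLSTM(num_classes=2)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(50):  # epoch count is illustrative
    model.train()
    running_loss, correct, total = 0.0, 0, 0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch}: loss={running_loss / len(train_loader):.4f} acc={correct / total:.3f}")
```

The per-epoch loss and accuracy collected this way are what the repository plots over epochs and saves as train_accuracy_loss_hybrid_plot.eps.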

Here is how you can modify your code to include SHAP values for explaining the model's predictions:

  1. Install SHAP: pip install shap
  2. Modify the training script to integrate SHAP. Key points:
     • SHAP Integration: shap.DeepExplainer is used to interpret the model's predictions. A background dataset is passed to the explainer, and SHAP values are calculated and visualized during the training process.
     • Summary Plot: shap.summary_plot is called to visualize the SHAP values, providing insight into feature contributions (see the sketch below).
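
A minimal sketch of that integration, assuming the model and loaders from the earlier sketches; the background size, number of explained samples, and the class index picked for the plot are illustrative, and DeepExplainer's operator coverage for LSTMs varies by SHAP and PyTorch version:

```python
import shap

# A small batch of training data serves as the background distribution
background, _ = next(iter(train_loader))
explainer = shap.DeepExplainer(model, background[:50])

# Explain a handful of test samples
samples, _ = next(iter(test_loader))
samples = samples[:10]
shap_values = explainer.shap_values(samples)

# Depending on the SHAP version, shap_values is a list with one array per
# class or a single array with a trailing class axis; take class 0 here
values = shap_values[0] if isinstance(shap_values, list) else shap_values[..., 0]

# summary_plot expects 2D feature matrices, so flatten (channel, length)
# into a single feature axis per sample
shap.summary_plot(
    values.reshape(len(samples), -1),
    samples.numpy().reshape(len(samples), -1),
)
```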

This setup provides a basic integration of SHAP into your model training process, enabling you to gain insight into how different features affect the model's predictions. You can further customize and extend the XAI component based on your specific needs and requirements.


Explore More! 🚀
