Explainable AI: Using Local Interpretable Model-agnostic Explanations (LIME) & SHapley Additive exPlanations (SHAP) #1109
Pull Request for PyVerse 💡
Requesting to submit a pull request to the PyVerse repository.
Issue Title
Please enter the title of the issue related to your pull request.
Explainable AI: Using Local Interpretable Model-agnostic Explanations (LIME) & SHapley Additive exPlanations (SHAP)
Info about the Related Issue
What's the goal of the project?
Project Description
Explainable AI: Using LIME and SHAP
In the realm of machine learning, models often operate as "black boxes," making it difficult to understand how they arrive at their decisions. Explainable AI (XAI) seeks to demystify these models, providing insights into their inner workings. Two powerful techniques for achieving this are Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP).
LIME (Local Interpretable Model-Agnostic Explanations)
LIME focuses on explaining individual predictions rather than the entire model. It works by perturbing the input data and observing how the model's predictions change. LIME then fits a simple, interpretable model (like a linear model) to these perturbed instances and their corresponding predictions. This local model can be easily understood and provides insights into the factors that influenced the original model's prediction.
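To make this concrete, here is a minimal sketch of the LIME workflow, assuming the `lime` package and scikit-learn are installed; the Iris dataset and a random forest simply stand in for the black-box model being explained.

```python
# Minimal LIME sketch (assumption: `lime` and scikit-learn are installed).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=42).fit(data.data, data.target)

# LIME perturbs samples around one instance and fits a local linear surrogate.
explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

explanation = explainer.explain_instance(
    data_row=data.data[0],           # the single prediction to explain
    predict_fn=model.predict_proba,  # the black-box probability function
    num_features=4,                  # how many features the local surrogate keeps
)

# (feature condition, local weight) pairs for this one prediction
print(explanation.as_list())
```

The printed weights describe only the neighborhood of the chosen instance, which is exactly the "local" scope described above.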
SHAP (SHapley Additive exPlanations)
SHAP, on the other hand, leverages game theory to assign importance to each feature in a model's prediction. It calculates Shapley values, which represent the average marginal contribution of a feature to the model's output across all possible feature combinations. By examining these Shapley values, we can understand how much each feature contributed to the final prediction.
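A minimal sketch of the SHAP workflow is shown below, assuming the `shap` package and scikit-learn are installed; a random forest regressor on the bundled diabetes dataset stands in for the model being explained.

```python
# Minimal SHAP sketch (assumption: `shap` and scikit-learn are installed).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(random_state=42).fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # shape: (n_samples, n_features)

# Local view: each feature's contribution to one prediction.
print(dict(zip(data.feature_names, shap_values[0])))

# Global view: summary plot of SHAP values across the whole dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```

Because Shapley values are additive, the per-feature contributions for each row sum (together with the expected value) to that row's prediction, and averaging their magnitudes across rows yields the global feature importance shown in the summary plot.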
Key Differences Between LIME and SHAP:
| Feature | LIME | SHAP |
| -- | -- | -- |
| Focus | Local explanations for individual predictions | Global explanations for the entire model |
| Model | Fits a simple, interpretable model locally | Uses game theory to calculate feature importance |
| Visualization | Often uses bar charts or heatmaps to show feature importance | Uses force plots or decision plots to visualize feature contributions |
When to Use LIME or SHAP:
LIME:
- Ideal for understanding the reasons behind specific predictions.
- Useful for models that are difficult to interpret directly.
- Can be applied to a wide range of models, including deep neural networks.

SHAP:
- Provides a global understanding of feature importance across the entire dataset.
- Can be used to identify the most influential features for a given model.
- Offers a more rigorous and mathematically sound approach to feature attribution.
Real-World Applications:
- Healthcare: Understanding why a model predicts a certain disease diagnosis.
- Finance: Explaining credit decisions or stock price predictions.
- Autonomous Vehicles: Interpreting the reasons behind a self-driving car's actions.
- Criminal Justice: Assessing the fairness of algorithmic decision-making.
By using LIME and SHAP, we can enhance the transparency, accountability, and trust in AI systems. These techniques empower us to make informed decisions, identify biases, and improve the overall performance of machine learning models.
Name
Please mention your name.
inkerton
GitHub ID
Please mention your GitHub ID.
inkerton
Email ID
Please mention your email ID for further communication.
janvichoudhary116@gmail.com
Identify Yourself
Mention in which program you are contributing (e.g., WoB, GSSOC, SSOC, SWOC).
GSSOC
Closes
Enter the issue number that will be closed through this PR.
Closes: #1085
Type of Change
Select the type of change:
Checklist
Please confirm the following: