Have you completed your first issue?
Guidelines
Latest Merged PR Link
#1109
Project Description
Background:
Obesity is a prevalent health issue worldwide, contributing to a range of chronic diseases and reducing overall quality of life. Its continued rise drives growing costs and risks for individuals, society, and businesses, and tackling it is a top government priority in many nations. The goal is to prevent and reduce obesity by helping people build healthy eating and physical activity habits.
Project Objective:
The objective of this project is to develop a machine learning model capable of predicting the risk of obesity across multiple classes. By accurately 1) predicting obesity risk and 2) explaining the model's results, we aim to give individuals insight into their health status and provide healthcare professionals with a tool for early intervention and personalized recommendations.
Summary of XAI Techniques applied:
XAI Method | Type | Description
-- | -- | --
Permutation Feature Importance (PFI) | Global | Assesses the importance of input features by measuring the change in model performance when the values of those features are randomly permuted. For example, if the model's accuracy drops sharply when a feature is shuffled, that feature is highly important (a minimal usage sketch appears below this table).
SHapley Additive exPlanations (SHAP) | Global | Shows how much each feature contributes to a model's prediction by considering all possible combinations of features and their interactions. Features with positive SHAP values push the prediction up, while those with negative values push it down.
Partial Dependence Plot (PDP) | Global | Shows how changes in one feature affect a model's prediction while keeping other features constant. For example, a flat line implies little or no impact, while an upward slope indicates a positive influence.
Local Interpretable Model-agnostic Explanations (LIME) | Local | Explains individual predictions of a model by approximating its behavior with a simpler, interpretable model around a specific data point.
Diverse Counterfactual Explanations (DiCE) | Local | Generates alternative "what-if" scenarios to explain why a model made a specific prediction, offering insight into how changes in input features could lead to different outcomes.
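As a concrete illustration of the first technique in the table, below is a minimal Permutation Feature Importance sketch using scikit-learn. The synthetic dataset, the random forest, and the feature count are stand-ins chosen for illustration, not the project's actual data or model.

```python
# PFI sketch: a synthetic 3-class dataset and a random forest stand in for
# the project's real obesity dataset and model (both are assumptions here).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical multiclass setup mirroring "obesity risk across multiple classes".
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times and record the drop in accuracy;
# a large mean drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

For the local methods, reusing the same `model`, `X_train`, and `X_test` from above, LIME can explain a single prediction by fitting a simple weighted model around that one row (passing `top_labels=1` to explain the predicted class is an illustrative choice, not the project's configuration):

```python
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(X_train, mode="classification")
# Explain one test row; LIME fits a weighted linear model around this point.
exp = explainer.explain_instance(X_test[0], model.predict_proba, top_labels=1)
print(exp.as_list(label=exp.top_labels[0]))
```

The same fitted estimator could likewise be passed to the other libraries in the table, e.g. `shap.TreeExplainer(model)` for SHAP values or scikit-learn's `PartialDependenceDisplay.from_estimator` for PDPs.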
Full Name
inkerton
Participant Role
GSSOC
✅ This issue has been closed. Thank you for your contribution! If you have any further questions or issues, feel free to join our community on Discord to discuss more!