
[Code Addition Request]: Predicting Obesity Risk and Identifying Contributing Factors through XAI Techniques #1117

Closed
inkerton opened this issue Nov 7, 2024 · 2 comments

@inkerton (Contributor) commented Nov 7, 2024

Have you completed your first issue?

  • I have completed my first issue

Guidelines

  • I have read the guidelines
  • I have the link to my latest merged PR

Latest Merged PR Link

#1109

Project Description

Background:

The continued rise in obesity brings growing costs and risks for individuals, society, and businesses, and tackling it is a top government priority in many nations. Obesity is a prevalent health issue globally, contributing to various chronic diseases and reducing overall quality of life. The goal is to prevent, reduce, and tackle obesity by helping people build healthy eating and physical activity habits.

Project Objective:

The objective of this project is to develop a machine learning model capable of predicting the risk of obesity across multiple classes. By (1) accurately predicting obesity risk and (2) explaining the model's results, we aim to empower individuals with insights into their health status and provide healthcare professionals with a tool for early intervention and personalized recommendations.
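
As a rough illustration of the prediction half of the objective, here is a minimal multiclass training sketch in Python. The file name, column names, and model choice are assumptions for illustration, not part of this issue:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one categorical risk label over numeric features
# (assumes any categorical inputs are already numerically encoded).
df = pd.read_csv("obesity.csv")          # assumed file name
X = df.drop(columns=["obesity_level"])   # assumed target column
y = df["obesity_level"]                  # e.g. Normal ... Obesity_Type_III

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```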

Summary of XAI Techniques applied:

| XAI Method | Type | Description |
| --- | --- | --- |
| Permutation Feature Importance (PFI) | Global | Assesses the importance of input features by measuring the change in model performance when a feature's values are randomly permuted. For example, if the model's accuracy drops sharply when a feature is shuffled, that feature is very important. |
| SHapley Additive exPlanations (SHAP) | Global | Shows how much each feature contributes to a model's prediction by considering all possible combinations of features and their interactions. Features with positive SHAP values push the prediction up, while those with negative values push it down. |
| Partial Dependence Plot (PDP) | Global | Shows how changes in one feature affect a model's prediction while keeping other features constant. For example, a flat line implies little or no impact, while an upward slope indicates a positive influence. |
| Local Interpretable Model-agnostic Explanations (LIME) | Local | Explains an individual prediction by approximating the model's behavior with a simpler, interpretable model around a specific data point (local). |
| Diverse Counterfactual Explanations (DiCE) | Local | Generates alternative "what-if" scenarios to explain why the model made a specific prediction, offering insights into how changes in input features could lead to different outcomes. |
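
A hedged sketch of how these five techniques might be wired up, reusing the hypothetical `model`, `X_train`/`X_test`, target column, and feature names from the training sketch above. Exact APIs vary across `shap`, `lime`, and `dice_ml` versions, so treat this as illustrative only:

```python
import pandas as pd
import shap
import dice_ml
from lime.lime_tabular import LimeTabularExplainer
from sklearn.inspection import permutation_importance, PartialDependenceDisplay

# 1) Permutation Feature Importance (global): how much does performance
#    drop when each feature's values are shuffled?
pfi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
for name, score in sorted(zip(X_test.columns, pfi.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")

# 2) SHAP (global): per-feature contribution to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# 3) Partial Dependence Plot (global): prediction vs. one feature with the
#    others averaged out; multiclass models need a target class.
PartialDependenceDisplay.from_estimator(
    model, X_test, features=["Age", "Weight"],  # hypothetical feature names
    target=model.classes_[0],
)

# 4) LIME (local): fit a simple surrogate model around one test row.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    class_names=[str(c) for c in model.classes_],
    mode="classification",
)
print(lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5).as_list())

# 5) DiCE (local): counterfactual "what-if" rows for the same test row.
data = dice_ml.Data(
    dataframe=pd.concat([X_train, y_train], axis=1),
    continuous_features=list(X_train.columns),  # assumption: all-numeric features
    outcome_name="obesity_level",               # hypothetical target column
)
dice = dice_ml.Dice(data, dice_ml.Model(model=model, backend="sklearn"))
cfs = dice.generate_counterfactuals(X_test.iloc[[0]], total_CFs=3,
                                    desired_class=1)
cfs.visualize_as_dataframe(show_only_changes=True)
```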

Full Name

inkerton

Participant Role

GSSOC

github-actions bot commented Nov 7, 2024

🙌 Thank you for bringing this issue to our attention! We appreciate your input and will investigate it as soon as possible.

Feel free to join our community on Discord to discuss more!

@UTSAVS26 closed this as not planned on Nov 8, 2024

github-actions bot commented Nov 8, 2024

✅ This issue has been closed. Thank you for your contribution! If you have any further questions or issues, feel free to join our community on Discord to discuss more!
