Enhancing-Medical-Imaging-Model-Interpretability-with-Synthetic-Data-and-Explainable-AI-Techniques

This research creates synthetic medical images containing randomly placed anomalies, trains a convolutional neural network (CNN) to detect those anomalies, and applies LIME and SHAP to explain the trained model's predictions. The goal is to improve transparency and trustworthiness in medical imaging by making the model's decision-making process understandable. Illustrative sketches of each step are given below.
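
The repository's own data-generation code is not reproduced on this page, so the following is only a minimal sketch of one plausible approach, assuming greyscale NumPy images with bright circular anomalies; the function names (`generate_image`, `generate_dataset`) and parameters are illustrative, not taken from the repository.

```python
import numpy as np

def generate_image(size=64, anomaly_prob=0.5, rng=None):
    """Create one synthetic greyscale 'scan' with a noisy background.

    With probability `anomaly_prob`, a bright circular anomaly of random
    position and radius is drawn. Returns (image, label), label = 1 if an
    anomaly is present, else 0.
    """
    rng = rng or np.random.default_rng()
    image = rng.normal(loc=0.3, scale=0.05, size=(size, size))  # background noise
    label = int(rng.random() < anomaly_prob)
    if label:
        cx, cy = rng.integers(8, size - 8, size=2)   # random anomaly centre
        radius = rng.integers(3, 8)                  # random anomaly radius
        yy, xx = np.ogrid[:size, :size]
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
        image[mask] += 0.5                           # brighten the anomalous region
    return np.clip(image, 0.0, 1.0), label

def generate_dataset(n=1000, size=64, seed=0):
    """Stack n generated images into arrays shaped for a CNN."""
    rng = np.random.default_rng(seed)
    pairs = [generate_image(size, rng=rng) for _ in range(n)]
    images = np.stack([p[0] for p in pairs])[..., np.newaxis]  # (n, size, size, 1)
    labels = np.array([p[1] for p in pairs])
    return images, labels
```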
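
Continuing the sketch above, the training step can be illustrated with a small binary-classification CNN. Keras is an assumed framework choice, and the layer sizes below are placeholders rather than the repository's documented architecture.

```python
from tensorflow import keras

def build_model(size=64):
    """A small CNN that outputs the probability that an anomaly is present."""
    model = keras.Sequential([
        keras.layers.Input(shape=(size, size, 1)),
        keras.layers.Conv2D(16, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(32, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Train on the synthetic data from the previous sketch.
images, labels = generate_dataset(n=2000)
model = build_model()
model.fit(images, labels, epochs=5, validation_split=0.2, batch_size=32)
```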
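
For the interpretability step, a hedged sketch using the standard `lime` and `shap` packages, reusing `model` and `images` from the training sketch above; the repository may configure these explainers differently (a gradient-based SHAP explainer is assumed here).

```python
import numpy as np
from lime import lime_image
import shap

# --- LIME: explain one prediction by perturbing image segments ---
def predict_fn(batch):
    """LIME passes RGB copies of the image; keep one channel for the CNN
    and return per-class probabilities of shape (n, 2)."""
    grey = batch[..., :1].astype("float32")
    p = model.predict(grey, verbose=0).ravel()
    return np.stack([1 - p, p], axis=1)

lime_explainer = lime_image.LimeImageExplainer()
explanation = lime_explainer.explain_instance(
    images[0].squeeze(),   # 2D greyscale image; LIME converts it to RGB internally
    predict_fn,
    top_labels=2,
    hide_color=0,
    num_samples=1000,
)
_, lime_mask = explanation.get_image_and_mask(explanation.top_labels[0])

# --- SHAP: attribute the prediction to pixels against a background set ---
background = images[:50]                      # reference samples for the expectation
shap_explainer = shap.GradientExplainer(model, background)
shap_values = shap_explainer.shap_values(images[:5])
shap.image_plot(shap_values, images[:5])      # heatmaps of per-pixel attributions
```

LIME gives a segment-level view of a single prediction, while the SHAP pixel attributions (approximately) sum to the difference between the model's output and its expected output over the background samples, so the two views complement each other.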
