This project focuses on using Deep Learning techniques, specifically Convolutional Neural Networks (CNNs), to differentiate between real and synthetic X-ray images of hands. The initiative is aimed at enhancing the understanding and application of AI in medical imaging, with potential implications for training and healthcare technology.
The core of this project is the development and optimization of a CNN model that classifies hand X-ray images into real or synthetic categories, where the synthetic images are generated using Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). The repository contains detailed Jupyter notebooks covering data preprocessing, model training, hyperparameter tuning, and evaluation.
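For orientation, here is a minimal sketch of what such a binary classifier can look like, assuming PyTorch. The layer widths, the 128x128 grayscale input size, and the class count are illustrative assumptions and do not necessarily match the architecture used in the notebooks.

```python
# Minimal sketch of a real-vs-synthetic X-ray classifier (PyTorch assumed).
# Layer widths and the 128x128 grayscale input size are illustrative only.
import torch
import torch.nn as nn

class HandXrayCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # single-channel (grayscale) input
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, num_classes),                   # real vs. synthetic
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Sanity check with a dummy batch of four 128x128 grayscale images.
if __name__ == "__main__":
    model = HandXrayCNN()
    logits = model(torch.randn(4, 1, 128, 128))
    print(logits.shape)  # torch.Size([4, 2])
```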
- Implementation of a custom CNN model for classification.
- Detailed process of model training and validation.
- Exploration of various hyperparameters and their impact on model performance (a training-loop sketch follows this list).
- Analysis and classification of hand images into real or AI-generated categories.
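The following sketch ties the training and hyperparameter points above together: a plain training loop wrapped in a small learning-rate/batch-size grid search. It reuses the hypothetical `HandXrayCNN` from the sketch above; the grid values and epoch count are placeholders, not the settings explored in the notebooks.

```python
# Sketch of a small hyperparameter sweep (learning rate x batch size), assuming PyTorch.
# Grid values and epoch count are placeholders, not the settings from the notebooks.
import itertools
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset


def train_one_config(model: nn.Module, train_ds: Dataset, lr: float,
                     batch_size: int, epochs: int = 5) -> float:
    """Train with one hyperparameter setting and return the final batch loss."""
    loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    last_loss = float("inf")
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            last_loss = loss.item()
    return last_loss


def grid_search(make_model, train_ds: Dataset):
    """Try each (lr, batch_size) pair and keep the best by final training loss."""
    best = None
    for lr, batch_size in itertools.product([1e-3, 1e-4], [16, 32]):
        loss = train_one_config(make_model(), train_ds, lr, batch_size)
        print(f"lr={lr}, batch_size={batch_size} -> final loss {loss:.4f}")
        if best is None or loss < best[0]:
            best = (loss, lr, batch_size)
    return best
```

Usage would look like `grid_search(HandXrayCNN, train_dataset)`. In practice, selection would be driven by accuracy or AUC on a held-out validation split rather than the final training loss; the loop is condensed here for brevity.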
- `notebooks/`: Main Jupyter notebooks for model development and hyperparameter tuning.
- `src/`: Source code for the model, dataset preparation, and training utilities.
- `model/`: The final CNN model saved after training.
- `classified_hands.csv`: Model predictions on the test dataset.
The dataset includes real hand X-ray images and synthetic images generated by VAEs and GANs. Due to GitHub's file size limitations, the datasets are not stored directly in the repository; access instructions can be found in the `data/` folder.
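If the downloaded images are arranged into class subfolders such as `data/real/` and `data/synthetic/` (a hypothetical layout; the actual instructions live in `data/`), a labeled dataset could be assembled roughly as follows:

```python
# Hedged sketch of loading the hand X-ray data with torchvision; the folder layout is assumed.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Grayscale 128x128 tensors; adjust to whatever preprocessing the notebooks actually use.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# ImageFolder infers labels from subdirectory names, e.g. data/real/ and data/synthetic/.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
print(dataset.class_to_idx)  # e.g. {'real': 0, 'synthetic': 1}
```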
To use this project:
- Clone the repository.
- Install the necessary dependencies listed in `requirements.txt`: `pip install -r requirements.txt`
- Execute the Jupyter notebooks in the `notebooks` directory to train the model or make predictions (a hedged prediction sketch follows below).
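As an illustration of the prediction step, this sketch loads a trained model and writes a CSV in the spirit of `classified_hands.csv`. The checkpoint path `model/cnn_hands.pt`, the `data/test` layout, and the CSV columns are assumptions for the example; the notebooks define the actual workflow.

```python
# Sketch of the prediction step (checkpoint path, test layout, and CSV columns assumed).
import csv
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# Hypothetical held-out images arranged in class subfolders under data/test/.
test_ds = datasets.ImageFolder("data/test", transform=transform)
loader = DataLoader(test_ds, batch_size=32, shuffle=False)

# Reuses the HandXrayCNN class from the model sketch earlier in this README;
# the checkpoint file name is an assumption.
model = HandXrayCNN()
model.load_state_dict(torch.load("model/cnn_hands.pt", map_location="cpu"))
model.eval()

rows, index = [], 0
with torch.no_grad():
    for images, _ in loader:
        preds = model(images).argmax(dim=1)
        for pred in preds:
            path, _ = test_ds.samples[index]           # order is preserved (shuffle=False)
            rows.append((path, test_ds.classes[pred.item()]))
            index += 1

with open("classified_hands.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "predicted_label"])
    writer.writerows(rows)
```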
Contributions, suggestions, and feedback are highly encouraged. Feel free to open an issue or create a pull request for any improvements.
This project is released under the MIT License. For more information, please refer to the LICENSE file.