Neural networks are mathematical models (often implemented as computer programs) that make predictions on new data after being trained on past data and outcomes. GPU acceleration is a common way to speed up neural network training.
Medical imaging (MRI, X-ray, etc.) is useful in many medical applications, but radiologist interpretations of medical images are not perfect. Computer-aided diagnosis (CAD) uses software to analyze medical images, acting as a second pair of eyes for the radiologist.
Neural networks can be used for CAD and can be trained to make predictions about medical images. U-net is a widely used neural network architecture for medical image segmentation, which identifies where in an image a feature of interest appears. An example of medical image segmentation is predicting which pixels in a medical image depict a tumor.
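In concrete terms, a segmentation model outputs a per-pixel label map the same size as the input image. The toy NumPy snippet below is purely illustrative (it is not code from this repo) and shows how a binary tumor mask relates to an image:

```python
# Toy illustration of image segmentation (not code from this repo).
import numpy as np

image = np.random.rand(240, 240)   # stand-in for a single-channel MRI slice
mask = np.zeros_like(image, dtype=np.uint8)
mask[100:140, 90:130] = 1          # pretend these pixels were predicted as tumor

print(f"{mask.mean():.1%} of pixels are labeled as tumor")
```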
Training U-net on medical images benefits greatly from GPU acceleration, but on a limited GPU setup, training the traditional U-net can exhaust the available VRAM. A possible solution is to train a smaller version of U-net.
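Before training, it can help to check which GPUs and how much VRAM are available. A minimal sketch, assuming an NVIDIA GPU with nvidia-smi on the PATH:

```python
# Query GPU names and VRAM with nvidia-smi.
# ASSUMPTION: an NVIDIA GPU with nvidia-smi available on the PATH.
import subprocess

subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,memory.free",
     "--format=csv"],
    check=True,
)
```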
We wanted to test how reducing the size of U-net affects performance on tumor segmentation in brain MRI. We reduced the number of filters in the convolutional layers of U-net, which reduces the amount of VRAM required. We ran four experiments comparing U-net performance with all of the filters, half the filters, a quarter of the filters, and an eighth of the filters.
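The sketch below shows the idea behind the filter reduction. It assumes Keras and a 64-filter base width (the width used in the original U-net paper); the repo's actual model code may differ:

```python
# Sketch of a U-net convolutional block whose width is divided by ds.
# ASSUMPTIONS: Keras and a 64-filter base width; the actual
# implementation in this repo may differ.
from tensorflow.keras import layers

def conv_block(x, ds, base_filters=64):
    """Two 3x3 convolutions with base_filters // ds output channels."""
    filters = base_filters // ds   # ds=1 keeps the full width
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x
```

Because convolution parameter counts scale with the product of input and output channels, dividing every layer's width by ds cuts parameters by roughly ds squared, while activation memory shrinks by roughly ds.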
Install Python 3.7.6: https://www.python.org/downloads/
Navigate to the cloned repo and run this command in the terminal: pip install -r requirements.txt
If you are new to running Jupyter notebooks, see: https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/execute.html
To see the performance of our trained models, check out the demo notebook in the demo folder: demo.ipynb
NOTE* The dataset is ONLY available by request, and a request may take several days to process. Follow the instructions here to request the data: https://www.med.upenn.edu/cbica/brats2019/registration.html
Before running the demo, you will need to download the data and place it at the same level in the file structure as the cloned repo.
Run the jupyter notebook named normalize_and_save_all_data.ipynb to save the normalized data.
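The authoritative preprocessing is whatever that notebook implements. A common choice for BraTS-style MRI, and plausibly what "normalize" means here (an assumption), is per-volume z-score normalization over the nonzero brain voxels:

```python
# Sketch of per-volume z-score normalization, a common choice for MRI.
# ASSUMPTION: this approximates what normalize_and_save_all_data.ipynb
# does; consult the notebook for the actual preprocessing.
import numpy as np

def normalize_volume(volume):
    """Zero-mean, unit-variance scaling computed over nonzero voxels."""
    brain = volume[volume > 0]     # ignore the zero background
    if brain.size == 0:
        return volume
    return (volume - brain.mean()) / (brain.std() + 1e-8)
```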
Running this notebook should produce a file structure like the sketch below.
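An illustrative layout (folder names here are placeholders; the exact BraTS folder name depends on the release you download, and the notebook's output paths determine where the normalized files are written):

```
parent_folder
|-- <cloned repo>          (this repository)
|-- <BraTS 2019 data>      (downloaded dataset, sibling of the repo)
```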
Once the previous steps are complete, open demo.ipynb and run all cells in the notebook.
Follow steps 1 & 2 in Option B.
There are 4 experiments to run; a sketch for running them back to back follows the list below. NOTE* each experiment may take 10+ hours to run.
- Run all the cells in the jupyter notebook called experiment_ds_1.ipynb
- Run all the cells in the jupyter notebook called experiment_ds_2.ipynb
- Run all the cells in the jupyter notebook called experiment_ds_4.ipynb
- Run all the cells in the jupyter notebook called experiment_ds_8.ipynb
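If you want to queue all four experiments unattended, one option is to execute the notebooks headlessly with nbconvert. A minimal sketch, assuming jupyter is on the PATH and the notebooks are in the current directory:

```python
# Execute the four experiment notebooks back to back with nbconvert.
# ASSUMPTIONS: `jupyter` is on the PATH and the notebooks are in the
# current directory. The cell timeout is disabled because each run
# can take 10+ hours.
import subprocess

for ds in (1, 2, 4, 8):
    nb = f"experiment_ds_{ds}.ipynb"
    print(f"Running {nb} ...")
    subprocess.run(
        ["jupyter", "nbconvert", "--to", "notebook", "--execute",
         "--inplace", "--ExecutePreprocessor.timeout=-1", nb],
        check=True,
    )
```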
Replace the trained models and weights in the saved_models folder with the new ones generated in the main directory.
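A minimal sketch of that step, assuming Keras-style .h5 files (an assumption; adjust the pattern to match the filenames the experiments actually produce):

```python
# Move newly generated model files from the repo root into saved_models/.
# ASSUMPTION: Keras-style .h5 files; adjust the glob pattern to match
# the filenames the experiments actually write.
import glob
import shutil

for path in glob.glob("*.h5"):
    shutil.move(path, f"saved_models/{path}")
```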
ds stands for the factor by which we divide the number of filters in each convolutional layer of U-net. Thus, ds_1 is the full U-net, ds_2 has half the filters, ds_4 has a quarter of the filters, and ds_8 has an eighth of the filters (for example, a layer with 64 filters in ds_1 would have 16 filters in ds_4).
Once the previous steps are complete, open demo.ipynb and run all cells in the notebook.