
White Matter Hyperintensities Segmentation

This repository contains an algorithm that automatically detects and segments white matter hyperintensities from fluid-attenuated inversion recovery (FLAIR) and T1-weighted magnetic resonance scans. The technique is based on a deep fully convolutional neural network combined in an ensemble model, a machine learning approach to diagnosing diseases through medical imaging. The repository also includes functions for preprocessing and visualizing MR images.

Installation

To clone the git repository and install the dependencies, run the following commands from a terminal:

git clone https://github.com/MattRicchi/White-Matter-Hyperintensities-Segmentation.git
cd White-Matter-Hyperintensities-Segmentation
python3 -m pip install -r requirements.txt

The last command also installs the requirements, which can alternatively be installed before cloning the repository. The required packages are:

numpy==1.23.5
nibabel==5.1.0
tensorflow==2.12.0
focal_loss==0.0.7
cv2==4.7.0
sklearn==1.2.2
pandas==2.0.1
matplotlib==3.7.1
tqdm==4.65.0

The code was written and tested with these versions of the packages, but it may also work with earlier versions (not tested).
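As an optional sanity check (not part of the repository), you can verify that the core packages are importable and print the installed versions:

python3 -c "import numpy, nibabel, tensorflow; print(numpy.__version__, nibabel.__version__, tensorflow.__version__)"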

How to train the network

To start training the network, simply run the training.py script. Make sure the DATABASE folder contains the required data and is located in the same directory as the script. The DATABASE folder must contain two subfolders (a quick layout check is sketched after the list):

  • OnlyBrain, which contains brain-extracted images sorted into the following subfolders:
    • flair with brain-extracted FLAIR images
    • t1w with brain-extracted T1-weighted images
    • label with ground truth images
  • brain, which contains the brain mask images
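As a quick check before training, the expected layout can be verified with a few lines of Python (a minimal sketch based only on the folder names above; file naming inside the folders is not checked):

# Minimal layout check for the DATABASE folder expected by training.py
import os

required = [
    os.path.join('DATABASE', 'OnlyBrain', 'flair'),
    os.path.join('DATABASE', 'OnlyBrain', 't1w'),
    os.path.join('DATABASE', 'OnlyBrain', 'label'),
    os.path.join('DATABASE', 'brain'),
]

for folder in required:
    status = 'found' if os.path.isdir(folder) else 'MISSING'
    print(f'{folder}: {status}')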

If you need to perform brain extraction on your data, you can use the fslpy wrapper for FSL's BET tool; see the BET user guide for details.
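With the data in place, training can be started from the repository root, using the same invocation style as the installation commands above (training.py is not documented to take any arguments):

python3 training.py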

The training.py script automatically splits the images into training and test sets: images of patients 4, 11, 15, 38, 48, and 59 are reserved for testing, to evaluate the network's performance.
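For reference, the reserved patient IDs amount to a simple ID-based split, sketched below (the is_test_patient helper is hypothetical; the actual split logic lives in training.py):

# Hypothetical sketch of the ID-based train/test split; only the patient IDs come from this README
TEST_PATIENTS = {4, 11, 15, 38, 48, 59}

def is_test_patient(patient_id: int) -> bool:
    """Return True if the patient is reserved for testing the network."""
    return patient_id in TEST_PATIENTS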

Once the training and testing stages are complete, the final segmented images are saved in the NIfTI format under the same ID as the original FLAIR image.

How to evaluate network performance

To assess the accuracy of the network's segmentation maps, you can use the evaluate_results.py script. It computes the Dice Similarity Coefficient, Precision, Recall, and F1 score for each test patient and generates a boxplot of the evaluation metrics across the test patients.

How to plot final segmentation map

To visualize the segmentation maps generated by the network, you can use the plot_images.py script. It plots the original FLAIR image of the selected slice and the ground truth segmentation map next to the segmentation produced by the network. When you run the script, provide the patient and slice numbers of the image you want to display. For example, to plot slice 26 of patient 5, run the following command:

python plot_images.py -p 5 -s 26

Repository structure and content

The main directory includes:

  • training.py script that contains the code to train and test the model;
  • unet.py script with the get_unet() function defining the network used in the ensemble model;
  • evaluate_results.py script to compute the Dice Similarity Coefficient, Precision, Recall and F1 score for each test patient and to plot the boxplot of every evaluation metric;
  • plot_images.py script to display the FLAIR image, ground truth and segmentation result for every test patient;
  • test_pytest.py script containing all test functions.

The General_Functions directory includes the following scripts, which contain the functions needed to correctly handle medical images in the NIfTI format, together with all the functions for the preprocessing, training and postprocessing stages:

  • Nii_Functions.py
  • image_preprocessing.py
  • Training_Functions.py
  • postprocessing.py

The test_folder directory contains the images created during the testing of all the functions.

The Report directory contains the .tex files used to write the final report of the project.

Running tests

The tests for all the functions are contained in the test_pytest.py script. To run them, make sure you are in the White-Matter-Hyperintensities-Segmentation directory and that the pytest package is installed, then run the pytest command:

pytest
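If pytest is not already available in the environment (it is not listed in the requirements above), it can be installed with pip first:

python3 -m pip install pytest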

The scripts were written in Python 3.11.0 on Windows 11, and the functions were tested in the same environment.