TensorFlow implementation of a 3D-CNN U-net with Grid Attention and DSV for pancreas segmentation from CT.
The encoder part of the network is kept intact, while the generation part has been removed. A binary-classification part has been added at the bottom of the network.
(Roadmap: add another classification output to the segmentation network, which helps enrich the learned features with additional cancer/non-cancer information.)
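A minimal sketch of how such a classification head could sit on top of the encoder bottleneck; the layer sizes and names below are my assumptions, not the repository's code:

import tensorflow as tf

def add_classification_head(bottleneck):
    # Hypothetical cancer/non-cancer head attached to the encoder bottleneck.
    x = tf.keras.layers.GlobalAveragePooling3D()(bottleneck)  # collapse D, H, W
    x = tf.keras.layers.Dense(64, activation="relu")(x)       # 64 units is an arbitrary choice
    return tf.keras.layers.Dense(1, activation="sigmoid", name="cancer_prob")(x)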
- Classic U-net with residual connections
- Grid Attention gave the biggest boost in performance
- DSV forces intermediate feature maps to be semantically discriminative (see the sketch after this list)
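A minimal sketch of a 3D grid attention gate and a DSV head in the spirit of Attention U-Net; the layer choices below are assumptions, not the repository's exact implementation:

import tensorflow as tf
from tensorflow.keras import layers

def grid_attention_3d(skip, gating, inter_channels):
    # Gate an encoder skip connection with a coarser gating signal.
    # For brevity this assumes `skip` and `gating` already share the same
    # spatial resolution; a real gate resamples one of them first.
    theta = layers.Conv3D(inter_channels, 1)(skip)        # project skip features
    phi = layers.Conv3D(inter_channels, 1)(gating)        # project gating signal
    att = layers.Activation("relu")(layers.add([theta, phi]))
    att = layers.Conv3D(1, 1, activation="sigmoid")(att)  # voxel-wise attention map
    return layers.multiply([skip, att])                   # suppress irrelevant regions

def dsv_head(decoder_features, upsample_factor):
    # Deep supervision (DSV): an auxiliary segmentation output taken from an
    # intermediate decoder level and upsampled back to full resolution.
    seg = layers.Conv3D(1, 1, activation="sigmoid")(decoder_features)
    return layers.UpSampling3D(size=upsample_factor)(seg)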
The network has been trained on the publicly accessible CT-82 dataset from TCIA (64/16 split between training and validation).
Weighted DSC (Dice Similarity Coefficient) is used as the loss function; the weight hyperparameter value that best served my purposes is 7 (one possible formulation is sketched after the metrics below):
- recall ~95%
- precision ~57%
- DSC/F1 ~72% (though it is not really important for my experiments)
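One possible way the weight could enter a Dice-style loss is by penalizing false negatives harder than false positives, which is consistent with the recall/precision trade-off above; this is a sketch of that idea, not necessarily the repository's exact formula:

import tensorflow as tf

def weighted_dice_loss(y_true, y_pred, weight=7.0, eps=1e-6):
    # Assumption: the weight multiplies the false-negative term, so larger
    # weights trade precision for recall.
    y_true = tf.cast(y_true, y_pred.dtype)
    tp = tf.reduce_sum(y_true * y_pred)
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))
    dice = (2.0 * tp + eps) / (2.0 * tp + fp + weight * fn + eps)
    return 1.0 - dice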
Here is the TensorBoard.dev comparison between weight hyperparameter values 1, 7, and 10. I recommend enabling only the validation runs and applying the tag filter f1|recall|prec.
The whole network has been trained end-to-end, without any tiling. The reasoning is to avoid artifacts where the pancreas segmentation is cut off at a tile edge.
Every CT is downscaled to 160x160x160, which is the maximum size that fits into a Tesla K40m (12 GB RAM). Pooling is applied over the W and H dimensions only, while D (depth) stays constant (i.e. 160 through the whole network); this helps a little with segmentation recovery. There is a single CT per training batch, therefore BatchNormalization was not used.
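A sketch of the depth-preserving pooling described above, assuming channels-last tensors of shape (batch, D, H, W, C):

import tensorflow as tf

# Pool only over H and W; depth stays at 160 through the whole network.
downsample = tf.keras.layers.MaxPooling3D(pool_size=(1, 2, 2))

x = tf.random.normal([1, 160, 160, 160, 8])   # single-CT batch, as in training
print(downsample(x).shape)                    # (1, 160, 80, 80, 8)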
The optimization algorithm is Adam with a starting learning rate of 0.002, reduced on plateau by a factor of 0.1 with a patience of 30 epochs. The total number of epochs is restricted to 1000.
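A minimal sketch of this schedule with Keras callbacks; the monitored metric name is an assumption:

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=0.002)

# Multiply the learning rate by 0.1 when the monitored metric plateaus for 30 epochs.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=30)

# model.compile(optimizer=optimizer, loss=weighted_dice_loss)
# model.fit(dataset, epochs=1000, callbacks=[reduce_lr])   # epochs capped at 1000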
Training took ~60 hours on a single server with a single NVIDIA Tesla K40m GPU. Most of the progress was achieved in the first 3-5 hours.
The TensorFlow container used as the runtime environment:
docker run --rm -it tensorflow/tensorflow:2.3.2-gpu /bin/bash
git clone https://github.com/IvanKuchin/pancreas_segmentation.git
cd pancreas_segmentation
python train_segmentation.py
The learned segmentation weights are available here, hosted externally due to GitHub's limitation on large files.
Classification weights are available on Hugging Face.
Prerequisite: TensorFlow 2.3 (you could try the latest version, but there is no guarantee it will work).
Inference can be done on a regular laptop without any GPU. The time required for inference is ~10-15 seconds.
To test segmentation on your data:
- Clone this repository:
git clone https://github.com/IvanKuchin/pancreas_segmentation.git
- Create a predict folder inside the cloned folder and put a single-pass CT there. If it contains multiple passes, the result is unpredictable.
- Download weights.hdf5 from the link above and put it in the root of the cloned folder.
- Run the prediction:
python src/pancreas_ai/bin/predict_segmentation.py
The output will be prediction.nii, which is the Neuroimaging Informatics Technology Initiative (NIfTI) format.
All the magic happens in the last three lines:
if __name__ == "__main__":
    pred = Predict()
    pred.main("predict", "prediction.nii")
I used 3DSlicer to check the results visually.
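For a quick programmatic sanity check in addition to 3DSlicer, a minimal sketch with nibabel could look like this (nibabel is not part of the repository, and the 0.5 threshold is an assumption):

import nibabel as nib

pred = nib.load("prediction.nii").get_fdata()   # voxel-wise prediction volume
mask = pred > 0.5                               # binarize; threshold value is an assumption
print("segmented voxels:", int(mask.sum()))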
The network has been trained on CT-82, where every scan is contrast-free, so it should only be expected to recognize scans drawn from a distribution similar to CT-82. I tried testing on an input CT with contrast, and the result was unsatisfactory.
A video recording of the segmentation results is posted on connme.ru in the group Pancreas cancer detection.
Example of prediction in 3DSlicer (prediction: green, ground truth: red)
All information about training/metrics/results, as well as the trained weights, is available on the model card.
Temporarily, the classification part uses TotalSegmentator because of its better ability to segment CTs from different scanners, whereas our training set is limited to a single one. Later we will switch to our own model, which will significantly reduce inference time.
- Install Python >= 3.12
- (Optional) Create a virtual environment:
python -m venv .venv
- (Optional) Activate the virtual environment: .venv/Scripts/activate
- Install pancreas_ai:
pip install git+https://github.com/IvanKuchin/pancreas_segmentation totalsegmentator
- Create a checkpoints folder
- Download the latest version of weights.keras
- Create a predict folder:
mkdir predict
- Copy a single patient's DICOM CTs into the predict folder
- Run the inference:
predict
- Install Docker
- Run the command below from a terminal; this is required to see the prediction probability in the output.
- Place a single CT scan in DICOM format into a folder
- CPU (very slow: 10-15 mins):
docker run -it --rm -v <path to a CT folder>:/app/perdict ikuchin063/pancreas_segmentation
- GPU (requires an NVIDIA GPU):
docker run --gpus 'device=0' -it --rm -v <path to a CT folder>:/app/perdict ikuchin063/pancreas_segmentation
- The final line of the container output is the probability of having cancer (0 - cancer-free, 1 - positive).
The container size is huge (~21 GB), so it will take some time to pull it from the registry.
- Install Docker
- Run the commands below from a terminal; this is required to see the prediction probability in the output.
- Build the container:
docker build https://github.com/IvanKuchin/pancreas_segmentation.git -f docker/Dockerfile -t pancreas_ai
- Place a single CT scan in DICOM format into a folder
- CPU (very slow: 10-15 mins):
docker run -it --rm -v <path to a CT folder>:/app/perdict pancreas_ai
- GPU (requires an NVIDIA GPU):
docker run --gpus 'device=0' -it --rm -v <path to a CT folder>:/app/perdict pancreas_ai
- The final line of the container output is the probability of having cancer (0 - cancer-free, 1 - positive).