This repository provides the code and model files for multi-organ segmentation in abdominal CT using cascaded 3D U-Net models. The models are described in:
"Hierarchical 3D fully convolutional networks for multi-organ segmentation" Holger R. Roth, Hirohisa Oda, Yuichiro Hayashi, Masahiro Oda, Natsuki Shimizu, Michitaka Fujiwara, Kazunari Misawa, Kensaku Mori https://arxiv.org/abs/1704.06382
This work is based on the open-source implementation of 3D U-Net: https://lmb.informatik.uni-freiburg.de/resources/opensource/unet.en.html We thank the authors for providing their implementation.
Olaf Ronneberger, Philipp Fischer & Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, Vol. 9351, 234--241, 2015
Özgün Çiçek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox & Olaf Ronneberger. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, Vol. 9901, 424--432, Oct 2016
The 3D U-Net implementation is based on Caffe. To compile it, follow the Caffe installation instructions: http://caffe.berkeleyvision.org/installation.html#prerequisites
To run the segmentation algorithm on a new case, use:

    python run_full_cascade_deploy.py

Note: please update the paths in run_full_cascade_deploy.py before running.
If your images are stored in Hounsfield units, you might have to add a -2000 offset to the window limits (win_min/win_max for stages 1 and 2) in deploy_cascade.py.
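For reference, below is a minimal sketch of the kind of windowing adjustment meant here. The variable names (win_min1/win_max1, win_min2/win_max2) and the window values are placeholders for illustration only; consult deploy_cascade.py for the actual names and defaults.

    import numpy as np

    # Illustrative placeholder window limits; the real defaults live in deploy_cascade.py.
    win_min1, win_max1 = 0, 4000   # stage 1 window (placeholder)
    win_min2, win_max2 = 0, 4000   # stage 2 window (placeholder)

    # If the input volumes are in Hounsfield units, shift the limits by -2000
    # as suggested above.
    images_in_hounsfield_units = True
    if images_in_hounsfield_units:
        win_min1, win_max1 = win_min1 - 2000, win_max1 - 2000
        win_min2, win_max2 = win_min2 - 2000, win_max2 - 2000

    def apply_window(volume, win_min, win_max):
        """Clip a CT volume to the intensity window and rescale it to [0, 1]."""
        volume = np.clip(volume.astype(np.float32), win_min, win_max)
        return (volume - win_min) / float(win_max - win_min)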
For training, please follow the 3D U-Net instructions. prepare_data.py can be useful for converting NIfTI images and label images to HDF5 containers that can be read by Caffe.
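As a rough illustration, a conversion along these lines could look as follows. This is only a sketch: the dataset names ('data', 'label'), the float32 dtype, and the (batch, channel, z, y, x) axis order are assumptions that must match what the network prototxt and the Caffe HDF5 data layer expect; see prepare_data.py for the actual conversion.

    import h5py
    import nibabel as nib
    import numpy as np

    def nifti_pair_to_h5(image_nii, label_nii, out_h5):
        """Convert one CT volume and its label volume from NIfTI to an HDF5 container."""
        image = nib.load(image_nii).get_fdata().astype(np.float32)
        label = nib.load(label_nii).get_fdata().astype(np.float32)

        # nibabel returns (x, y, z); reorder to (z, y, x) and add batch/channel axes
        image = np.transpose(image, (2, 1, 0))[np.newaxis, np.newaxis, ...]
        label = np.transpose(label, (2, 1, 0))[np.newaxis, np.newaxis, ...]

        with h5py.File(out_h5, 'w') as f:
            f.create_dataset('data', data=image, compression='gzip')
            f.create_dataset('label', data=label, compression='gzip')

    # Example call with placeholder file names
    nifti_pair_to_h5('case001_ct.nii.gz', 'case001_label.nii.gz', 'case001.h5')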
The cascaded approach is described in more detail in:
Roth, Holger R., Hirohisa Oda, Xiangrong Zhou, Natsuki Shimizu, Ying Yang, Yuichiro Hayashi, Masahiro Oda, Michitaka Fujiwara, Kazunari Misawa, and Kensaku Mori. "An application of cascaded 3D fully convolutional networks for medical image segmentation." Computerized Medical Imaging and Graphics 66 (2018): 90-99. https://arxiv.org/pdf/1803.05431.pdf
We also provide a model fine-tuned from the abdominal model using the VISCERAL data set [1]. All related code and models are provided in the "VISCERAL" subfolder. This folder also contains *.sh scripts for fine-tuning the different stages of the cascade; train.sh is for training the model from scratch. The data list files referenced in models/3dUnet_Visceral_with_BN.prototxt need to be updated accordingly (see the example below). For more details, please refer to VISCERAL/JAMIT2017_rothhr_manuscript.pdf
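Caffe's standard HDF5 data layer reads its input from a plain-text list file containing one .h5 path per line. Assuming the prototxt uses this layer, updating the data list files amounts to pointing its source entries at list files that reference your own HDF5 volumes. A small, hypothetical helper for generating such a list (directory and file names are placeholders):

    import glob
    import os

    h5_dir = '/path/to/visceral/h5'                      # placeholder directory
    list_file = os.path.join(h5_dir, 'train_list.txt')   # placeholder list file name

    # Write one .h5 path per line, as expected by Caffe's HDF5 data layer.
    with open(list_file, 'w') as f:
        for h5_path in sorted(glob.glob(os.path.join(h5_dir, '*.h5'))):
            f.write(h5_path + '\n')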
Please contact Holger Roth (rothhr@mori.m.is.nagoya-u.ac.jp) with any questions.
[1] Jimenez-del-Toro, O., Müller, H., Krenn, M., Gruenberg, K., Taha, A. A., Winterstein, M., et al. (2016). Cloud-based evaluation of anatomical structure segmentation and landmark detection algorithms: VISCERAL anatomy benchmarks. IEEE Transactions on Medical Imaging, 35(11), 2459-2475. (http://www.visceral.eu/benchmarks/anatomy3-open/)