https://github.com/BrainSegmentation/tissue-parts-detection
Detecting brain tissue and magnet parts using Mask R-CNN.
When cloning the repo, use the --recursive flag so that the data submodule is downloaded as well:
$ git clone https://github.com/BrainSegmentation/tissue-parts-detection.git --recursive
The cloud platform used for the machine learning pipeline is Paperspace.
The machine used had the following specifications:
- RAM: 30 GB
- CPUs: 8
- HD: 100 GB
- GPU: 8 GB
Containers are run with nvidia-docker; the Docker image used is waleedka/modern-deep-learning (or a custom brainsegmentation image, as in the command below).
The following command mounts ~/Documents on the local machine (left of the colon) to /Documents inside the container (right of the colon):
sudo nvidia-docker run -it -v ~/Documents:/Documents -p 8888:8888 brainsegmentation bash
All the scripts are in the notebooks folder:
- brain.py: configures the training and launches train/detection using Mask R-CNN
- gen_artificial_images.py: generates artificial images from extracted images (v1)
- artificialPatchGenerator.py: generates artificial images from JSON (v2)
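The idea behind both generators is to scatter extracted tissue/magnet patches over a blank background to synthesize new training images. A minimal pure-Python sketch of that idea (function names and the 2-D list representation are illustrative; the actual scripts work on real image arrays):

```python
import random

def paste_patch(canvas, patch, top, left):
    """Copy a small 2-D patch into a larger 2-D canvas at (top, left), clipping at the borders."""
    rows = min(len(patch), len(canvas) - top)
    cols = min(len(patch[0]), len(canvas[0]) - left)
    for r in range(rows):
        for c in range(cols):
            canvas[top + r][left + c] = patch[r][c]
    return canvas

def make_artificial_image(patches, height=512, width=512, seed=None):
    """Scatter extracted patches at random positions over a black background."""
    rng = random.Random(seed)
    canvas = [[0] * width for _ in range(height)]
    for patch in patches:
        top = rng.randrange(height)
        left = rng.randrange(width)
        paste_patch(canvas, patch, top, left)
    return canvas
```

The real generators additionally vary rotation and color (see the rotation and saturation items in the roadmap below), which this sketch omits.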
Along with several notebooks:
- inspect_data: observe the data provided to train the model
- inspect_braintissue_model: observe the model and the different steps of Mask R-CNN
Several other notebooks document the evolution of the data analysis and of the artificial image generation.
You have several options to run the pipeline (all require including the Mask RCNN folder); first go to samples/braintissue.
The first option is to use the notebooks to manually inspect the outputs of the algorithms.
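In the notebooks, "including the Mask RCNN folder" typically means putting the repository root on sys.path before importing the mrcnn package. A minimal sketch (the relative path is an assumption about your checkout layout; adjust it to where the fork lives):

```python
import os
import sys

# Assumed location of the Mask RCNN fork relative to samples/braintissue;
# change this if your notebook lives elsewhere.
ROOT_DIR = os.path.abspath(os.path.join("..", ".."))

if ROOT_DIR not in sys.path:
    sys.path.insert(0, ROOT_DIR)  # makes `import mrcnn.model` resolvable
```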
Using the brain.py script:
Detect on the test set (the stage1_test folder in datasets/brainseg):
$python brain.py detect --dataset=/path/to/dataset --subset=stage1_test --weights=last
Train a new model on the train subset, starting from a specific weights file:
$python brain.py train --dataset=/path/to/dataset --subset=train --weights=/path/to/weights.h5
Train a new model starting from a specific weights file, using the full stage1_train dataset:
$python brain.py train --dataset=/path/to/dataset --subset=stage1_train --weights=/path/to/weights.h5
Resume training a model that you had trained earlier:
$python brain.py train --dataset=/path/to/dataset --subset=train --weights=last
If you want to fully reproduce the training of the brain segmentation model:
- Download the training/test set (links below)
- Download the ResNet50 weights (links below) to use as the backbone
- Clone our MaskRCNN fork (via the recursive git clone)
- Go to samples/braintissue and, in datasets/brainseg, put all the training images in stage1_train and all the test images in stage1_test
- Edit brain.py to set your own parameters, especially the list of validation images (if you have new images)
- Then run:
$python brain.py train --dataset=datasets/brainseg --subset=train --weights=/path/to/resnet50_weights.h5
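The validation split edited in brain.py is typically a hard-coded list of image IDs held out from the train folder. A conceptual sketch of how such a split works (the variable name VAL_IMAGE_IDS and the IDs themselves are placeholders; check brain.py for the actual names):

```python
# Placeholder validation list: images listed here are held out of training.
VAL_IMAGE_IDS = ["section_0003", "section_0017"]

def split_dataset(all_image_ids, val_ids=VAL_IMAGE_IDS):
    """Return (train_ids, val_ids): every ID not in the validation list is trained on."""
    train_ids = [i for i in all_image_ids if i not in val_ids]
    held_out = [i for i in all_image_ids if i in val_ids]
    return train_ids, held_out
```

If you add new images, remember to put some of their IDs into the validation list so the model is evaluated on them too.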
If you want to use the trained model and fine-tune a more precise model from it:
- Get the data (training/test set) as before
- Download the brainseg.h5 weights
- Put your new images in stage1_train as well
- Do not forget to add some of the new images to the validation image list in brain.py
- Modify brain.py: in the train function, uncomment the lines that train only the 'heads' layers and comment out the lines that train all layers
- Set your parameters
- Then run:
$python brain.py train --dataset=datasets/brainseg --subset=train --weights=/path/to/brainseg_epoch_weights.h5
Inside the Docker container, run the Jupyter Notebook server with the following command:
$jupyter notebook --allow-root
Then open the link on your local machine:
http://ipaddress:8888
https://mega.nz/#!amo2hYTQ!p0QQQCUAaBEAAhcQ7S6VGyHXEL_66J32FL-vKzF5zKA
https://mega.nz/#!3mRTzKbB!rEpygnbG0WGdEysMNa8ULzcuu_AsfuM8PI2SHZs9F0w
https://mega.nz/#!KzBXGC6Q!Sae8SI-7kjzGY3L5IdF7A9KQrcSxSByj8-bCKMjzm4M
https://mega.nz/#!n3oSyCiD!yJ4rbm5hgNGH-MgoRQTPs2cn8q3yY6PbiliOWON32kc
GitHub Organization project BrainSegmentation/tissue-parts-detection
GitHub Organization initial dataset BrainSegmentation/section-segmentation-dataset
Training set : https://mega.nz/#!JCpw3IAb!2j91l1G2n5EbvPd3XkEZZNqA1R2VytwXhUrUYTGlm7k
Test set : https://mega.nz/#!obgCUAiQ!ebwNPfEdWKcFBFDvX2KP1gFtAjZH2OW3HSXlwwzppG4
- Create Artificial Images
- Create Crop Inference
- Create Extract
- Create Stage1 Data
- Crop Images
- Generate 3-channel Images
- Inspect Data
Deadline: 19.11.
- Reading materials related to Tissue Segmentation and Mask R-CNN
- Diving into the data labeling
- Making training, validation, and testing datasets
Deadline: 26.11.
- Explore existing projects and get an overview of their results
- Creation of section (mag + brain parts) coordinates (txt file)
- Rotation algo: different angles and centers of rotation
- First trial: apply segmentation models on our data
- Create Docker image (TensorFlow / Mask RCNN)
Deadline: 3.12.
- From these sections: create straight boxes with one section per box (+ margin) --> for the beginning of the detection model (Friday)
- Saturation algo (create different images by changing color: darker/lighter)
- Create/clean full dataset of boxes for Detection Model
- Create Detection Model
- Binary representation of fluorescent images of magnetic part (white on magnetic part)
Deadline: 10.12.
- Another iteration of Machine Learning pipeline
- Implementing GUI for proofreading (webpage)
- Code documentation
Deadline: 17.12.
- Report
- Code documentation (final touches)