PyTorch: Detection and Segmentation Module

Introduction

This repository was made as part of my internship at Neurabot. In short, it is used to detect objects in medical imaging and pathology images. The cropped detections are then segmented using an interactive segmentation technique. The object detection model needs data to learn, whereas the segmentation model does not require any. The main references are:

You can see modifications of the original repositories here and here.

Requirements

albumentations
numpy
opencv-contrib-python
pycocotools
torch
torchvision

Installation

  1. Install the requirements listed above.
  2. Clone this repository: git clone --depth 1 https://github.com/fadamsyah/pytorch_deseg_module.git
  3. Install the cloned repository: pip install pytorch_deseg_module

Example:

foo@bar:~$ git clone --depth 1 https://github.com/fadamsyah/pytorch_deseg_module.git
foo@bar:~$ pip install pytorch_deseg_module

EfficientDet Training

The pretrained weights are available in the original repository. For training, please refer to the original repository, but pay attention to the augmentation parameter in projects/<project>.yml (examples). You are not encouraged to train an EfficientDet model from scratch unless you have a lot of computing resources and data. From my experience, transfer learning alone was enough to produce a reasonably good and robust model.
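As a rough, generic illustration of this transfer learning setup (and of the head_only phases mentioned in the training tips below), the following PyTorch sketch freezes everything except a stand-in head module and later unfreezes the rest. It is not the repository's actual code, and the module names are placeholders.

import torch
import torch.nn as nn

# Toy stand-in for an EfficientDet-like model (placeholder, not the real architecture)
class ToyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)  # stand-in for backbone + neck
        self.head = nn.Conv2d(16, 4, 1)                 # stand-in for the detection head

    def forward(self, x):
        return self.head(self.backbone(x))

model = ToyDetector()
# model.load_state_dict(torch.load('weights.pth'), strict=False)  # load pretrained weights

# Phase 1 (head_only=True): train only the head
for name, param in model.named_parameters():
    param.requires_grad = name.startswith('head')
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad),
                              lr=5e-4, weight_decay=1e-5)

# Phase 2 (head_only=False): once the training loss saturates, unfreeze everything
for param in model.parameters():
    param.requires_grad = True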

This is how you should structure your project folder when training an EfficientDet model:

# You are highly recommended to structure your project folder as follows
project/
    datasets/
        {your_project_name}/
            annotations/
                - instances_train.json
                - instances_val.json
                - instances_test.json
            train/
                - *.jpg
            val/
                - *.jpg
            test/
                - *.jpg
    logs/
        # The training history will be written in TensorBoard format
        {your_project_name}/
            tensorboard/
    projects/
        # Put your project description here
        - {your_project_name}.yml
    weights/
        # Parameters of the trained model will be automatically saved
        - *.pth
    efficientdet_train.py

Tips for training:

  1. Augmentation. In my opinion, some of the most useful augmentation schemes in the albumentations library are listed below (a sketch of how they can be composed follows this list):

    • Transpose
    • HorizontalFlip
    • VerticalFlip
    • ShiftScaleRotate
    • RandomCrop
    • MotionBlur
    • GaussianBlur
    • OneOf

    Note: You MUST visualize the augmented data to check whether the augmentation is relevant to your case. If you add improper augmentations to your training set, the results will most likely be poor.

  2. At the beginning of your training process, use head_only=True to train the head without updating the backbone and neck. Then, switch to head_only=False once the training loss starts to saturate.

  3. You may want to check mnslarcher/kmeans-anchors-ratios to determine the optimal anchor ratios and scales.
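The following is a minimal sketch, not the repository's actual augmentation pipeline, of how the transforms listed in tip 1 can be composed with albumentations. All probabilities, limits, and the 512x512 crop size are illustrative assumptions; tune them and, as stressed above, visualize the results on your own data.

import albumentations as A
import numpy as np

train_transform = A.Compose(
    [
        A.Transpose(p=0.5),
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=0.5),
        A.RandomCrop(height=512, width=512, p=0.5),
        # OneOf applies exactly one of the wrapped transforms when triggered
        A.OneOf([A.MotionBlur(blur_limit=5), A.GaussianBlur(blur_limit=(3, 5))], p=0.3),
    ],
    # Transform bounding boxes together with the image; 'coco' matches instances_*.json
    bbox_params=A.BboxParams(format='coco', label_fields=['category_ids']),
)

# Dummy call to check that the pipeline runs; always inspect real augmented samples
sample = train_transform(
    image=np.zeros((768, 768, 3), dtype=np.uint8),
    bboxes=[[100, 100, 200, 150]],  # COCO format: [x_min, y_min, width, height]
    category_ids=[1],
)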

Inference

Coming soon ...

CLI Examples

# Visualize a sample of the dataset
python efficientdet_dataset_viz.py -p ki67 --set_name val --transform False --resize False --idx 0

# Visualize an IOG segmentation output from the dataset
python iog_dataset_viz.py -p ki67 --set_name test --idx 0 --image_name output.jpg --iog_weights_path weights.pth --iog_obj_idx 0

# A sample training command
python efficientdet_train.py -p ki67 -c 0 --head_only True --lr 5e-4 --weight_decay 1e-5 --batch_size 16 --load_weights weights.pth --num_epochs 20

# Evaluate an EfficientDet model
python efficientdet_coco_eval.py -p ki67 -c 0 -w weights.pth --set_name val --on_every_class True --cuda True

# A sample inference
python inference.py --project ki67 --img_path image.jpg --use_cuda True --det_compound_coef 0 --det_weights_path effdet_weights.pth --iog_weights_path iog_weights.pth

TODO

  • Add code to visualize the object detection dataset.
  • Add code to visualize IOG segmentation from the dataset.
  • Save both the last and the best parameters when training with efficientdet_train.py.
  • Generalize the IoGNetwork for multi-class segmentation.
  • Use the PyTorch DataLoader in IoGNetwork so that the batch_size can be specified during inference.
