This repository contains the code for "Self-supervised Pre-training for Nuclei Segmentation" (MICCAI 2022).
Download the "R50-ViT-B_16" pre-trained weights from https://console.cloud.google.com/storage/vit_models/ and place the downloaded weights file in "model/vit_checkpoint/imagenet21k/".
- Python 3.6.13
- PyTorch 1.10.2
- Download the *.svs files listed in "wsi_list.txt" from https://portal.gdc.cancer.gov/. These are the Whole Slide Images (WSIs).
- Put the downloaded WSIs in "data/MoNuSeg_WSI/".
- Run "preprocessing/make_tiles.py" to extract patches from the downloaded WSIs; the extracted patches are saved in the "monuseg_tiles_512x512" folder.
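The tiling step above can be sketched as follows. This is a minimal illustration of how non-overlapping 512x512 patch coordinates are laid out over a slide; the helper name is hypothetical, and the actual "preprocessing/make_tiles.py" additionally reads the WSI pixels (e.g. via OpenSlide) before saving each patch.

```python
def tile_coords(width, height, tile_size=512):
    """Compute top-left (x, y) coordinates of non-overlapping tiles.

    Tiles that would extend past the slide boundary are skipped, so
    every returned tile is exactly tile_size x tile_size. (Illustrative
    sketch; the repo's make_tiles.py also reads and saves the pixels.)
    """
    coords = []
    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            coords.append((x, y))
    return coords

# Example: a 1200x1100 region yields 2x2 = 4 full 512x512 tiles.
print(len(tile_coords(1200, 1100)))  # 4
```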
- Run "MoNuSeg_dataset_builder.py"
- Download the TNBC dataset.
- Put the downloaded dataset at "data/zenodo/".
- Split the images and masks into the "train_images", "train_masks", "validation_images", "validation_masks", "test_images", and "test_masks" folders.
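The split step can be sketched like this. The 70/15/15 ratios, the seed, and the function name are illustrative assumptions; the repository does not specify the exact TNBC split proportions.

```python
import random

def split_dataset(filenames, train_frac=0.7, val_frac=0.15, seed=0):
    """Deterministically partition filenames into train/val/test lists.

    (Illustrative sketch: the 70/15/15 ratios and seed are assumptions,
    not the repo's documented split. In practice the files would then
    be copied into the corresponding *_images / *_masks folders.)
    """
    names = sorted(filenames)
    random.Random(seed).shuffle(names)
    n_train = int(len(names) * train_frac)
    n_val = int(len(names) * val_frac)
    return (names[:n_train],
            names[n_train:n_train + n_val],
            names[n_train + n_val:])

train, val, test = split_dataset([f"img_{i:03d}.png" for i in range(100)])
print(len(train), len(val), len(test))  # 70 15 15
```

Using a fixed seed keeps the split reproducible across runs, which matters when comparing checkpoints.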
CUDA_VISIBLE_DEVICES=0 python train.py --root_path data --batch_size 2 --vit_name R50-ViT-B_16
python test.py --vit_name R50-ViT-B_16
python evaluate.py
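A standard metric for nuclei segmentation that an evaluation script like "evaluate.py" may report is the Dice coefficient; the sketch below is an assumption for illustration, not the script's actual implementation (which may also compute other metrics such as AJI).

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice score between two binary masks given as flat 0/1 lists.

    Dice = 2 * |P intersect T| / (|P| + |T|); eps avoids division by
    zero when both masks are empty. (Illustrative sketch only.)
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)

pred   = [1, 1, 0, 0]
target = [1, 0, 0, 0]
print(round(dice_coefficient(pred, target), 4))  # 0.6667
```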
@inproceedings{haq2022self,
title={Self-supervised Pre-training for Nuclei Segmentation},
author={Haq, Mohammad Minhazul and Huang, Junzhou},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
pages={303--313},
year={2022},
organization={Springer}
}