UNAD: Universal Anatomy-initialized Noise Distribution Learning Framework Towards Low-dose CT Denoising

Official repository for UNAD: Universal Anatomy-initialized Noise Distribution Learning Framework Towards Low-dose CT Denoising (accepted at ICASSP 2024). UNAD achieves state-of-the-art results even without a powerful feature extractor such as ViT.

🛠️ Installation

UNAD relies on PyTorch and Python 3.6+ (the command below creates a Python 3.8 environment). To install the required packages, run:

conda create -n unad python=3.8 pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.3 -c pytorch -y
conda activate unad
pip install -r requirements.txt
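
To verify the environment, you can run a quick sanity check (this snippet is our own convenience check, not part of the repository):

import torch

# Sanity check (not shipped with the repository): confirm the expected
# PyTorch version and that CUDA is visible.
print(torch.__version__)          # expected: 1.10.1
print(torch.cuda.is_available())  # should print True on a CUDA 11.3 machine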

📖 Dataset Preparation

We use the same data processing pipeline as REDCNN. Please download the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge dataset from Mayo Clinic. Then execute the following command to prepare and convert the dataset:

python prep.py    # Convert DICOM files to npy files
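
For reference, a DICOM-to-npy conversion typically looks like the sketch below; prep.py is the authoritative implementation, and the file names and Hounsfield-unit rescaling here are assumptions:

import numpy as np
import pydicom

# Minimal sketch of converting one DICOM slice to .npy (file names are
# placeholders; see prep.py for the actual pipeline).
ds = pydicom.dcmread('slice_0001.dcm')                   # read one CT slice
img = ds.pixel_array.astype(np.float32)                  # raw stored values
img = img * float(ds.RescaleSlope) + float(ds.RescaleIntercept)  # convert to Hounsfield units
np.save('slice_0001.npy', img)                           # save for training/testing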

📦 Testing

To replicate our state-of-the-art UNAD results, download the model weights from either Google Drive or Baidu Drive (password: unad) and place them in the ./work_dirs/unad/ directory. Then execute the following command to test UNAD:

python main.py config/unad.yaml --test --test_iters 43000

The evaluation metrics will be printed to the console, and visualizations of the prediction results will be saved to ./work_dirs/unad/fig.
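
For reference, image-quality metrics for CT denoising are commonly computed as in the sketch below; the exact metric set and file names used by main.py are assumptions here:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical example of scoring one prediction against its full-dose target
# (file names are placeholders; main.py computes the official numbers).
pred = np.load('pred_slice.npy')
target = np.load('full_dose_slice.npy')
data_range = float(target.max() - target.min())
print('PSNR:', peak_signal_noise_ratio(target, pred, data_range=data_range))
print('SSIM:', structural_similarity(target, pred, data_range=data_range))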

🕹️ Pre-training and Training

The UNAD training process comprises two phases: pre-training and the main training stage.

Note: the results in the paper were obtained on a single GPU. When using multiple GPUs, some hyperparameters may need adjustment.

📚 Pre-training

For single-GPU pretraining, run:

python main_pretrain.py config/unad_pretrain.yaml

For multi-GPU pretraining, run the following, where ${GPU_NUM} is the number of GPUs to use:

bash ./dist_train.sh config/unad_pretrain.yaml ${GPU_NUM}

🧩 Training

After pre-training completes, update the pretrain_path field in config/unad.yaml to point to the checkpoint saved at the final epoch of pre-training. Then run one of the commands below to train UNAD (a conceptual sketch of the checkpoint loading follows the commands).

For single-GPU training, run:

python main.py config/unad.yaml

For multi-GPU training, run:

bash ./dist_train.sh config/unad.yaml ${GPU_NUM} 
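
Conceptually, the pretrain_path mechanism amounts to loading the pre-training checkpoint into the model before the main training loop starts. A minimal sketch, assuming a plain state_dict checkpoint (the model, path, and strict setting below are placeholders; main.py holds the real logic):

import torch
import torch.nn as nn

# Hypothetical illustration of loading pre-training weights; the real model
# and checkpoint path come from config/unad.yaml (pretrain_path).
model = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1))
state = torch.load('work_dirs/unad_pretrain/latest.pth', map_location='cpu')
model.load_state_dict(state, strict=False)  # strict=False tolerates keys used only during pre-training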

🖊️ Citation

If you find this work useful in your research, please cite our paper:

@INPROCEEDINGS{10446919,
  author={Gu, Lingrui and Deng, Weijian and Wang, Guoli},
  booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  title={UNAD: Universal Anatomy-Initialized Noise Distribution Learning Framework Towards Low-Dose CT Denoising}, 
  year={2024},
  volume={},
  number={},
  pages={1671-1675},
  keywords={Computed tomography;Source coding;Noise reduction;Signal processing algorithms;Signal processing;Network architecture;Feature extraction;CT denoising;Deep learning;Pre-training;Distribution Representations;Low-dose CT},
  doi={10.1109/ICASSP48485.2024.10446919}}
