
Multimodal Information Interaction for Medical Image Segmentation.

Welcome! We encourage you to reproduce our code.

Our article is now publicly available on arXiv: [2404.16371] Multimodal Information Interaction for Medical Image Segmentation. This repository provides the training code for the MM-WHS dataset; if you want to reproduce our results, use the training scripts provided here.

Dataset

The dataset used in this paper is the MM-WHS dataset, which is available from the Multi-Modality Whole Heart Segmentation Challenge. The data preprocessing used in this paper is performed with the registration method described in the text.

We also provide our dataset preprocessing script, prepocess.py; after updating the file paths for your local copy of the dataset, run it to obtain the same data used in the article:

python prepocess.py
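
The exact steps live in prepocess.py. As a rough orientation only, the sketch below shows the kind of registration-and-resampling pipeline such multimodal preprocessing typically involves, written with SimpleITK. The file names, target spacing, and registration settings are illustrative assumptions, not the parameters used in the paper.

# A minimal sketch of MM-WHS-style CT/MR preprocessing with SimpleITK.
# Paths, spacing, and registration settings are illustrative assumptions;
# see prepocess.py for the actual pipeline.
import SimpleITK as sitk

def resample_to_spacing(image, spacing=(1.0, 1.0, 1.0), is_label=False):
    """Resample a volume to (assumed) isotropic spacing."""
    original_spacing = image.GetSpacing()
    original_size = image.GetSize()
    new_size = [
        int(round(osz * ospc / nspc))
        for osz, ospc, nspc in zip(original_size, original_spacing, spacing)
    ]
    interp = sitk.sitkNearestNeighbor if is_label else sitk.sitkLinear
    return sitk.Resample(
        image, new_size, sitk.Transform(), interp,
        image.GetOrigin(), spacing, image.GetDirection(), 0, image.GetPixelID(),
    )

def register_mr_to_ct(ct, mr):
    """Rigidly align the MR volume to the CT volume using mutual information."""
    fixed = sitk.Cast(ct, sitk.sitkFloat32)
    moving = sitk.Cast(mr, sitk.sitkFloat32)
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(initial, inPlace=False)
    transform = reg.Execute(fixed, moving)
    # Resample MR onto the CT grid so both modalities are voxel-aligned.
    return sitk.Resample(mr, ct, transform, sitk.sitkLinear, 0.0, mr.GetPixelID())

if __name__ == "__main__":
    # Hypothetical file names; replace with your local MM-WHS files.
    ct = resample_to_spacing(sitk.ReadImage("ct_train_1001_image.nii.gz"))
    mr = resample_to_spacing(sitk.ReadImage("mr_train_1001_image.nii.gz"))
    mr_aligned = register_mr_to_ct(ct, mr)
    sitk.WriteImage(mr_aligned, "mr_train_1001_to_ct.nii.gz")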

Run

In addition to the MicFormer code, this repository also includes training and testing code for state-of-the-art methods, including VT-UNet, Swin-UNet, SwinUNETR, nnFormer, and MedNeXt.

Citations

@misc{fan2024multimodal,
      title={Multimodal Information Interaction for Medical Image Segmentation}, 
      author={Xinxin Fan and Lin Liu and Haoran Zhang},
      year={2024},
      eprint={2404.16371},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
