
TransLandSeg: A Transfer Learning Approach for Landslide Semantic Segmentation Based on Vision Foundation Model

Changhong Hou, Junchuan Yu☨, Daqing Ge, Liu Yang, Laidian Xi, Yunxuan Pang, and Yi Wen

☨corresponding author: yujunchuan@mail.cgs.gov.cn

Refer to our paper for more details.

Updates

  • [2024.3.3] Paper submission.
  • [2024.3.15] Dataset uploaded.
  • [2024.4.20] Code release.

Dataset

  • Landslide4Sense: contains 3,799 training samples. For preprocessing of Landslide4Sense, refer to the official demo.
  • Bijie Landslide dataset: contains 770 landslide images within Bijie City in northwestern Guizhou Province, China.
    • Save the file in your download directory:
      • /data/{Bijie,Landslide4Sense}/{image,label}
  • You can also directly download our preprocessed data (a minimal loading sketch follows this list): Baidu Disk / Google Drive
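
The preprocessed archives are expected to follow the directory layout above. As a quick sanity check, the minimal sketch below reads one image/label pair with h5py; the file names and HDF5 keys are assumptions based on the common Landslide4Sense convention of .h5 patches stored under 'img' and 'mask' keys, so adjust them to match your download.

```python
# Minimal sketch: inspect one preprocessed Landslide4Sense sample.
# File names and the 'img'/'mask' keys are assumptions; adapt to your data.
import h5py
import numpy as np

image_path = "data/Landslide4Sense/image/image_1.h5"  # hypothetical file name
label_path = "data/Landslide4Sense/label/mask_1.h5"   # hypothetical file name

with h5py.File(image_path, "r") as f:
    image = np.asarray(f["img"])   # assumed key: multispectral patch
with h5py.File(label_path, "r") as f:
    mask = np.asarray(f["mask"])   # assumed key: landslide mask

print("image:", image.shape, image.dtype)
print("mask:", mask.shape, "classes:", np.unique(mask))
```

Checking shapes, dtypes, and the unique label values up front is a quick way to confirm that the data matches what the training code expects.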

Code

Structure of the proposed TransLandSeg and Segment Anything Model (SAM)

Click the links below to download the checkpoint for the corresponding model type.

  • ViT-L SAM model: Official Link

    • Save the file in your download directory:
      • /pretrained/sam_vit_l_0b3195.pth
  • TransLandSeg model: Baidu Disk / Google Drive (a checkpoint-loading sketch follows this list)

    • Save the file in your download directory:
      • /checkpoint/{Bijie.pth.tar,Landslide4Sense.pth.tar}
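
Before training or evaluation, it can help to confirm that a downloaded checkpoint loads correctly. The sketch below only inspects the archive on CPU; the 'state_dict' key is an assumption about how the .pth.tar file is organised, so print the top-level keys first and adapt as needed.

```python
# Minimal sketch: inspect a downloaded TransLandSeg checkpoint.
import torch

ckpt_path = "checkpoint/Bijie.pth.tar"  # or checkpoint/Landslide4Sense.pth.tar

# map_location="cpu" keeps the inspection independent of GPU availability.
checkpoint = torch.load(ckpt_path, map_location="cpu")

if isinstance(checkpoint, dict):
    print("top-level keys:", list(checkpoint.keys()))
    # 'state_dict' is an assumption; fall back to the whole object otherwise.
    state_dict = checkpoint.get("state_dict", checkpoint)
    print(len(state_dict), "entries in the state dict")
else:
    print("checkpoint object of type:", type(checkpoint))
```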
  • The supporting libraries and their pinned versions are listed below (a version-check sketch follows the table):
| Package       | Version  |
| ------------- | -------- |
| GDAL          | 3.6.2    |
| h5py          | 3.9.0    |
| matplotlib    | 3.7.2    |
| numpy         | 1.24.1   |
| opencv-python | 4.8.0.74 |
| scipy         | 1.10.1   |
| tensorboard   | 2.10.1   |
| tensorboardX  | 2.6.2.2  |
| torch         | 1.12.1   |
| torchsummary  | 1.5.1    |
| torchvision   | 0.13.1   |
| tqdm          | 4.65.0   |
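
To check that a local environment matches these pinned versions, the short sketch below queries installed distributions with the standard-library importlib.metadata (Python 3.8+). Package names follow PyPI; GDAL installed through conda or OSGeo may report differently, so treat a mismatch there as a hint rather than an error.

```python
# Minimal sketch: compare installed package versions against the pinned table.
from importlib.metadata import version, PackageNotFoundError

pinned = {
    "GDAL": "3.6.2", "h5py": "3.9.0", "matplotlib": "3.7.2",
    "numpy": "1.24.1", "opencv-python": "4.8.0.74", "scipy": "1.10.1",
    "tensorboard": "2.10.1", "tensorboardX": "2.6.2.2", "torch": "1.12.1",
    "torchsummary": "1.5.1", "torchvision": "0.13.1", "tqdm": "4.65.0",
}

for name, expected in pinned.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        installed = "not installed"
    flag = "" if installed == expected else "  <-- differs"
    print(f"{name:15s} expected {expected:10s} found {installed}{flag}")
```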

Acknowledgement

  • SAM. A new vision foundation model from Meta AI.
  • Heywhale. Provided the computing platform for this work.

If you're using TransLandSeg in your research or applications, please cite using this BibTeX:

@article{hou2024translandseg,
  title={TransLandSeg: A Transfer Learning Approach for Landslide Semantic Segmentation Based on Vision Foundation Model},
  author={Changhong Hou and Junchuan Yu and Daqing Ge and Liu Yang and Laidian Xi and Yunxuan Pang and Yi Wen},
  year={2024}
}