
Edge-Guided Fusion and Motion Augmentation for Event-Image Stereo

This repository contains the source code for our ECCV 2024 paper:

Edge-Guided Fusion and Motion Augmentation for Event-Image Stereo

🔧 Requirements

The code has been tested with PyTorch 1.11 and CUDA 11.3.

conda env create -f environment.yaml
conda activate egeistereo
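After activating the environment, it can be worth confirming that the installed PyTorch matches the tested version. The snippet below is a small sanity-check sketch (the `version_tuple` / `matches_tested` helpers are hypothetical, not part of this repository):

```python
# Hypothetical helper to check the installed PyTorch against the tested
# version (1.11); version parsing needs no external dependencies.
def version_tuple(v: str) -> tuple:
    """Turn a version string like '1.11.0+cu113' into (1, 11, 0)."""
    core = v.split("+")[0]  # drop local build tags such as '+cu113'
    return tuple(int(p) for p in core.split(".") if p.isdigit())

TESTED_TORCH = (1, 11)  # major.minor the code was tested with

def matches_tested(installed: str) -> bool:
    """True when the installed version shares the tested major.minor."""
    return version_tuple(installed)[:2] == TESTED_TORCH

if __name__ == "__main__":
    try:
        import torch
        status = "tested" if matches_tested(torch.__version__) else "untested"
        print(f"torch {torch.__version__} ({status})")
        print("CUDA available:", torch.cuda.is_available())
    except ImportError:
        print("PyTorch is not installed; activate the egeistereo env first.")
```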

💾 Required Data

• For your convenience, we provide a download link with the expected directory structure. Please download and unzip it into the current directory.

By default, stereo_datasets.py will search for the dataset in the following locations.

MVSEC
├── indoor_flying_1
│   ├── disparity_image
│   ├── event0
│   ├── event1
│   ├── image0
│   ├── image1
│   └── timestamps.txt
├── indoor_flying_2
│   └── ...
└── indoor_flying_3
    └── ...
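A quick way to verify that the unzipped data matches this layout is to walk the expected paths. The helper below is only an illustrative sketch (the actual lookup logic lives in `stereo_datasets.py`; `missing_entries` and the listed entry names are assumptions based on the tree above):

```python
import os

# Entries expected inside each sequence folder, per the tree shown above.
EXPECTED = ["disparity_image", "event0", "event1",
            "image0", "image1", "timestamps.txt"]
SEQUENCES = ["indoor_flying_1", "indoor_flying_2", "indoor_flying_3"]

def missing_entries(root: str = "MVSEC") -> list:
    """Return the dataset paths that are absent under `root`."""
    missing = []
    for seq in SEQUENCES:
        for entry in EXPECTED:
            path = os.path.join(root, seq, entry)
            if not os.path.exists(path):
                missing.append(path)
    return missing
```

Running `missing_entries()` from the repository root should return an empty list when the dataset is in place.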

🤖 Demo

The pre-trained models are in the pre-trained folder.

You can demo a trained model on the MVSEC dataset. To predict stereo for split 1, run

python demo.py --path MVSEC --restore_ckpt pre-trained/EGEI-stereo_split1.pth --split 1 --mixed_precision --mode demo

Or for split 3:

python demo.py --path MVSEC --restore_ckpt pre-trained/EGEI-stereo_split3.pth --split 3 --mixed_precision --mode demo

The visualization results will be saved in the demo_visualization folder.
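For reference, disparity visualizations are typically produced by normalizing the predicted float disparity map to an 8-bit image. The function below is a hedged sketch of that common step, not the exact code in `demo.py`:

```python
import numpy as np

def disparity_to_uint8(disp: np.ndarray) -> np.ndarray:
    """Min-max normalize a float disparity map to [0, 255] uint8."""
    d = disp.astype(np.float64)
    span = d.max() - d.min()
    if span == 0:
        # Constant map: nothing to normalize, return all zeros.
        return np.zeros_like(d, dtype=np.uint8)
    return (255.0 * (d - d.min()) / span).round().astype(np.uint8)
```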

💻 Evaluation

To evaluate a trained model on the test set for split 1, run

python evaluate_stereo.py --path MVSEC --restore_ckpt pre-trained/EGEI-stereo_split1.pth --split 1 --mixed_precision --mode test

Or for split 3:

python evaluate_stereo.py --path MVSEC --restore_ckpt pre-trained/EGEI-stereo_split3.pth --split 3 --mixed_precision --mode test
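MVSEC stereo evaluations commonly report the mean disparity error and the one-pixel error rate over valid ground-truth pixels. The sketch below shows these standard metrics under that assumption; it is not the exact code in `evaluate_stereo.py`:

```python
import numpy as np

def mean_disparity_error(pred: np.ndarray, gt: np.ndarray,
                         valid: np.ndarray) -> float:
    """Average absolute disparity error over valid ground-truth pixels."""
    return float(np.abs(pred - gt)[valid].mean())

def one_pixel_error(pred: np.ndarray, gt: np.ndarray,
                    valid: np.ndarray, thresh: float = 1.0) -> float:
    """Fraction of valid pixels whose error exceeds `thresh` pixels."""
    return float((np.abs(pred - gt)[valid] > thresh).mean())
```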

🚀 Training

Our model is trained on a single NVIDIA RTX 3080 Ti GPU using the following commands. Training logs are written to runs/, which can be visualized with TensorBoard. For split 1:

python train_stereo.py --path MVSEC --split 1 --train_iters 12 --valid_iters 12 --mixed_precision --mode train

For split 3:

python train_stereo.py --path MVSEC --split 3 --train_iters 12 --valid_iters 12 --mixed_precision --mode train
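The flags shared by the commands above can be mirrored with a small `argparse` parser. This is only a sketch of the command-line interface implied by the examples; the real scripts define their own parsers, and the defaults here are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the flags used in the demo/eval/train commands above."""
    p = argparse.ArgumentParser(description="EGEI-Stereo flags (sketch)")
    p.add_argument("--path", default="MVSEC", help="dataset root directory")
    p.add_argument("--split", type=int, choices=[1, 3], required=True)
    p.add_argument("--restore_ckpt", help="checkpoint to load, if any")
    p.add_argument("--train_iters", type=int, default=12)
    p.add_argument("--valid_iters", type=int, default=12)
    p.add_argument("--mixed_precision", action="store_true")
    p.add_argument("--mode", choices=["train", "test", "demo"],
                   default="train")
    return p
```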

🎓 Citation

If you find this code useful for your research, please consider citing our paper:

@inproceedings{EGEI-Stereo,
  title={Edge-Guided Fusion and Motion Augmentation for Event-Image Stereo},
  author={Zhao, Fengan and Zhou, Qianang and Xiong, Junlin},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024}
}

💡 Acknowledgement

We thank the following excellent open-source projects for their inspiration and code: RAFT-Stereo, TEED, EFNet, SCSNet.
