
IDA: Improved Data Augmentation Applied to Salient Object Detection

Official code for the SIBGRAPI 2020 paper "IDA: Improved Data Augmentation Applied to Salient Object Detection"

Our previous method is available at https://github.com/VRI-UFPR/ANDA

Getting started

Since this repository uses a submodule, we recommend cloning it with:

git clone --recurse-submodules https://github.com/VRI-UFPR/IDA.git

Requirements

We recommend conda (or, alternatively, Miniconda) for Python environment management. Run the create_env.sh script to create the necessary environments.

The Res2Net-PoolNet is only necessary if you wish to replicate the experiments performed in our paper.

Pretrained models

For the pretrained models of DeepFillv2, check the DeepFillv2 Official Code repository.

Step-by-step usage

You can use your own dataset, but we suggest first running the following example.

The script is intended as an example of all the required steps. It will:

  • Download and extract the DUTS-TR dataset and prepare the folders.
  • Create the path mapping files paths_input_mask_output.txt and paths_input_mask.txt.
  • Create the two conda environments, genInpaint and ida, and install the necessary packages in each.
  • Check for the DeepFillv2 pretrained model; if you have already downloaded it and placed it in generative_inpainting/model_logs, the script will proceed and generate the inpainted images for the DUTS-TR dataset at DUTS-TR/DUTS-TR-Inpainted.
  • Run computeKnn.py, which accepts input parameterization; computing the features will take a while, and the background_mapping will be created at generate_samples.
  • Run ida.py; the created samples will be available at generate_samples/output/.
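The path mapping files pair each image with its mask (and, for paths_input_mask_output.txt, an output location). As a rough sketch of what such a mapping looks like — assuming a hypothetical DUTS-TR-style layout where masks share the image's file stem; this is illustrative, not the repository's actual script:

```python
import os

def write_path_mappings(image_dir, mask_dir, output_dir,
                        mapping_with_output="paths_input_mask_output.txt",
                        mapping_without_output="paths_input_mask.txt"):
    """Write one line per image: 'image_path mask_path [output_path]'.

    Assumes each mask shares the image's file stem (e.g. foo.jpg -> foo.png).
    """
    with open(mapping_with_output, "w") as f_out, \
         open(mapping_without_output, "w") as f_in:
        for name in sorted(os.listdir(image_dir)):
            stem, _ = os.path.splitext(name)
            image_path = os.path.join(image_dir, name)
            mask_path = os.path.join(mask_dir, stem + ".png")
            output_path = os.path.join(output_dir, stem + ".png")
            # One entry per line; the second file omits the output column.
            f_in.write(f"{image_path} {mask_path}\n")
            f_out.write(f"{image_path} {mask_path} {output_path}\n")
```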

  • (Optional) you can run python generate_samples/computeKnn.py --help to check the input parameterization.
  • (Optional) you can run python generate_samples/ida.py --help to check the input parameterization.
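Conceptually, the k-NN step pairs each image with its most similar images by comparing feature vectors, which is what drives the background mapping. A minimal NumPy sketch of that idea, using made-up 2-D feature vectors (the actual computeKnn.py extracts real image features and exposes its own parameters):

```python
import numpy as np

def knn_background_mapping(features, k=3):
    """For each feature vector, return the indices of its k nearest
    neighbors (excluding itself) by Euclidean distance."""
    # Pairwise distances via broadcasting: shape (n, n).
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)  # never match an image to itself
    # Sort each row and keep the k closest indices.
    return np.argsort(dist, axis=1)[:, :k]

# Example: 4 images described by 2-D feature vectors, in two clusters.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
mapping = knn_background_mapping(feats, k=1)
# Each image maps to the other member of its cluster:
# mapping == [[1], [0], [3], [2]]
```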

Special thanks

We would like to thank the authors of Generative Image Inpainting with Contextual Attention for the code of DeepFillv2 used in our work and the authors of Res2Net: A New Multi-scale Backbone Architecture for the Res2Net-PoolNet implementation.

Citing

If you found this code useful for your research, please cite:

@INPROCEEDINGS{ruiz2020ida,
  author={D. V. {Ruiz} and B. A. {Krinski} and E. {Todt}},
  booktitle={2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)}, 
  title={IDA: Improved Data Augmentation Applied to Salient Object Detection}, 
  year={2020},
  volume={},
  number={},
  pages={210-217},
  doi={10.1109/SIBGRAPI51738.2020.00036}
}

DISCLAIMER:

This is research code, so compatibility issues may occur. This repository contains the following submodule: DeepFillv2 Official Code
