VEGAN: Unsupervised Meta-learning of Figure-Ground Segmentation via Imitating Visual Effects

A clean and readable PyTorch implementation of VEGAN (https://arxiv.org/abs/1812.08442).

Prerequisites

The code is intended to work with Python 3.6.x; it has not been tested with earlier versions.

Follow the instructions at pytorch.org to install PyTorch for your setup.

Training

1. Set up the dataset

First, download a dataset; the MSRA10K dataset is recommended. Unzip the archive and place it under the datasets folder:

mkdir datasets
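
The training script reads the images from ./datasets/. As a point of reference, here is a minimal sketch of how such a folder could be loaded with a PyTorch Dataset; the class name, folder layout, and transforms are assumptions for illustration, not the repository's actual loader.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class MSRA10KImages(Dataset):
    """Hypothetical loader: reads all .jpg images from ./datasets/MSRA10K."""

    def __init__(self, root="./datasets/MSRA10K", size=256):
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root) if f.endswith(".jpg")
        )
        self.transform = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Images only: training is unsupervised, so no ground-truth masks are read.
        return self.transform(Image.open(self.paths[idx]).convert("RGB"))
```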

2. Train!

python train.py --cuda

This command starts a training session on the MSRA10K images under the ./datasets/ directory. The default hyperparameters can be changed from the command line.

If you don't own a GPU, remove the --cuda option, although I advise you to get one!
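
The --cuda flag follows the usual PyTorch pattern of moving the model and tensors to the GPU. A minimal sketch of that pattern is below; the flag name matches the script, but the surrounding code is illustrative, not the repository's actual argument handling.

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--cuda", action="store_true", help="run on GPU if available")
args = parser.parse_args()

# Standard PyTorch device selection keyed on the flag.
device = torch.device("cuda" if args.cuda and torch.cuda.is_available() else "cpu")
model = torch.nn.Conv2d(3, 1, 3, padding=1).to(device)  # stand-in for the real generator
batch = torch.randn(4, 3, 256, 256, device=device)
print(model(batch).shape)
```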

There are three visual effects to choose from: black-background, color-selectivo, and defocus.

python train.py --visual_effect color-selectivo --cuda
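
Each effect composites the predicted figure-ground mask with the input image. The sketch below shows what the three effects could plausibly look like; the formulas are assumptions inferred from the effect names, not the repository's exact implementation.

```python
import torch
import torchvision.transforms.functional as TF

def apply_effect(image, mask, effect):
    """image: (3, H, W) in [0, 1]; mask: (1, H, W) soft figure-ground mask."""
    if effect == "black-background":
        # Keep the figure, zero out the background.
        return image * mask
    if effect == "color-selectivo":
        # Figure stays in color, background turns grayscale.
        gray = image.mean(dim=0, keepdim=True).expand_as(image)
        return image * mask + gray * (1 - mask)
    if effect == "defocus":
        # Figure stays sharp, background is Gaussian-blurred.
        blurred = TF.gaussian_blur(image, kernel_size=21, sigma=5.0)
        return image * mask + blurred * (1 - mask)
    raise ValueError(f"unknown effect: {effect}")
```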

3. Results

Examples of the generated outputs (default params, MSRA10K dataset):

Input image --> Output mask --> Output image --> Ground-truth image

