## Installation

This repo was tested with Python 3.6, PyTorch 1.1.0, and CUDA 9.0, but it should run with any recent PyTorch version >=1.0.0 (0.4.x may also work).

```shell
python setup.py develop # OR python setup.py install
```
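To confirm your environment matches the versions above, a quick sanity check like the following can help (a minimal sketch; the exact version strings will depend on your install):

```python
import torch

# Print the installed PyTorch/CUDA versions to confirm they meet the
# requirements above (PyTorch >= 1.0.0 recommended).
print('PyTorch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('CUDA version:', torch.version.cuda)
```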

## Preparation

### Datasets

Currently, we support the Pittsburgh, Tokyo 24/7, and Tokyo Time Machine datasets. Instructions for accessing these datasets can be found here.

```shell
cd examples && mkdir data
```

Download the raw datasets and unzip them so that the directory tree looks like

```
examples/data
├── pitts
│   └── raw
│       ├── pitts250k_test.mat
│       ├── pitts250k_train.mat
│       ├── pitts250k_val.mat
│       ├── pitts30k_test.mat
│       ├── pitts30k_train.mat
│       ├── pitts30k_val.mat
│       └── Pittsburgh/
└── tokyo
    └── raw
        ├── tokyo247/
        ├── tokyo247.mat
        ├── tokyoTM/
        ├── tokyoTM_train.mat
        └── tokyoTM_val.mat
```
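If you want to verify the layout before training, a small check like the one below can help (a sketch; the file list simply mirrors the tree above):

```python
import os

# Check that the expected annotation files exist under examples/data.
root = 'examples/data'
expected = [
    'pitts/raw/pitts250k_test.mat',
    'pitts/raw/pitts250k_train.mat',
    'pitts/raw/pitts250k_val.mat',
    'pitts/raw/pitts30k_test.mat',
    'pitts/raw/pitts30k_train.mat',
    'pitts/raw/pitts30k_val.mat',
    'tokyo/raw/tokyo247.mat',
    'tokyo/raw/tokyoTM_train.mat',
    'tokyo/raw/tokyoTM_val.mat',
]
for rel in expected:
    path = os.path.join(root, rel)
    print(('OK      ' if os.path.isfile(path) else 'MISSING ') + path)
```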

### Use a Custom Dataset (Optional)

1. Download your own dataset and save it under

    ```
    examples/data/my_dataset
    └── raw/ # save the images here
    ```

2. Define your own dataset class following the template (a skeleton sketch is given after this list), and save it under `ibl/datasets/`, e.g. `ibl/datasets/my_dataset.py`.

3. Register it in `ibl/datasets/__init__.py`, e.g.

    ```python
    from .my_dataset import MyDataset # MyDataset is the class name

    # add your entry to the existing __factory dict
    __factory = {
        'my_dataset': MyDataset,
    }
    ```

4. (Optional) Read it by

    ```python
    from ibl.datasets import create
    dataset = create('my_dataset', 'examples/data/my_dataset') # you can use this command for debugging
    ```

5. Use it for training/testing by adding the argument `-d my_dataset` in the scripts.
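As a rough illustration of step 2, a dataset class might look like the skeleton below. This is a hypothetical sketch, not the repo's actual template; the constructor signature and the `train`/`val`/`test` attribute names are assumptions, so follow the existing files under `ibl/datasets/` for the real required interface.

```python
# ibl/datasets/my_dataset.py -- a hypothetical skeleton, NOT the repo's
# confirmed API; mirror the existing templates under ibl/datasets/ for the
# required base class and attributes.
import os.path as osp


class MyDataset(object):
    """Minimal sketch of a custom image-localization dataset.

    Assumes the images were saved under <root>/raw/ as described in step 1.
    """

    def __init__(self, root):
        self.root = root
        self.images_dir = osp.join(root, 'raw')
        # A real implementation would parse your annotations here and build
        # the query/database splits that the training scripts expect.
        self.load()

    def load(self):
        if not osp.isdir(self.images_dir):
            raise RuntimeError("Images not found at '{}'".format(self.images_dir))
        # Placeholder splits; replace with real parsing logic.
        self.train, self.val, self.test = [], [], []
```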

## Pre-trained Weights

```shell
mkdir logs && cd logs
```

After preparing the pre-trained weights, the file tree should be

```
logs
├── vd16_offtheshelf_conv5_3_max.pth # refer to (1)
└── vgg16_pitts_64_desc_cen.hdf5 # refer to (2)
```

### (1) ImageNet-pretrained weights for the VGG16 backbone from MatConvNet

The official repos of NetVLAD and SARE are based on MatConvNet. To reproduce their results, we need to load the same pretrained weights. Download the file directly from Google Drive and save it under `logs/`.
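Once downloaded, you can quickly check that the weight file loads (a minimal sketch; whether the file stores a raw state dict or a wrapper dict, and the exact parameter names inside it, are assumptions here):

```python
import torch

# Load the converted VGG16 weights on CPU and list a few parameter names
# and shapes to confirm the download is intact. The fallback handles the
# case where the file stores a wrapper dict instead of a raw state dict.
obj = torch.load('logs/vd16_offtheshelf_conv5_3_max.pth', map_location='cpu')
state_dict = obj.get('state_dict', obj) if isinstance(obj, dict) else obj
for name, value in list(state_dict.items())[:5]:
    print(name, getattr(value, 'shape', type(value)))
```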

### (2) Initial cluster centers for the VLAD layer

Note: this step is important, as the VLAD layer cannot work with random initialization.

The original cluster centers provided by NetVLAD are highly recommended. You can download the file directly from Google Drive and save it under `logs/`.
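To inspect the downloaded centers, something like the following works (a sketch using h5py; the dataset key inside the file is not documented here, so list the keys first rather than assuming one):

```python
import h5py

# List the top-level entries of the cluster-centers file; the exact key
# holding the centers is an assumption, so inspect the names first.
with h5py.File('logs/vgg16_pitts_64_desc_cen.hdf5', 'r') as f:
    for key in f.keys():
        item = f[key]
        print(key, item.shape if hasattr(item, 'shape') else '(group)')
```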

Alternatively, you can compute the centers yourself by running the script

```shell
./scripts/cluster.sh vgg16
```
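Conceptually, this step samples local descriptors from the training images and runs k-means to obtain the 64 VLAD centers. The sketch below illustrates that idea only; it uses scikit-learn rather than the repo's actual implementation, and `descriptors` is random stand-in data, not real conv5_3 features:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustration only: cluster local descriptors (one per row, e.g. 512-D
# conv5_3 features from the VGG16 backbone) into 64 centers for the VLAD
# layer. `descriptors` is a random stand-in for real extracted features.
rng = np.random.default_rng(0)
descriptors = rng.standard_normal((10000, 512)).astype(np.float32)

kmeans = KMeans(n_clusters=64, n_init=10, random_state=0).fit(descriptors)
centers = kmeans.cluster_centers_  # shape (64, 512), used to init VLAD
print(centers.shape)
```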