This repository is the official implementation of the "Embed Me If You Can: A Geometric Perceptron" ICCV 2021 paper.
📋 We achieved the original results with Python 3.6.5, `torch==1.2.0+cu92`, `scikit-learn==0.19.1`, `scipy==1.4.1`, `numpy==1.15.0`, and `matplotlib==3.0.3`, but we have since relaxed the requirements to facilitate installation.
To install the requirements, run:

```shell
pip install -r requirements.txt
```
The `mlgp_demo.ipynb` notebook demonstrates the training and evaluation of our MLGP model, as well as the analysis and visualization of its hidden units.
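The building block analyzed in the notebook is the hypersphere neuron, which scores a point against a learned hypersphere via a conformal-style embedding. The sketch below is illustrative only: the function names and the exact embedding convention are our own and may differ from the repository's implementation.

```python
import numpy as np

def embed_point(x):
    """Embed a point x in R^n into R^(n+2) (illustrative convention)."""
    return np.concatenate([x, [-1.0, -0.5 * np.dot(x, x)]])

def embed_sphere(c, r):
    """Embed a hypersphere with center c and radius r into R^(n+2)."""
    return np.concatenate([c, [0.5 * (np.dot(c, c) - r**2), 1.0]])

# The scalar product of the two embeddings equals -1/2 * (|x - c|^2 - r^2):
# positive for points inside the sphere, zero on it, negative outside.
x = np.array([0.5, 0.0, 0.0])
S = embed_sphere(np.zeros(3), 1.0)
score = embed_point(x) @ S  # > 0, since x lies inside the unit sphere
```

A layer of such neurons with learnable sphere parameters is, roughly, what the geometric perceptron generalizes from the classical linear perceptron.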
To train the model(s) in the paper, run:

```shell
python train.py
```
📋 Uncomment the relevant lines in `train.py` to select one of the models described in the paper (the defaults are the original hyperparameters). Adjust the `get_tetris_data` function arguments accordingly.
To evaluate one of the trained models on the corresponding test dataset, run:

```shell
python eval.py
```
📋 Depending on the chosen trained model, modify the `MODEL_PATH` variable and the `create_test_set` function arguments in the `eval.py` script (examples are provided).
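For orientation, evaluating a loaded checkpoint typically follows the standard PyTorch pattern. The snippet below is a generic sketch with a dummy stand-in model; the layer, shapes, and class count are placeholders, not the repository's actual architecture or checkpoint format.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a model loaded from the checkpoint at MODEL_PATH;
# the real architecture and loading code live in eval.py.
model = nn.Linear(12, 8)  # e.g. 4 points in R^3, flattened, -> 8 classes

model.eval()                          # switch to evaluation mode
with torch.no_grad():                 # no gradients needed at test time
    x = torch.randn(16, 12)          # dummy batch of flattened point sets
    preds = model(x).argmax(dim=1)   # predicted class index per sample
```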
You can find the pre-trained models in the `pretrained_models` folder.
The performance of the models on the test data in all experiments is presented in Table 1.
📋 Use the `train.py` script to train the models with the provided seeds, and use `eval.py` to evaluate them on the corresponding test sets.
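Reproducing the seeded runs requires seeding every random number generator in play. A common recipe is sketched below; the helper name and the exact set of calls are our own and may differ from what `train.py` does.

```python
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    """Seed the Python, NumPy, and PyTorch RNGs for reproducibility."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

set_seed(42)  # placeholder value; use one of the provided seeds
```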
```bibtex
@InProceedings{Melnyk_2021_ICCV,
    author    = {Melnyk, Pavlo and Felsberg, Michael and Wadenb\"ack, M\r{a}rten},
    title     = {Embed Me if You Can: A Geometric Perceptron},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {1276-1284}
}
```