This repo contains code for training and inference of our model for connected polygon detection in RGB images. See our arXiv paper "Polygon Detection for Room Layout Estimation using Heterogeneous Graphs and Wireframes" for more details.
Conceptual image of how the model works with wireframes and polygons.
```bash
git clone --recurse-submodules git@github.com:DavidGillsjo/polygon-HGT.git
```
Alternatively:
```bash
git clone git@github.com:DavidGillsjo/polygon-HGT.git
git submodule init
git submodule update
```
To run the code you need an NVIDIA GPU with the NVIDIA driver installed. Furthermore, you need to install Docker and GPU support for Docker with the NVIDIA Container Toolkit; see the Docker and NVIDIA Container Toolkit installation guides.
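Before building, it may be useful to verify that Docker can access your GPU. This is a generic sanity check, not part of this repo; the CUDA image tag below is only an example and may need adjusting:
```bash
# Generic check that Docker GPU support works; any CUDA image with nvidia-smi will do.
docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```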
We supply a Dockerfile to build a Docker image that can run the code.
First, modify line 7 of the Dockerfile so that `gpu_arch` matches your GPU architecture. See for example this blog post to find your architecture code.
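If you are unsure of the code, recent NVIDIA drivers can report the compute capability directly. This is a generic `nvidia-smi` query, not something the repo provides, and the mapping to an arch code (e.g. 8.6 corresponding to `sm_86`) depends on how `gpu_arch` is used in the Dockerfile:
```bash
# Query the GPU's compute capability (requires a reasonably recent driver).
nvidia-smi --query-gpu=compute_cap --format=csv,noheader
```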
Then build and run:
```bash
cd docker_cuspatial
./build.sh
./run.sh
```
You will find your `HOME` directory mounted to `/host_home`.
If you require `sudo` to run docker, set the environment variable `SUDO` like this:
```bash
cd docker_cuspatial
SUDO=1 ./build.sh
SUDO=1 ./run.sh
```
If you have problems building, you may use the uploaded Docker Hub image by running:
```bash
cd docker_cuspatial
./dockerhub_run.sh
```
This is not tested and you may run into issues with the environment variable `gpu_arch`. A possible workaround is to set it manually prior to compiling the code with `build.sh`.
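A minimal sketch of that workaround, assuming `gpu_arch` is a plain environment variable read by the build and that `sm_86` is the right code for your card (adjust as needed):
```bash
# Hypothetical workaround: set gpu_arch manually before compiling.
export gpu_arch=sm_86  # example value only; use your GPU's architecture code
./build.sh
```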
Add the `parsing` folder to the Python path for correct imports:
```bash
source init_env.sh
```
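If you need to set this up manually, the effect is roughly the following. This is an assumption about what `init_env.sh` does, not a copy of it:
```bash
# Rough manual equivalent (assumption): put the repo's parsing folder on PYTHONPATH.
export PYTHONPATH="$(pwd)/parsing:${PYTHONPATH}"
```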
Here you find the pre-trained models to reproduce the results from the paper. You may, for example, put the model weights in the `data` folder; the rest of this README assumes you did.
To generate the annotations:
- Download Structured3D, see the official website. You may use this script.
- Run `python3 preprocessing/structured3D2planes_simple.py --help` for instructions.
To run the network, some C code needs compiling:
```bash
./build.sh
```
There are a number of ways to run inference, see `python3 scripts/test.py --help` for details.
To run on the test set, do
```bash
cd scripts
python3 test.py \
    --config-file ../config-files/Pred-plane-from-GT-GNN.yaml \
    CHECKPOINT ../data/model_polygon_hgt_simulated.pth \
    OUTPUT_DIR ../runs/test
```
To run on the validation data, add the flag `--val`.
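For example, combining it with the test-set command above (the flag is placed before the config overrides; adjust if your setup differs):
```bash
cd scripts
python3 test.py \
    --config-file ../config-files/Pred-plane-from-GT-GNN.yaml \
    --val \
    CHECKPOINT ../data/model_polygon_hgt_simulated.pth \
    OUTPUT_DIR ../runs/test
```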
To run on a set of images:
```bash
cd scripts
python3 test.py \
    --config-file ../config-files/Pred-simple-plane-S3D-GNN.yaml \
    --img-folder <my-image-folder> \
    CHECKPOINT ../data/model_polygon_hgt_joint.pth
```
The result will be placed in `<my-image-folder>/test`.
Each model is trained with its respective config file, which exists in both a simulated-wireframe and a joint-detection version. The config files are placed in the folder `config-files`.
Model / Variant | Simulated wireframe | Joint detection |
---|---|---|
Cycle based | Pred-plane-from-GT | Pred-simple-plane-S3D |
Polygon-HGT | Pred-plane-from-GT-GNN | Pred-simple-plane-S3D-GNN |
To train, run for example:
```bash
cd scripts
python3 train.py \
    --config-file ../config-files/Pred-plane-from-GT-GNN.yaml
```
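To train a different variant, point `--config-file` at the corresponding entry in the table above. For example, the cycle-based model with joint detection, assuming it follows the same `.yaml` naming pattern as the other configs:
```bash
cd scripts
python3 train.py \
    --config-file ../config-files/Pred-simple-plane-S3D.yaml
```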
To monitor the training we have used W&B, but there is also some support for TensorBoard. W&B may be configured here.
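If you use W&B, you typically need to authenticate once before runs can be logged. This is standard usage of the `wandb` CLI, not specific to this repo:
```bash
# One-time authentication with your W&B API key (generic wandb CLI usage).
wandb login
```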
If you use this work in your research, please cite:
```bibtex
@misc{gillsjö2023polygon,
      title={Polygon Detection for Room Layout Estimation using Heterogeneous Graphs and Wireframes},
      author={David Gillsjö and Gabrielle Flood and Kalle Åström},
      year={2023},
      eprint={2306.12203},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```