This repository contains code from the paper "Smart-Tree: Neural Medial Axis Approximation of Point Clouds for 3D Tree Skeletonization".
The code provided implements a deep-learning-based skeletonization method for point clouds.
*Left to right: input point cloud, mesh output, skeleton output.*
Please follow the instructions to download the data from this link.
First, make sure you have Conda installed, as well as mamba. This ensures the environment is created within a reasonable timeframe.
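If mamba is not yet available, one common way to add it to the base Conda environment is the following (a general Conda command, not specific to this repository):

```bash
# Install mamba into the base environment from conda-forge.
conda install -n base -c conda-forge mamba
```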
To install smart-tree, run:

```bash
bash create-env.sh
```

Then activate the environment with:

```bash
conda activate smart-tree
```
To train the model, open `smart_tree/conf/training.yaml`.
You will need to update (alternatively, these can be overridden with Hydra; see the example after the training command below):
- `training.dataset.json_path`: the path where your `smart_tree/conf/tree-split.json` is stored.
- `training.dataset.directory`: the path where you downloaded the data (you can choose whether to train on the data with foliage or without, based on the directory you supply).
You can also experiment with and adjust the hyper-parameter settings.
The model can then be trained with:

```bash
train-smart-tree
```
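Alternatively, assuming Hydra's standard `key=value` override syntax mentioned above, the dataset settings can be supplied directly on the command line instead of editing the config (the paths below are placeholders):

```bash
# Placeholder paths: substitute the locations used on your machine.
train-smart-tree \
  training.dataset.json_path=/path/to/smart_tree/conf/tree-split.json \
  training.dataset.directory=/path/to/downloaded/data
```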
The trained model and its best weights will be stored in the generated outputs directory.
We supply two different models with weights:

- `noble-elevator-58`: contains branch/foliage segmentation.
- `peach-forest-65`: trained only on points from the branching structure.
If you wish to run smart-tree using your own weights, you will need to update the model paths in the `tree-dataset.yaml` config.
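A minimal sketch of that edit is shown below; the key names here are hypothetical placeholders, so check your copy of `smart_tree/conf/tree-dataset.yaml` for the actual structure:

```yaml
# Hypothetical sketch only: the real key names in tree-dataset.yaml may differ.
model:
  weights_path: /path/to/your/weights.pt  # placeholder path to your trained weights
```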
To run smart-tree, use:

```bash
run-smart-tree +path=cloud_path
```

where `cloud_path` is the path of the point cloud you want to skeletonize.
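For example, assuming your input cloud is stored as a `.ply` file (the path is a placeholder):

```bash
run-smart-tree +path=/path/to/my_tree.ply
```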
Skeletonization-specific parameters can be adjusted within the `smart_tree/conf/tree-dataset.yaml` config.
Please use the following BibTeX entry to cite our work:

```bibtex
@inproceedings{dobbs2023smart,
  title={Smart-Tree: Neural Medial Axis Approximation of Point Clouds for 3D Tree Skeletonization},
  author={Dobbs, Harry and Batchelor, Oliver and Green, Richard and Atlas, James},
  booktitle={Iberian Conference on Pattern Recognition and Image Analysis},
  pages={351--362},
  year={2023},
  organization={Springer}
}
```
Should you have any questions, comments, or suggestions, please contact: harry.dobbs@pg.canterbury.ac.nz