Installation

Alec Krawciw edited this page Jun 9, 2024 · 11 revisions

Sensor & Robot

We try to design our navigation algorithms to be generalizable to a wide variety of robots and sensors. Have a look at our VT&R3 papers to see which sensors and robots we have used. This guide focuses on installing the VT&R3 navigation system, which communicates with sensors and robots via ROS2 interfaces. You are responsible for setting up your own sensors and robots.

Directory Structure

The following environment variables, and the directories they point to, are assumed to be present. Feel free to change them to your own preference. Append these lines to the end of your ~/.bashrc:

export VTRROOT=~/ASRL/vtr3         # (INTERNAL default) root directory
export VTRSRC=${VTRROOT}/src       # source code (this repo)
export VTRDATA=${VTRROOT}/data     # datasets
export VTRTEMP=${VTRROOT}/temp     # default output directory
export VTRMODELS=${VTRROOT}/models # .pt models for TorchScript

Make sure these directories exist.

mkdir -p ${VTRSRC} ${VTRDATA} ${VTRTEMP} ${VTRMODELS}
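To confirm the layout is in place, the directories can be checked with a quick sanity loop (a sketch assuming the exports above; the default VTRROOT is used as a fallback if the variable is not set):

```shell
# Check that each expected workspace directory exists.
# Falls back to the default root if VTRROOT is not exported.
VTRROOT="${VTRROOT:-$HOME/ASRL/vtr3}"
for d in "${VTRROOT}/src" "${VTRROOT}/data" "${VTRROOT}/temp" "${VTRROOT}/models"; do
  if [ -d "${d}" ]; then
    echo "ok: ${d}"
  else
    echo "missing: ${d}"
  fi
done
```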

Clone this repository and its submodules into VTRSRC:

git clone --recurse-submodules git@github.com:utiasASRL/vtr3.git ${VTRSRC}

System Dependencies

This Dockerfile can be used as a reference for installing the system dependencies. The easiest approach is to build a Docker image from this Dockerfile and then install VT&R3 inside a container. Note that you must change the environment variables in the Dockerfile to match your own setup above.

The CUDA architecture should match your machine. With newer NVIDIA drivers, you can run nvidia-smi --query-gpu=compute_cap --format=csv,noheader to query your GPU's compute capability. With older NVIDIA drivers, look up your GPU in this link to find the capability. Use the value without the dot (e.g. "7.5" -> "75"). You can then build the Docker image using the following example command, replacing 86 with the value determined above:
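Converting the reported capability string into the build-argument form is a one-liner; the sample value below stands in for real nvidia-smi output on a GPU machine:

```shell
# Strip the dot from a compute capability string, e.g. "8.6" -> "86".
# On a GPU machine, replace the echo with the nvidia-smi query above.
compute_cap="8.6"
CUDA_ARCH=$(echo "${compute_cap}" | tr -d '.')
echo "${CUDA_ARCH}"   # prints 86
```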

cd ${VTRSRC}
docker build -t vtr3 \
  --build-arg USERID=$(id -u) \
  --build-arg GROUPID=$(id -g) \
  --build-arg USERNAME=$(whoami) \
  --build-arg HOMEDIR=${HOME} \
  --build-arg CUDA_ARCH="86" .

The CUDA_ARCH build argument is only required if you are building a CUDA version of the Dockerfile.

You can then run the GPU Docker image using the following command:

docker run -it --name vtr3 \
  --privileged \
  --network=host \
  --ipc=host \
  --gpus=all \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v ${VTRROOT}:${VTRROOT}:rw \
  -v /dev:/dev \
  vtr3

If you have built the CPU Dockerfile, remove --gpus=all.

Building VT&R3

Inside Docker container:

# source the ROS2 workspace
source /opt/ros/humble/setup.bash

# build and install all VTR3 packages
cd ${VTRSRC}/main
colcon build --symlink-install

# alternatively, to build only one pipeline, set the VTR_PIPELINE environment variable
VTR_PIPELINE=VISION colcon build --symlink-install

# build VTR3 web-based GUI
VTRUI=${VTRSRC}/main/src/vtr_gui/vtr_gui/vtr-gui
npm --prefix ${VTRUI} install ${VTRUI}
npm --prefix ${VTRUI} run build
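After colcon finishes, the new overlay must be sourced before running any VTR3 nodes. A sketch, assuming the install directory created by the colcon invocation above:

```shell
# Source the VTR3 overlay on top of the ROS2 underlay (path assumes the
# colcon build above, with ${VTRSRC} exported as described earlier).
VTR3_SETUP="${VTRSRC:-$HOME/ASRL/vtr3/src}/main/install/setup.bash"
if [ -f "${VTR3_SETUP}" ]; then
  source "${VTR3_SETUP}"
else
  echo "overlay not built yet: ${VTR3_SETUP}"
fi
```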