# Formula Student Driverless inference with ZED and ONNX

This project uses an EfficientDet-D0 model exported to ONNX and a ZED stereo camera to detect and range traffic cones. The model is a self-trained EfficientDet-D0 with no modifications other than adjusted anchor sizes. It is intended only for showcasing and debugging.
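The ranging step can be sketched as follows. This is an illustration, not the project's actual code: given a detected cone's bounding-box centre `(u, v)`, a depth value from the ZED depth map, and the left camera's pinhole intrinsics `(fx, fy, cx, cy)`, the cone's 3D position follows from the pinhole camera model. All names and numbers below are made up for the example.

```python
# Hedged sketch: back-project a detection's pixel centre to a 3D point
# using pinhole intrinsics. The parameter names (fx, fy, cx, cy) are
# illustrative; the real values come from the ZED camera calibration.

def range_cone(u, v, depth, fx, fy, cx, cy):
    """Return (X, Y, Z) in the camera frame for pixel (u, v) at `depth` metres."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: a cone centred at pixel (700, 400), 5 m away,
# with made-up intrinsics for a 1280x720 image.
point = range_cone(700, 400, 5.0, fx=700.0, fy=700.0, cx=640.0, cy=360.0)
```

In the real pipeline the depth value would come from the ZED SDK's depth map at the box centre rather than being passed in directly.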

## Performance

Performance depends heavily on the execution provider and the hardware used. On an RTX 2060, inference reaches roughly 55 ± 5 Hz, while a Jetson Xavier NX only reaches about 30 Hz. This could probably be improved with the ONNX TensorRT execution provider, or by dropping ONNX entirely and switching to plain TensorRT; there is certainly room for optimization.
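Measuring such an inference rate is straightforward: time a batch of repeated calls and divide. A minimal sketch, where `run_inference` stands in for the actual ONNX Runtime session call (here replaced by a dummy workload):

```python
import time

def measure_hz(run_inference, n=100):
    """Time `n` calls of `run_inference` and return the average rate in Hz."""
    start = time.perf_counter()
    for _ in range(n):
        run_inference()
    elapsed = time.perf_counter() - start
    return n / elapsed

# Dummy workload standing in for a real session.run(...) call.
hz = measure_hz(lambda: sum(range(1000)))
```

Averaging over many runs smooths out the first-call warm-up cost that GPU execution providers typically incur.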

## Installation & Setup

Running this code requires ONNX Runtime and the ZED SDK, as well as OpenCV, CUDA & cuDNN. My installation process is briefly described below. Other approaches may work too, or mine may not, depending on your existing environment.

An example .svo file can be downloaded from here: https://drive.google.com/file/d/1oHrZGJ2r6h4mJaQN11YjG5cwEa9o6X1p/view?usp=sharing

Build OpenCV:

- Install the following packages if missing:
  - `sudo apt install g++ cmake make git libgtk2.0-dev pkg-config`
- Download the OpenCV source code:
  - `git clone https://github.com/opencv/opencv.git`
- Create a build directory...
  - `mkdir -p build && cd build`
- ...and build:
  - `cmake ../opencv`
  - `make -j16` (or your core count)
  - `sudo make install`

Install CUDA & cuDNN:

Make sure to use versions compatible with your preferred version of the ZED SDK.

Install ZED SDK:

Build ONNX Runtime:

- Install CUDA and cuDNN (see above).
- Clone onnxruntime: https://github.com/microsoft/onnxruntime
  - Important: clone from a release tag, e.g. https://github.com/microsoft/onnxruntime/tree/v1.13.1
- Build with:
  - `./build.sh --use_cuda --cudnn_home <CUDNN HOME PATH> --cuda_home <CUDA PATH> --parallel --build_shared_lib --config=Release`
  - `<CUDNN HOME PATH>` is probably something like /usr/lib/x86_64-linux-gnu
  - `<CUDA PATH>` is probably something like /usr/local/cuda-11.7
  - `--parallel` speeds up compilation; drop it if you run out of memory.
- Set the `ONNXRUNTIME_ROOT_PATH` path in CMakeLists.txt to your build directory.
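The last step might look roughly like the fragment below. This is a sketch of how `ONNXRUNTIME_ROOT_PATH` could be consumed, not the project's actual CMakeLists.txt; the target name `cone_inference` and the library subpath are illustrative and may differ in your build.

```cmake
# Illustrative sketch: point ONNXRUNTIME_ROOT_PATH at your onnxruntime
# build directory, then use it for headers and the shared library.
set(ONNXRUNTIME_ROOT_PATH /path/to/onnxruntime)

include_directories(${ONNXRUNTIME_ROOT_PATH}/include/onnxruntime/core/session)

# `cone_inference` is a placeholder target name; the Release subpath
# matches the --config=Release build from the step above.
target_link_libraries(cone_inference
    ${ONNXRUNTIME_ROOT_PATH}/build/Linux/Release/libonnxruntime.so)
```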

For further information, see: https://onnxruntime.ai/docs/build/eps.html#cuda