TwinLiteNet ONNX Model Inference with ONNX Runtime

This repository provides a C++ implementation for running inference with the TwinLiteNet model using ONNX Runtime. TwinLiteNet is a lightweight model for lane detection and drivable area segmentation. The implementation supports both CUDA and CPU inference through build options.

(Demo images: lane detection and drivable area segmentation results.)

Acknowledgment 🌟

I would like to express sincere gratitude to the creators of the TwinLiteNet model for their remarkable work. Their open-source contribution has had a profound impact on the community and has paved the way for numerous applications in autonomous driving, robotics, and beyond. Thank you for your exceptional work.

Project Structure

The project has the following structure:


├── CMakeLists.txt
├── LICENSE
├── README.md
├── assets/
├── images/
├── include/
│   └── twinlitenet_onnxruntime.hpp
├── models/
│   └── best.onnx
└── src/
    ├── main.cpp
    └── twinlitenet_onnxruntime.cpp

Requirements

  • CMake and a C++ compiler (the project is built from the provided CMakeLists.txt)
  • ONNX Runtime (C++ API)
  • CUDA toolkit (optional, only needed when building with -DENABLE_CUDA=ON)

Build Options

  • CUDA Inference: To enable CUDA support for GPU acceleration, build with the -DENABLE_CUDA=ON CMake option.
  • CPU Inference: For CPU-based inference, no additional options are required.
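
When -DENABLE_CUDA=ON is translated into a compile definition, the CUDA execution provider can be appended to the session options before the session is created, while the CPU provider remains the default otherwise. A minimal sketch of that pattern follows; the ENABLE_CUDA preprocessor macro mirrors the CMake option and is an assumption about how the flag is wired through, not a verified detail of this project.

```cpp
#include <onnxruntime_cxx_api.h>

// Build a session that uses CUDA when the project was configured with
// -DENABLE_CUDA=ON (assumed to define ENABLE_CUDA) and plain CPU otherwise.
Ort::Session makeSession(Ort::Env& env, const char* model_path) {
    Ort::SessionOptions options;
#ifdef ENABLE_CUDA
    OrtCUDAProviderOptions cuda_options{};
    cuda_options.device_id = 0;                        // run on GPU 0
    options.AppendExecutionProvider_CUDA(cuda_options);
#endif
    // With no provider appended, ONNX Runtime falls back to its default
    // CPU execution provider.
    return Ort::Session(env, model_path, options);
}
```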

Usage

  1. Clone this repository.
  2. Build the project using CMake with your preferred build options:
mkdir build
cd build
cmake -DENABLE_CUDA=ON ..
make -j8
  3. Run ./main and enjoy accurate lane detection and drivable area segmentation results!

License

This project is licensed under the MIT License. Feel free to use it in both open-source and commercial applications.

Extras