# TwinLiteNet ONNX Model Inference with OpenCV DNN

This repository contains a C++ implementation for running inference with the TwinLiteNet model using OpenCV's DNN module. TwinLiteNet is a state-of-the-art lane detection and drivable area segmentation model. This implementation supports both CUDA and CPU inference, selectable through build options.

## Detection Results

## Acknowledgment 🌟

I would like to express sincere gratitude to the creators of the TwinLiteNet model for their remarkable work. Their open-source contribution has had a profound impact on the community and has paved the way for numerous applications in autonomous driving, robotics, and beyond. Thank you for your exceptional work.


## Project Structure

```
.
├── CMakeLists.txt
├── LICENSE
├── README.md
├── assets
├── include
│   └── twinlitenet_dnn.hpp
├── models
│   └── best.onnx
└── src
    ├── main.cpp
    └── twinlitenet_dnn.cpp
```

## Requirements

- OpenCV 4.8+

## Build Options

- **CUDA Inference:** To enable CUDA support for GPU acceleration, build with the `-DENABLE_CUDA=ON` CMake option.
- **CPU Inference:** For CPU-based inference, no additional options are required.
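A minimal `CMakeLists.txt` implementing such a switch could look like the sketch below. The target name and source list are illustrative assumptions; refer to the repository's actual `CMakeLists.txt` for the real configuration.

```cmake
cmake_minimum_required(VERSION 3.10)
project(twinlitenet_dnn)

# Build-time switch: cmake -DENABLE_CUDA=ON ..
option(ENABLE_CUDA "Use the CUDA backend of OpenCV DNN" OFF)

find_package(OpenCV 4.8 REQUIRED)

add_executable(main src/main.cpp src/twinlitenet_dnn.cpp)
target_include_directories(main PRIVATE include)
target_link_libraries(main PRIVATE ${OpenCV_LIBS})

# Expose the choice to the code so it can select the CUDA backend at runtime.
if(ENABLE_CUDA)
    target_compile_definitions(main PRIVATE ENABLE_CUDA)
endif()
```

Note that the CUDA path only works if OpenCV itself was built with CUDA DNN support.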

## Usage

1. Clone this repository.
2. Build the project using CMake with your preferred build options:

   ```sh
   mkdir build
   cd build
   cmake -DENABLE_CUDA=ON ..
   make -j8
   ```

3. Run `./main` and enjoy accurate lane detection and drivable area results!

## License

This project is licensed under the MIT License. Feel free to use it in both open-source and commercial applications.

## Extras