Support for yolov4 darknet #124
-
Hi, great SDK for inferencing on video streams in parallel! Do you also support YOLOv4 Darknet weights?
Replies: 1 comment 13 replies
-
Hi @neeraj-satyaki, you can load any model and use any framework. You can either use the ONNX Runtime (in that case you only provide the pre-process and post-process functions in your stage), or use PyTorch, TensorFlow, or anything else by also creating a process function (together with the pre- and post-process ones). This is an example of the first approach, directly loading a model in ONNX format: https://github.com/pipeless-ai/pipeless/tree/main/examples/onnx-candy
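For the ONNX path, the pre-process function mainly has to turn each incoming frame into the input tensor your ONNX export expects. A minimal sketch of that conversion in plain NumPy (the 416x416 input size, the normalization, and the `pre_process` name are assumptions for a typical YOLOv4 export, not Pipeless API):

```python
import numpy as np

def pre_process(frame: np.ndarray, size: int = 416) -> np.ndarray:
    """Resize (nearest-neighbor) and normalize an HWC uint8 frame into
    the NCHW float32 tensor a typical YOLOv4 ONNX export expects.
    Hypothetical helper: Pipeless only requires that your pre-process
    hook returns whatever tensor your model takes as input."""
    h, w, _ = frame.shape
    rows = np.arange(size) * h // size          # source row per output row
    cols = np.arange(size) * w // size          # source col per output col
    resized = frame[rows][:, cols]              # (size, size, 3)
    x = resized.astype(np.float32) / 255.0      # normalize to [0, 1]
    return np.transpose(x, (2, 0, 1))[None]     # (1, 3, size, size)

# Example: a fake 480x640 BGR frame
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
tensor = pre_process(frame)
print(tensor.shape)  # (1, 3, 416, 416)
```

The post-process function would then decode the raw model output (boxes, scores, class ids) in the same way, whatever format your particular export produces.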
Hi @neeraj-satyaki !
(This is just a productivity tip): it will be more comfortable for you to work if you also mount a volume at `/.venv`, because that way you will avoid installing the Python packages on every `docker run`, so you will work faster. See this section: https://www.pipeless.ai/docs/docs/v1/container#install-custom-python-packages

Regarding your issue, I just checked and it seems you are right: there is an issue in the TensorRT container image due to some packages not being found by `libonnxruntime_providers_tensorrt.so`. If you install Pipeless directly on your host it will work fine. I am going to create a new PR fixing the image now and will link it here. Thanks for th…
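For reference, the volume-mount tip above could look roughly like this (the image name, project path, and CLI arguments are assumptions; check the linked docs for the exact invocation):

```shell
# Hypothetical sketch: persist the container's Python virtualenv across runs.
PROJECT_DIR="$(pwd)/my-project"
VENV_DIR="$(pwd)/pipeless-venv"

mkdir -p "$VENV_DIR"

# Mounting the venv at /.venv means packages installed on the first run
# are still there on the next `docker run`, so startup is much faster.
# Printed here as a dry run; drop the `echo` to actually execute it.
echo docker run --rm \
  -v "$PROJECT_DIR:/app" \
  -v "$VENV_DIR:/.venv" \
  miguelaeh/pipeless start --project-dir /app
```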