Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server, multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
Topics: inference, pytorch, text-detection, nvidia-docker, inference-server, tensorrt, inference-engine, onnx, onnx-torch, tensorrt-conversion, triton-inference-server, text-detection-from-image
Updated Aug 18, 2021 - Python