Streamlit Dockerized Computer Vision App with Triton Inference Server and PostgreSQL database
Using Packer, Ansible and Terraform to create a small Triton Inference cluster on AWS
Serving YOLOv5 Segmentation Model with Amazon EC2 Inf1
The Sumen model integrates with Triton Inference Server
Application-aware System Optimization Lab Research Study
Proof-of-concept gRPC stream record-and-replay for Triton Inference Server
This repository is a code sample for serving Large Language Models (LLMs) on a Google Kubernetes Engine (GKE) cluster with GPUs, using NVIDIA Triton Inference Server with the FasterTransformer backend.
QuickStart for Deploying a Basic Model on the Triton Inference Server
Heterogeneous System ML Pipeline Scheduling Framework with Triton Inference Server as Backend
This project provides a sample for Triton Inference Server
This repository demonstrates instance segmentation using the YOLOv8 (smart) model on Triton Inference Server
Cassandra plugin for NVIDIA DALI
This repository contains the content for a proof of concept implementation of computer vision systems in industry. The project explores scalability and performance using the NVIDIA ecosystem, aiming to create an example scaffold for implementing a system accessible to non-technical users.
A library for interfacing with Triton.
This component is responsible for AI inference, connecting to triton-agent to perform inference requests efficiently.
A search engine for Shopee that applies image search, full-text search, and auto-complete
Custom Yolov8x-cls edge model deployment and training to classify trash vs recycling.
Triton Inference Server with a Python backend and transformers (see the Python backend sketch after this list)
Run Multiple Models on the Same GPU with Amazon SageMaker Multi-Model Endpoints Powered by NVIDIA Triton Inference Server. A Java client is also provided.
A complete containerized setup for Triton Inference Server and its Python client using a realistic pre-trained XGBoost classifier model (see the client sketch after this list).
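Several of the entries above pair Triton Inference Server with its Python backend (for example, the transformers-based server listed earlier). As a rough, hypothetical sketch of what such a backend can look like, the model.py below assumes a model configured with a single STRING input named TEXT and a single FP32 output named SCORE, and wraps a Hugging Face sentiment-analysis pipeline; none of these names are taken from the listed repositories.

# model.py - minimal Triton Python backend sketch (hypothetical tensor names)
import numpy as np
import triton_python_backend_utils as pb_utils
from transformers import pipeline

class TritonPythonModel:
    def initialize(self, args):
        # Load a Hugging Face pipeline once when Triton loads this model instance.
        self.classifier = pipeline("sentiment-analysis")

    def execute(self, requests):
        responses = []
        for request in requests:
            # STRING inputs arrive as numpy arrays of bytes.
            texts = pb_utils.get_input_tensor_by_name(request, "TEXT").as_numpy()
            texts = [t.decode("utf-8") for t in texts.reshape(-1)]
            scores = np.array(
                [r["score"] for r in self.classifier(texts)], dtype=np.float32
            )
            out = pb_utils.Tensor("SCORE", scores)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses

    def finalize(self):
        # Release resources when the model is unloaded.
        self.classifier = None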
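On the client side, setups like the containerized XGBoost example above typically talk to Triton over HTTP or gRPC using the tritonclient package. The sketch below is a minimal HTTP client; the model name (xgboost_classifier), tensor names, shape, and datatype are assumptions and would have to match the deployed model's config.pbtxt.

# client.py - minimal Triton HTTP client sketch (hypothetical model/tensor names)
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# A single 4-feature FP32 row, as a small tabular classifier might expect.
features = np.random.rand(1, 4).astype(np.float32)

infer_input = httpclient.InferInput("input__0", list(features.shape), "FP32")
infer_input.set_data_from_numpy(features)

result = client.infer(
    model_name="xgboost_classifier",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("output__0")],
)

print(result.as_numpy("output__0"))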