dimitreOliveira/torchserve_od_example

Simple example of using TorchServe to serve a PyTorch Object Detection model

Repository content

  • Local Setup
  • Optional
  • Docker Setup

General Setup

Download FastRCNN model weights

sh scripts/get_fastrcnn.sh
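The script fetches the pretrained weights into the local model store. A minimal sketch of what such a script might do, assuming the standard torchvision Faster R-CNN (ResNet-50 FPN, COCO) checkpoint URL and a `models/` output directory; the actual script in `scripts/` may differ:

```shell
# Hypothetical sketch: download torchvision's pretrained Faster R-CNN
# (ResNet-50 FPN, trained on COCO) checkpoint into models/.
mkdir -p models
wget -O models/fasterrcnn_resnet50_fpn_coco.pth \
  https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
```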

Run locally

Archive model

sh scripts/archive_model.sh
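Archiving packages the downloaded weights, the model definition, and a request handler into a `.mar` file that TorchServe can load from its model store. A hedged sketch of the `torch-model-archiver` call such a script typically wraps; the file paths below are assumptions, not the repo's actual names:

```shell
# Hypothetical sketch of the archiving step (file paths are assumptions).
# "object_detector" is TorchServe's built-in object-detection handler.
torch-model-archiver \
  --model-name fastrcnn \
  --version 1.0 \
  --model-file utils/model.py \
  --serialized-file models/fasterrcnn_resnet50_fpn_coco.pth \
  --handler object_detector \
  --export-path models
```

The `--model-name` chosen here is what later appears in the inference URL (`/predictions/fastrcnn`).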

Start TorchServe

sh scripts/start_torchserve.sh
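Starting TorchServe points it at the model store and registers the archived model. A sketch of the command the script likely wraps; the `.mar` file name is an assumption:

```shell
# Hypothetical sketch: serve the archived model from the models/ store.
# --ncs disables TorchServe's config snapshot feature.
torchserve --start --ncs \
  --model-store models \
  --models fastrcnn=fastrcnn.mar
```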

Stop TorchServe

torchserve --stop

Run with Docker

Build Docker image from the Dockerfile

sudo docker build -f Dockerfile -t docker_torchserve .

Run the Docker container

sudo docker run -p 8080:8080 -u 0 -ti -v $(pwd)/models/:/home/model-server/models/ docker_torchserve /bin/bash

Archive model

sh scripts/archive_model.sh

Start TorchServe

sh scripts/start_torchserve.sh

Stop TorchServe

torchserve --stop

Run with Docker compose

Build the image and run the app with Docker Compose

sudo docker-compose up
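The compose file mirrors the `docker run` invocation from the previous section. A minimal hypothetical `docker-compose.yml` under those assumptions (image name, port, and volume mount taken from the commands above; the repo's actual file may differ):

```
# Hypothetical compose file mirroring the docker run command above.
services:
  torchserve:
    build: .
    image: docker_torchserve
    ports:
      - "8080:8080"
    volumes:
      - ./models:/home/model-server/models
```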

Stop the application

docker-compose down

Inference

Run a sample inference using the REST API

curl http://127.0.0.1:8080/predictions/fastrcnn -T ./samples/man2.jpg

Or run the "query_notebook.ipynb" notebook interactively

Content

  • models — Model assets.
  • samples — Image samples used to test inference.
  • scripts — Scripts for general usage.
  • utils — Utility files.
  • query_notebook — Jupyter notebook for interactive inference.
