- Local Setup
- Docker Setup
- General Setup
- Run locally
- Run with Docker
- Run with Docker compose
- Content
- Inference
- References
Optional
- PyTorch - Refer to the official PyTorch installation guide
- TorchServe - Refer to the official TorchServe installation guide
- Docker - Refer to the official Docker installation guide
Download FastRCNN model weights
sh scripts/get_fastrcnn.sh
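The script name suggests it fetches the pretrained Faster R-CNN weights from the torchvision model zoo. A minimal sketch of such a download step (the URL and output filename are assumptions, not taken from the script itself):

```shell
# Sketch only: download pretrained Faster R-CNN ResNet-50 FPN weights.
# The exact URL and destination used by scripts/get_fastrcnn.sh are assumptions.
wget -O models/fasterrcnn_resnet50_fpn_coco.pth \
  https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
```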
Archive model
sh scripts/archive_model.sh
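The archive step presumably wraps TorchServe's `torch-model-archiver` tool, which packages the weights into a `.mar` file the server can load. A hedged sketch of such a call (the model name, file names, and export path are assumptions, not the repository's exact script):

```shell
# Sketch, not the repository's exact script: package the downloaded weights
# into a .mar archive using TorchServe's built-in object_detector handler.
torch-model-archiver \
  --model-name fastrcnn \
  --version 1.0 \
  --serialized-file models/fasterrcnn_resnet50_fpn_coco.pth \
  --handler object_detector \
  --export-path models
```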
Start TorchServe
sh scripts/start_torchserve.sh
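The start script likely amounts to a `torchserve --start` invocation along these lines (the model-store path and model name are assumptions):

```shell
# Sketch: serve the archived model from the models/ directory.
# --ncs disables config snapshots.
torchserve --start --ncs \
  --model-store models \
  --models fastrcnn=fastrcnn.mar
```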
Stop TorchServe
torchserve --stop
Build Docker image from the Dockerfile
sudo docker build -f Dockerfile -t docker_torchserve .
Run the Docker container
sudo docker run -p 8080:8080 -u 0 -ti -v $(pwd)/models/:/home/model-server/models/ docker_torchserve /bin/bash
Archive model
sh scripts/archive_model.sh
Start TorchServe
sh scripts/start_torchserve.sh
Stop TorchServe
torchserve --stop
Build the image and run the app with Docker Compose
sudo docker-compose up
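For reference, a minimal `docker-compose.yml` equivalent to the manual `docker run` command above might look like this (a sketch; the repository's actual compose file may differ):

```yaml
# Sketch of a compose service mirroring the manual `docker run` invocation:
# build from the local Dockerfile, expose the inference port, mount models/.
version: "3"
services:
  torchserve:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - ./models:/home/model-server/models
```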
Stop the application
docker-compose down
Run a sample inference using the REST API
curl http://127.0.0.1:8080/predictions/fastrcnn -T ./samples/man2.jpg
Or run the "query_notebook.ipynb" notebook interactively
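The prediction endpoint returns JSON. Assuming the response shape of TorchServe's built-in object_detector handler (a list of detections, each mapping a label to a bounding box plus a score; the numbers below are made up for illustration), the output can be post-processed like so:

```shell
# Hypothetical sample response shape from the object_detector handler.
RESPONSE='[{"person": [167.4, 57.0, 301.3, 436.4], "score": 0.99}]'

# Extract label, score, and bounding box from each detection.
echo "$RESPONSE" | python3 -c '
import json, sys
for det in json.load(sys.stdin):
    score = det.pop("score")
    for label, box in det.items():
        print(f"{label} ({score:.2f}): {box}")
'
```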
- models — Model assets.
- samples — Image samples used to test inference.
- scripts — Scripts for general usage.
- utils — Utility files.
- query_notebook — Jupyter notebook for interactive inference.