From 2ae8bafae1fa85eac8c1cda9cdad70181b23373f Mon Sep 17 00:00:00 2001
From: Vishal Bollu
Date: Mon, 23 Sep 2019 18:47:19 -0400
Subject: [PATCH] Colab links (#486)

---
 examples/image-classifier/README.md   | 7 ++++++-
 examples/iris-classifier/README.md    | 7 +++++++
 examples/sentiment-analysis/README.md | 5 ++++-
 examples/text-generator/README.md     | 6 ++++--
 4 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/examples/image-classifier/README.md b/examples/image-classifier/README.md
index 7568a1aed7..f8afd00a34 100644
--- a/examples/image-classifier/README.md
+++ b/examples/image-classifier/README.md
@@ -4,6 +4,8 @@ This example shows how to deploy an Image Classifier made with Pytorch. The Pyto
 
 ## Define a deployment
 
+A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the model from the `cortex-examples` S3 bucket, preprocess the request payload and postprocess the model inference with the functions defined in `alexnet_handler.py`.
+
 ```yaml
 - kind: deployment
   name: image-classifier
@@ -13,8 +15,11 @@ This example shows how to deploy an Image Classifier made with Pytorch. The Pyto
   model: s3://cortex-examples/image-classifier/alexnet.onnx
   request_handler: alexnet_handler.py
 ```
+
+You can run the code that generated the exported models used in this example folder here:
+- [Pytorch Alexnet](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/image-classifier/alexnet.ipynb)
+- [Tensorflow Inception V3](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/image-classifier/inception.ipynb)
 
-A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the model from the `cortex-examples` S3 bucket, preprocess the request payload and postprocess the model inference with the functions defined in `alexnet_handler.py`.
 
 ## Add request handling
 
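The image-classifier README above delegates request handling to `alexnet_handler.py`. As a rough, hypothetical sketch of that handler pattern, the `pre_inference`/`post_inference` hook names, the `image_b64` payload field, and the placeholder label list below are assumptions for illustration, not the repository's actual handler code:

```python
# Hypothetical request handler sketch; hook names and payload fields are assumptions.
import base64
from io import BytesIO

import numpy as np
from PIL import Image

LABELS = ["class_0", "class_1"]  # placeholder labels for illustration


def pre_inference(sample, metadata):
    # Decode a base64-encoded image from the JSON payload and convert it into
    # the (1, 3, 224, 224) float array an ImageNet-style model expects.
    img = Image.open(BytesIO(base64.b64decode(sample["image_b64"]))).convert("RGB")
    img = img.resize((224, 224))
    arr = np.asarray(img, dtype=np.float32) / 255.0
    arr = np.transpose(arr, (2, 0, 1))  # HWC -> CHW
    return np.expand_dims(arr, axis=0)


def post_inference(prediction, metadata):
    # Map the highest-scoring class index back to a human-readable label.
    scores = np.asarray(prediction).squeeze()
    return {"label": LABELS[int(np.argmax(scores))]}
```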
diff --git a/examples/iris-classifier/README.md b/examples/iris-classifier/README.md
index aacc3ca325..73310168ed 100644
--- a/examples/iris-classifier/README.md
+++ b/examples/iris-classifier/README.md
@@ -15,6 +15,13 @@ Define a `deployment` and an `api` resource in `cortex.yaml`. A `deployment` spe
   model: s3://cortex-examples/iris-classifier/tensorflow
   request_handler: handlers/tensorflow.py
 ```
+
+You can run the code that generated the exported models used in this example folder here:
+- [Tensorflow](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/tensorflow.ipynb)
+- [Pytorch](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/pytorch.ipynb)
+- [Keras](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/keras.ipynb)
+- [XGBoost](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/xgboost.ipynb)
+- [sklearn](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/sklearn.ipynb)
 
 ## Add request handling
 
diff --git a/examples/sentiment-analysis/README.md b/examples/sentiment-analysis/README.md
index 7b00ac806f..ef728fe6b0 100644
--- a/examples/sentiment-analysis/README.md
+++ b/examples/sentiment-analysis/README.md
@@ -4,6 +4,8 @@ This example shows how to deploy a sentiment analysis classifier trained using [
 
 ## Define a deployment
 
+A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the model from the `cortex-examples` S3 bucket, preprocess the payload, and postprocess the inference with functions defined in `sentiment.py`.
+
 ```yaml
 - kind: deployment
   name: sentiment
@@ -13,8 +15,9 @@ This example shows how to deploy a sentiment analysis classifier trained using [
   model: s3://cortex-examples/sentiment-analysis/bert
   request_handler: sentiment.py
 ```
+
+You can run the code that generated the exported BERT model [here](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/sentiment-analysis/bert.ipynb).
 
-A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the model from the `cortex-examples` S3 bucket and preprocess the payload and postprocess the inference with functions defined in `sentiment.py`.
 
 ## Add request handling
 
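Once one of these deployments is live (for example after running `cortex deploy`), the resulting API can be queried over HTTP. Below is a minimal client-side sketch, assuming a placeholder endpoint URL and a hypothetical JSON payload shape; the real endpoint is reported by the Cortex CLI, and the expected fields are defined by each example's request handler:

```python
# Minimal client sketch; the endpoint URL and payload fields are placeholders.
import requests

# Replace with the endpoint reported by the Cortex CLI for your API.
endpoint = "https://<your-api-endpoint>/sentiment"

# The "review" field name is a hypothetical example; use whatever fields
# the request handler for your API actually expects.
payload = {"samples": [{"review": "This movie was surprisingly good."}]}

response = requests.post(endpoint, json=payload, timeout=30)
response.raise_for_status()
print(response.json())
```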
diff --git a/examples/text-generator/README.md b/examples/text-generator/README.md
index 8daf700f1a..e818e67fa7 100644
--- a/examples/text-generator/README.md
+++ b/examples/text-generator/README.md
@@ -4,6 +4,8 @@ This example shows how to deploy OpenAI's GPT-2 model as a service on AWS.
 
 ## Define a deployment
 
+A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the 124M GPT-2 model from the `cortex-examples` S3 bucket, preprocess the payload and postprocess the inference with functions defined in `encoder.py`, and deploy each replica of the API on 1 GPU.
+
 ```yaml
 - kind: deployment
   name: text
@@ -15,8 +17,8 @@ This example shows how to deploy OpenAI's GPT-2 model as a service on AWS.
   compute:
     gpu: 1
 ```
-
-A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the 124M GPT-2 model from the `cortex-examples` S3 bucket, preprocess the payload and postprocess the inference with functions defined in `encoder.py` and deploy each replica of the API on 1 GPU.
+
+You can run the code that generated the exported GPT-2 model [here](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/text-generator/gpt-2.ipynb).
 
 ## Add request handling
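The text-generator README delegates encoding and decoding to `encoder.py`. As a hypothetical sketch of how such a handler might wrap a byte-pair encoder, the `DummyEncoder`, the `context` input key, and the payload field names below are stand-ins for illustration, not the repository's actual code:

```python
# Hypothetical text-generation handler sketch; the encoder, input key,
# and payload fields are stand-ins, not the repository's encoder.py.
import numpy as np


class DummyEncoder:
    """Stand-in for a GPT-2 byte-pair encoder with encode/decode methods."""

    def encode(self, text):
        return [ord(ch) for ch in text]  # placeholder tokenization

    def decode(self, token_ids):
        return "".join(chr(int(t)) for t in token_ids)


encoder = DummyEncoder()


def pre_inference(sample, metadata):
    # Turn the prompt text into the token-id array the model expects.
    tokens = encoder.encode(sample["text"])
    return {"context": np.array([tokens], dtype=np.int32)}


def post_inference(prediction, metadata):
    # Decode the sampled token ids back into text.
    token_ids = np.atleast_1d(np.asarray(prediction).squeeze()).tolist()
    return {"text": encoder.decode(token_ids)}
```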