
Colab links (#486)
vishalbollu authored Sep 23, 2019
1 parent 406c90c commit 2ae8baf
Showing 4 changed files with 21 additions and 4 deletions.
7 changes: 6 additions & 1 deletion examples/image-classifier/README.md
@@ -4,6 +4,8 @@ This example shows how to deploy an Image Classifier made with Pytorch. The Pyto

## Define a deployment

A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the model from the `cortex-examples` S3 bucket, preprocess the request payload and postprocess the model inference with the functions defined in `alexnet_handler.py`.

```yaml
- kind: deployment
name: image-classifier
@@ -13,8 +15,11 @@ This example shows how to deploy an Image Classifier made with Pytorch. The Pyto
model: s3://cortex-examples/image-classifier/alexnet.onnx
request_handler: alexnet_handler.py
```
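The contents of `alexnet_handler.py` are not part of this diff. As a rough, hypothetical sketch (assuming a `pre_inference`/`post_inference` request-handler convention; the label list and payload keys below are made up for illustration), such a handler might look like:

```python
import json

# Hypothetical label subset; a real handler would load the full ImageNet label set
LABELS = ["tench", "goldfish", "great white shark"]

def pre_inference(sample, metadata):
    # Scale raw pixel values (0-255) into the 0-1 range the model expects
    pixels = sample["pixels"]
    return {"input": [p / 255.0 for p in pixels]}

def post_inference(prediction, metadata):
    # Map the highest-scoring class index back to a human-readable label
    scores = prediction["scores"]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return json.dumps({"label": LABELS[best]})
```

This keeps the model itself generic: the ONNX runtime only sees numeric tensors, while the handler owns the translation to and from JSON.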
<!-- CORTEX_VERSION_MINOR x2 -->
You can run the code that generated the exported models used in this example folder here:
- [Pytorch Alexnet](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/image-classifier/alexnet.ipynb)
- [Tensorflow Inception V3](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/image-classifier/inception.ipynb)
## Add request handling
7 changes: 7 additions & 0 deletions examples/iris-classifier/README.md
@@ -15,6 +15,13 @@ Define a `deployment` and an `api` resource in `cortex.yaml`. A `deployment` spe
model: s3://cortex-examples/iris-classifier/tensorflow
request_handler: handlers/tensorflow.py
```
<!-- CORTEX_VERSION_MINOR x5 -->
You can run the code that generated the exported models used in this example folder here:
- [Tensorflow](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/tensorflow.ipynb)
- [Pytorch](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/pytorch.ipynb)
- [Keras](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/keras.ipynb)
- [XGBoost](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/xgboost.ipynb)
- [sklearn](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/iris-classifier/models/sklearn.ipynb)
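All five exported models share the same request handler shape. The handler files themselves are not shown in this diff; as a hedged sketch (the `pre_inference`/`post_inference` names, feature keys, and `class_ids` field are assumptions for illustration), a handler for the iris payload might look like:

```python
# Order in which the model expects the four iris measurements
FEATURE_ORDER = ["sepal_length", "sepal_width", "petal_length", "petal_width"]
IRIS_LABELS = ["setosa", "versicolor", "virginica"]

def pre_inference(sample, metadata):
    # Flatten the named JSON fields into the ordered feature vector
    return {"input": [float(sample[f]) for f in FEATURE_ORDER]}

def post_inference(prediction, metadata):
    # Translate the predicted class index into a species name
    return IRIS_LABELS[int(prediction["class_ids"][0])]
```

Because the pre/postprocessing lives in the handler rather than the model, the same JSON API can front a TensorFlow, PyTorch, Keras, XGBoost, or scikit-learn export interchangeably.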
## Add request handling
5 changes: 4 additions & 1 deletion examples/sentiment-analysis/README.md
@@ -4,6 +4,8 @@ This example shows how to deploy a sentiment analysis classifier trained using [

## Define a deployment

A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the model from the `cortex-examples` S3 bucket and preprocess the payload and postprocess the inference with functions defined in `sentiment.py`.

```yaml
- kind: deployment
name: sentiment
@@ -13,8 +15,9 @@ This example shows how to deploy a sentiment analysis classifier trained using [
model: s3://cortex-examples/sentiment-analysis/bert
request_handler: sentiment.py
```
<!-- CORTEX_VERSION_MINOR -->
You can run the code that generated the exported BERT model [here](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/sentiment-analysis/bert.ipynb).
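`sentiment.py` itself is outside this diff. A minimal, hypothetical sketch of its shape (assuming `pre_inference`/`post_inference` hooks and a two-class `probabilities` output; the real file would run BERT's tokenizer rather than the placeholder below):

```python
def pre_inference(sample, metadata):
    # Placeholder normalization standing in for BERT tokenization,
    # which the real sentiment.py would perform
    return {"review": sample["review"].strip().lower()}

def post_inference(prediction, metadata):
    # Map the two-class probability vector to a sentiment label
    probs = prediction["probabilities"]
    return "positive" if probs[1] >= probs[0] else "negative"
```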
## Add request handling
6 changes: 4 additions & 2 deletions examples/text-generator/README.md
@@ -4,6 +4,8 @@ This example shows how to deploy OpenAI's GPT-2 model as a service on AWS.

## Define a deployment

A `deployment` specifies a set of resources that are deployed as a single unit. An `api` makes a model available as a web service that can serve real-time predictions. This configuration will download the 124M GPT-2 model from the `cortex-examples` S3 bucket, preprocess the payload and postprocess the inference with functions defined in `encoder.py` and deploy each replica of the API on 1 GPU.

```yaml
- kind: deployment
name: text
@@ -15,8 +17,8 @@ This example shows how to deploy OpenAI's GPT-2 model as a service on AWS.
compute:
gpu: 1
```
<!-- CORTEX_VERSION_MINOR -->
You can run the code that generated the exported GPT-2 model [here](https://colab.research.google.com/github/cortexlabs/cortex/blob/master/examples/text-generator/gpt-2.ipynb).
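The postprocessing side of `encoder.py` (not shown in this diff) turns generated token ids back into text. As a hedged illustration with a toy vocabulary (the real file would load GPT-2's ~50k-entry BPE vocabulary, and the `sample` field name is an assumption):

```python
# Toy stand-in for GPT-2's byte-pair-encoding vocabulary
VOCAB = {0: "hello", 1: " world", 2: "!"}

def post_inference(prediction, metadata):
    # Join the generated token ids back into a single string
    return "".join(VOCAB[token] for token in prediction["sample"])
```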
## Add request handling
