Update version to 0.26.0
deliahu committed Jan 6, 2021
1 parent c3c1da7 commit de0222a
Showing 25 changed files with 69 additions and 69 deletions.
2 changes: 1 addition & 1 deletion build/build-image.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

CORTEX_VERSION=master
CORTEX_VERSION=0.26.0

image=$1
dir="${ROOT}/images/${image/-slim}"
2 changes: 1 addition & 1 deletion build/cli.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

CORTEX_VERSION=master
CORTEX_VERSION=0.26.0

arg1=${1:-""}
upload="false"
2 changes: 1 addition & 1 deletion build/push-image.sh
@@ -17,7 +17,7 @@

set -euo pipefail

CORTEX_VERSION=master
CORTEX_VERSION=0.26.0

image=$1

8 changes: 4 additions & 4 deletions docs/clients/install.md
@@ -9,10 +9,10 @@ pip install cortex
```

<!-- CORTEX_VERSION_README x2 -->
To install or upgrade to a specific version (e.g. v0.25.0):
To install or upgrade to a specific version (e.g. v0.26.0):

```bash
pip install cortex==0.25.0
pip install cortex==0.26.0
```

To upgrade to the latest version:
@@ -25,8 +25,8 @@ pip install --upgrade cortex

<!-- CORTEX_VERSION_README x2 -->
```bash
# For example to download CLI version 0.25.0 (Note the "v"):
$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.25.0/get-cli.sh)"
# For example to download CLI version 0.26.0 (Note the "v"):
$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.26.0/get-cli.sh)"
```

By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
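
For example, a minimal sketch of installing to a custom location (the `~/bin` path is only an illustration, and this assumes `CORTEX_INSTALL_PATH` takes the full path to the executable):

```bash
# install the CLI to ~/bin/cortex instead of /usr/local/bin/cortex (example path)
export CORTEX_INSTALL_PATH="$HOME/bin/cortex"
bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.26.0/get-cli.sh)"
```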
2 changes: 1 addition & 1 deletion docs/clients/python.md
@@ -91,7 +91,7 @@ Deploy an API.

**Arguments**:

- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.26/ for schema.
- `predictor` - A Cortex Predictor class implementation. Not required when deploying a traffic splitter.
- `requirements` - A list of PyPI dependencies that will be installed before the predictor class implementation is invoked.
- `conda_packages` - A list of Conda dependencies that will be installed before the predictor class implementation is invoked.
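
For illustration only, a minimal sketch of how these arguments might be passed; the `cortex.client()` call, the `create_api` method name, and all spec values are assumptions for this sketch, not taken from this page:

```python
import cortex

class PythonPredictor:
    # example Predictor class implementation (illustrative only)
    def __init__(self, config):
        self.prefix = config.get("prefix", "echo")

    def predict(self, payload):
        return {"message": f"{self.prefix}: {payload}"}

# api_spec: a dictionary defining a single Cortex API (see the schema linked above)
api_spec = {
    "name": "echo-api",
    "kind": "RealtimeAPI",
    "predictor": {"type": "python"},
}

cx = cortex.client()  # assumed way of obtaining a client; see the client docs
cx.create_api(        # hypothetical name for the deploy method documented above
    api_spec=api_spec,
    predictor=PythonPredictor,  # a Cortex Predictor class implementation
    requirements=["requests"],  # PyPI dependencies installed beforehand
    conda_packages=[],          # Conda dependencies installed beforehand
)
```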
26 changes: 13 additions & 13 deletions docs/clusters/aws/install.md
@@ -89,17 +89,17 @@ The docker images used by the Cortex cluster can also be overridden, although th…
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
image_operator: quay.io/cortexlabs/operator:master
image_manager: quay.io/cortexlabs/manager:master
image_downloader: quay.io/cortexlabs/downloader:master
image_request_monitor: quay.io/cortexlabs/request-monitor:master
image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
image_metrics_server: quay.io/cortexlabs/metrics-server:master
image_inferentia: quay.io/cortexlabs/inferentia:master
image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
image_nvidia: quay.io/cortexlabs/nvidia:master
image_fluentd: quay.io/cortexlabs/fluentd:master
image_statsd: quay.io/cortexlabs/statsd:master
image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
image_operator: quay.io/cortexlabs/operator:0.26.0
image_manager: quay.io/cortexlabs/manager:0.26.0
image_downloader: quay.io/cortexlabs/downloader:0.26.0
image_request_monitor: quay.io/cortexlabs/request-monitor:0.26.0
image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.26.0
image_metrics_server: quay.io/cortexlabs/metrics-server:0.26.0
image_inferentia: quay.io/cortexlabs/inferentia:0.26.0
image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.26.0
image_nvidia: quay.io/cortexlabs/nvidia:0.26.0
image_fluentd: quay.io/cortexlabs/fluentd:0.26.0
image_statsd: quay.io/cortexlabs/statsd:0.26.0
image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.26.0
image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.26.0
```
14 changes: 7 additions & 7 deletions docs/clusters/gcp/install.md
@@ -51,11 +51,11 @@ The docker images used by the Cortex cluster can also be overridden, although th…

<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
image_operator: quay.io/cortexlabs/operator:master
image_manager: quay.io/cortexlabs/manager:master
image_downloader: quay.io/cortexlabs/downloader:master
image_statsd: quay.io/cortexlabs/statsd:master
image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
image_pause: quay.io/cortexlabs/pause:master
image_operator: quay.io/cortexlabs/operator:0.26.0
image_manager: quay.io/cortexlabs/manager:0.26.0
image_downloader: quay.io/cortexlabs/downloader:0.26.0
image_statsd: quay.io/cortexlabs/statsd:0.26.0
image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.26.0
image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.26.0
image_pause: quay.io/cortexlabs/pause:0.26.0
```
8 changes: 4 additions & 4 deletions docs/workloads/batch/configuration.md
@@ -11,7 +11,7 @@
path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master based on compute)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.26.0 or quay.io/cortexlabs/python-predictor-gpu:0.26.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -46,8 +46,8 @@
batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:master or quay.io/cortexlabs/tensorflow-serving-cpu:master based on compute)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.26.0)
tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.26.0 or quay.io/cortexlabs/tensorflow-serving-cpu:0.26.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -77,7 +77,7 @@
...
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:master or quay.io/cortexlabs/onnx-predictor-cpu:master based on compute)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.26.0 or quay.io/cortexlabs/onnx-predictor-cpu:0.26.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
4 changes: 2 additions & 2 deletions docs/workloads/batch/predictors.md
@@ -143,7 +143,7 @@ class TensorFlowPredictor:
```

<!-- CORTEX_VERSION_MINOR -->
Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.26/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`).
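
A minimal sketch of this pattern follows (the constructor and `predict()` signatures are abbreviated to match the excerpt above, and the model name and payload handling are placeholders):

```python
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # save the client Cortex passes in so predict() can use it
        self.client = tensorflow_client

    def predict(self, payload):
        # preprocess the JSON payload as needed
        model_input = {"text": payload["text"]}
        # the second argument selects the model when multiple models are configured
        prediction = self.client.predict(model_input, "text-generator")
        # postprocess the prediction as needed
        return prediction
```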

@@ -202,6 +202,6 @@ class ONNXPredictor:
```

<!-- CORTEX_VERSION_MINOR -->
Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.26/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`).
26 changes: 13 additions & 13 deletions docs/workloads/dependencies/images.md
@@ -11,19 +11,19 @@ mkdir my-api && cd my-api && touch Dockerfile
Cortex's base Docker images are listed below. Depending on the Cortex Predictor and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:

<!-- CORTEX_VERSION_BRANCH_STABLE x12 -->
* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu-slim:master`
* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu-slim:0.26.0`
* Python Predictor (GPU): choose one of the following:
* `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.0-cudnn7`
* `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.1-cudnn7`
* `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.1-cudnn8`
* `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.2-cudnn7`
* `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda10.2-cudnn8`
* `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda11.0-cudnn8`
* `quay.io/cortexlabs/python-predictor-gpu-slim:master-cuda11.1-cudnn8`
* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf-slim:master`
* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor-slim:master`
* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu-slim:master`
* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu-slim:master`
* `quay.io/cortexlabs/python-predictor-gpu-slim:0.26.0-cuda10.0-cudnn7`
* `quay.io/cortexlabs/python-predictor-gpu-slim:0.26.0-cuda10.1-cudnn7`
* `quay.io/cortexlabs/python-predictor-gpu-slim:0.26.0-cuda10.1-cudnn8`
* `quay.io/cortexlabs/python-predictor-gpu-slim:0.26.0-cuda10.2-cudnn7`
* `quay.io/cortexlabs/python-predictor-gpu-slim:0.26.0-cuda10.2-cudnn8`
* `quay.io/cortexlabs/python-predictor-gpu-slim:0.26.0-cuda11.0-cudnn8`
* `quay.io/cortexlabs/python-predictor-gpu-slim:0.26.0-cuda11.1-cudnn8`
* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf-slim:0.26.0`
* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor-slim:0.26.0`
* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu-slim:0.26.0`
* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu-slim:0.26.0`

Note: the images listed above use the `-slim` suffix; Cortex's default API images are not `-slim`, since they have additional dependencies installed to cover common use cases. If you are building your own Docker image, starting with a `-slim` Predictor image will result in a smaller image size.

@@ -33,7 +33,7 @@ The sample `Dockerfile` below inherits from Cortex's Python CPU serving image, a…
```dockerfile
# Dockerfile

FROM quay.io/cortexlabs/python-predictor-cpu-slim:master
FROM quay.io/cortexlabs/python-predictor-cpu-slim:0.26.0

RUN apt-get update \
&& apt-get install -y tree \
8 changes: 4 additions & 4 deletions docs/workloads/realtime/configuration.md
@@ -25,7 +25,7 @@
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master based on compute)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.26.0 or quay.io/cortexlabs/python-predictor-gpu:0.26.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -81,8 +81,8 @@
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:master or quay.io/cortexlabs/tensorflow-serving-cpu:master based on compute)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.26.0)
tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.26.0 or quay.io/cortexlabs/tensorflow-serving-cpu:0.26.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -133,7 +133,7 @@
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:master or quay.io/cortexlabs/onnx-predictor-cpu:master based on compute)
image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.26.0 or quay.io/cortexlabs/onnx-predictor-cpu:0.26.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
