Commit

Update stable version to 0.31.1
vishalbollu committed Mar 23, 2021
1 parent 98a0681 commit c954541
Showing 9 changed files with 68 additions and 68 deletions.
8 changes: 4 additions & 4 deletions docs/clients/install.md
Original file line number Diff line number Diff line change
@@ -9,10 +9,10 @@ pip install cortex
```

<!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.31.0):
+To install or upgrade to a specific version (e.g. v0.31.1):

```bash
-pip install cortex==0.31.0
+pip install cortex==0.31.1
```

To upgrade to the latest version:
@@ -25,8 +25,8 @@ pip install --upgrade cortex

<!-- CORTEX_VERSION_README x2 -->
```bash
-# For example to download CLI version 0.31.0 (Note the "v"):
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.31.0/get-cli.sh)"
+# For example to download CLI version 0.31.1 (Note the "v"):
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.31.1/get-cli.sh)"
```

By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
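As an illustration of that paragraph, the install location can be redirected by exporting `CORTEX_INSTALL_PATH` before invoking the installer (a minimal sketch; the `~/bin` destination is just an example, not a documented default):

```shell
# Install the Cortex CLI into ~/bin instead of the default /usr/local/bin
export CORTEX_INSTALL_PATH="$HOME/bin/cortex"
bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.31.1/get-cli.sh)"
```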
44 changes: 22 additions & 22 deletions docs/clusters/aws/install.md
@@ -101,26 +101,26 @@ The docker images used by the Cortex cluster can also be overridden, although th
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
-image_operator: quay.io/cortexlabs/operator:0.31.0
-image_manager: quay.io/cortexlabs/manager:0.31.0
-image_downloader: quay.io/cortexlabs/downloader:0.31.0
-image_request_monitor: quay.io/cortexlabs/request-monitor:0.31.0
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.31.0
-image_metrics_server: quay.io/cortexlabs/metrics-server:0.31.0
-image_inferentia: quay.io/cortexlabs/inferentia:0.31.0
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.31.0
-image_nvidia: quay.io/cortexlabs/nvidia:0.31.0
-image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.31.0
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.31.0
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.31.0
-image_prometheus: quay.io/cortexlabs/prometheus:0.31.0
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.31.0
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.31.0
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.31.0
-image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.31.0
-image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.31.0
-image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.31.0
-image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.31.0
-image_grafana: quay.io/cortexlabs/grafana:0.31.0
-image_event_exporter: quay.io/cortexlabs/event-exporter:0.31.0
+image_operator: quay.io/cortexlabs/operator:0.31.1
+image_manager: quay.io/cortexlabs/manager:0.31.1
+image_downloader: quay.io/cortexlabs/downloader:0.31.1
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.31.1
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.31.1
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.31.1
+image_inferentia: quay.io/cortexlabs/inferentia:0.31.1
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.31.1
+image_nvidia: quay.io/cortexlabs/nvidia:0.31.1
+image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.31.1
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.31.1
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.31.1
+image_prometheus: quay.io/cortexlabs/prometheus:0.31.1
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.31.1
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.31.1
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.31.1
+image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.31.1
+image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.31.1
+image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.31.1
+image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.31.1
+image_grafana: quay.io/cortexlabs/grafana:0.31.1
+image_event_exporter: quay.io/cortexlabs/event-exporter:0.31.1
```
34 changes: 17 additions & 17 deletions docs/clusters/gcp/install.md
@@ -68,21 +68,21 @@ The docker images used by the Cortex cluster can also be overridden, although th
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
-image_operator: quay.io/cortexlabs/operator:0.31.0
-image_manager: quay.io/cortexlabs/manager:0.31.0
-image_downloader: quay.io/cortexlabs/downloader:0.31.0
-image_request_monitor: quay.io/cortexlabs/request-monitor:0.31.0
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.31.0
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.31.0
-image_google_pause: quay.io/cortexlabs/google-pause:0.31.0
-image_prometheus: quay.io/cortexlabs/prometheus:0.31.0
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.31.0
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.31.0
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.31.0
-image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.31.0
-image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.31.0
-image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.31.0
-image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.31.0
-image_grafana: quay.io/cortexlabs/grafana:0.31.0
-image_event_exporter: quay.io/cortexlabs/event-exporter:0.31.0
+image_operator: quay.io/cortexlabs/operator:0.31.1
+image_manager: quay.io/cortexlabs/manager:0.31.1
+image_downloader: quay.io/cortexlabs/downloader:0.31.1
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.31.1
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.31.1
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.31.1
+image_google_pause: quay.io/cortexlabs/google-pause:0.31.1
+image_prometheus: quay.io/cortexlabs/prometheus:0.31.1
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.31.1
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.31.1
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.31.1
+image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.31.1
+image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.31.1
+image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.31.1
+image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.31.1
+image_grafana: quay.io/cortexlabs/grafana:0.31.1
+image_event_exporter: quay.io/cortexlabs/event-exporter:0.31.1
```
2 changes: 1 addition & 1 deletion docs/workloads/async/configuration.md
@@ -26,7 +26,7 @@ predictor:
shell: <string> # relative path to a shell script for system package installation (default: dependencies.sh)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.31.0, quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:0.31.0 based on compute)
+  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.31.1, quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:0.31.1 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
8 changes: 4 additions & 4 deletions docs/workloads/batch/configuration.md
@@ -19,7 +19,7 @@ predictor:
path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.31.0 or quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.2-cudnn8 based on compute)
+  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.31.1 or quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.2-cudnn8 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -49,8 +49,8 @@ predictor:
batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.31.0)
-  tensorflow_serving_image: <string>  # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.31.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.31.0 based on compute)
+  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.31.1)
+  tensorflow_serving_image: <string>  # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.31.1 or quay.io/cortexlabs/tensorflow-serving-gpu:0.31.1 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -75,7 +75,7 @@ predictor:
...
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.31.0 or quay.io/cortexlabs/onnx-predictor-gpu:0.31.0 based on compute)
+  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.31.1 or quay.io/cortexlabs/onnx-predictor-gpu:0.31.1 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
28 changes: 14 additions & 14 deletions docs/workloads/dependencies/images.md
@@ -11,27 +11,27 @@ mkdir my-api && cd my-api && touch Dockerfile
Cortex's base Docker images are listed below. Depending on the Cortex Predictor and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:

<!-- CORTEX_VERSION_BRANCH_STABLE x12 -->
-* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu:0.31.0`
+* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu:0.31.1`
* Python Predictor (GPU): choose one of the following:
-  * `quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.0-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.1-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.1-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.2-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.2-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda11.0-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda11.1-cudnn8`
-* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf:0.31.0`
-* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor:0.31.0`
-* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu:0.31.0`
-* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu:0.31.0`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.0-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.1-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.1-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.2-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.2-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda11.0-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda11.1-cudnn8`
+* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf:0.31.1`
+* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor:0.31.1`
+* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu:0.31.1`
+* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu:0.31.1`

The sample `Dockerfile` below inherits from Cortex's Python CPU serving image, and installs 3 packages. `tree` is a system package and `pandas` and `rdkit` are Python packages.

<!-- CORTEX_VERSION_BRANCH_STABLE -->
```dockerfile
# Dockerfile

-FROM quay.io/cortexlabs/python-predictor-cpu:0.31.0
+FROM quay.io/cortexlabs/python-predictor-cpu:0.31.1

RUN apt-get update \
&& apt-get install -y tree \
@@ -49,7 +49,7 @@ If you need to upgrade the Python Runtime version on your image, you can follow
```Dockerfile
# Dockerfile

-FROM quay.io/cortexlabs/python-predictor-cpu:0.31.0
+FROM quay.io/cortexlabs/python-predictor-cpu:0.31.1

# upgrade python runtime version
RUN conda update -n base -c defaults conda
8 changes: 4 additions & 4 deletions docs/workloads/realtime/configuration.md
@@ -39,7 +39,7 @@ predictor:
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.31.0, quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:0.31.0 based on compute)
+  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.31.1, quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:0.31.1 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -74,8 +74,8 @@ predictor:
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.31.0)
-  tensorflow_serving_image: <string>  # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.31.0, quay.io/cortexlabs/tensorflow-serving-gpu:0.31.0, or quay.io/cortexlabs/tensorflow-serving-inf:0.31.0 based on compute)
+  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.31.1)
+  tensorflow_serving_image: <string>  # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.31.1, quay.io/cortexlabs/tensorflow-serving-gpu:0.31.1, or quay.io/cortexlabs/tensorflow-serving-inf:0.31.1 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -105,7 +105,7 @@ predictor:
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.31.0 or quay.io/cortexlabs/onnx-predictor-gpu:0.31.0 based on compute)
+  image: <string>  # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.31.1 or quay.io/cortexlabs/onnx-predictor-gpu:0.31.1 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
2 changes: 1 addition & 1 deletion docs/workloads/task/configuration.md
@@ -12,7 +12,7 @@
conda: <string> # relative path to conda-packages.txt (default: conda-packages.txt)
shell: <string> # relative path to a shell script for system package installation (default: dependencies.sh)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string>  # docker image to use for the Task (default: quay.io/cortexlabs/python-predictor-cpu:0.31.0, quay.io/cortexlabs/python-predictor-gpu:0.31.0-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:0.31.0 based on compute)
+  image: <string>  # docker image to use for the Task (default: quay.io/cortexlabs/python-predictor-cpu:0.31.1, quay.io/cortexlabs/python-predictor-gpu:0.31.1-cuda10.2-cudnn8, or quay.io/cortexlabs/python-predictor-inf:0.31.1 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
networking:
2 changes: 1 addition & 1 deletion get-cli.sh
@@ -16,7 +16,7 @@

set -e

-CORTEX_VERSION_BRANCH_STABLE=0.31.0
+CORTEX_VERSION_BRANCH_STABLE=0.31.1
CORTEX_INSTALL_PATH="${CORTEX_INSTALL_PATH:-/usr/local/bin/cortex}"

# replace ~ with the home directory path
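The comment above refers to expanding a leading `~` in `CORTEX_INSTALL_PATH`, which the shell does not expand inside quotes. A minimal, POSIX-compatible sketch of that replacement (the exact line in `get-cli.sh` may differ):

```shell
# If the install path starts with "~", substitute the user's home directory,
# since a quoted "~" is passed through literally by the shell.
CORTEX_INSTALL_PATH="~/bin/cortex"
case "$CORTEX_INSTALL_PATH" in
  "~"*) CORTEX_INSTALL_PATH="$HOME${CORTEX_INSTALL_PATH#\~}" ;;
esac
echo "$CORTEX_INSTALL_PATH"
```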
