Commit
fix(docs) Updated the name from v2 to OIP (#6030)
* updated the name from v2 to OIP

* Update doc/source/analytics/explainers.md

Co-authored-by: Lucian Carata <lc525@users.noreply.github.com>

* Update doc/source/examples/notebooks.rst

Co-authored-by: Lucian Carata <lc525@users.noreply.github.com>

* Update doc/source/examples/notebooks.rst

Co-authored-by: Lucian Carata <lc525@users.noreply.github.com>

---------

Co-authored-by: Rakavitha Kodhandapani <seldon@SELIN002.local>
Co-authored-by: Lucian Carata <lc525@users.noreply.github.com>
3 people authored Nov 10, 2024
1 parent f993cbb commit 5509dc0
Showing 9 changed files with 25 additions and 27 deletions.
8 changes: 4 additions & 4 deletions doc/source/analytics/explainers.md
@@ -5,7 +5,7 @@

Seldon provides model explanations using its [Alibi](https://github.com/SeldonIO/alibi) library.

- We support explainers saved using python 3.7 in v1 explainer server. However, for v2 protocol (using MLServer) this is not a requirement anymore.
+ The v1 explainer server supports explainers saved with Python 3.7. However, for the Open Inference Protocol (or V2 protocol) using MLServer, this requirement no longer applies.

| Package | Version |
| ------ | ----- |
@@ -36,9 +36,9 @@ For Alibi explainers that need to be trained you should

The runtime environment in our [Alibi Explain Server](https://github.com/SeldonIO/seldon-core/tree/master/components/alibi-explain-server) is locked using [Poetry](https://python-poetry.org/). See our e2e example [here](../examples/iris_explainer_poetry.html) on how to use that definition to train your explainers.

- ### V2 protocol for explainer using [MLServer](https://github.com/SeldonIO/MLServer) (incubating)
+ ### Open Inference Protocol for explainer using [MLServer](https://github.com/SeldonIO/MLServer)

- The support for v2 protocol is now handled with MLServer moving forward. This is experimental
+ Support for the Open Inference Protocol is now handled with MLServer moving forward. This is experimental
and only works for black-box explainers.

For an e2e example, please check AnchorTabular notebook [here](../examples/iris_anchor_tabular_explainer_v2.html).
@@ -82,7 +82,7 @@ If you were port forwarding to Ambassador or istio on localhost:8003 then the AP
http://localhost:8003/seldon/seldon/income-explainer/default/api/v1.0/explain
```
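
As a rough sketch, the endpoint above could be exercised as follows (the payload is a hypothetical illustration for the income model, not its real feature schema):

```python
# Hedged sketch: POST a Seldon-protocol request to the explain endpoint above.
# The 12-feature ndarray is a made-up illustration, not the model's real schema.
import requests

explain_url = (
    "http://localhost:8003/seldon/seldon/income-explainer"
    "/default/api/v1.0/explain"
)
payload = {"data": {"ndarray": [[39, 7, 1, 1, 1, 1, 4, 1, 2174, 0, 40, 9]]}}

response = requests.post(explain_url, json=payload)
response.raise_for_status()
print(response.json())  # serialized Alibi explanation
```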

- The explain method is also supported for tensorflow and v2 protocols. The full list of endpoint URIs is:
+ The explain method is also supported for tensorflow and Open Inference protocols. The full list of endpoint URIs is:

| Protocol | URI |
| ------ | ----- |
6 changes: 3 additions & 3 deletions doc/source/examples/notebooks.rst
@@ -22,7 +22,7 @@ Prepackaged Inference Server Examples
Deploy a Scikit-learn Model Binary <../servers/sklearn.md>
Deploy a Tensorflow Exported Model <../servers/tensorflow.md>
MLflow Pre-packaged Model Server A/B Test <mlflow_server_ab_test_ambassador>
- MLflow v2 Protocol End to End Workflow (Incubating) <mlflow_v2_protocol_end_to_end>
+ MLflow Open Inference Protocol End to End Workflow <mlflow_v2_protocol_end_to_end>
Deploy a XGBoost Model Binary <../servers/xgboost.md>
Deploy Pre-packaged Model Server with Cluster's MinIO <minio-sklearn>
Custom Pre-packaged LightGBM Server <custom_server>
@@ -90,7 +90,7 @@ Advanced Machine Learning Monitoring

Real Time Monitoring of Statistical Metrics <feedback_reward_custom_metrics>
Model Explainer Example <iris_explainer_poetry>
- Model Explainer V2 protocol Example (Incubating) <iris_anchor_tabular_explainer_v2>
+ Model Explainer Open Inference Protocol Example <iris_anchor_tabular_explainer_v2>
Outlier Detection on CIFAR10 <outlier_cifar10>
Training Outlier Detector for CIFAR10 with Poetry <cifar10_od_poetry>

@@ -155,7 +155,7 @@ Complex Graph Examples
:titlesonly:

Chainer MNIST <chainer_mnist>
- Custom pre-processors with the V2 Protocol <transformers-v2-protocol>
+ Custom pre-processors with the Open Inference Protocol <transformers-v2-protocol>
graph-examples <graph-examples>

Ingress
4 changes: 2 additions & 2 deletions doc/source/graph/protocols.md
@@ -6,7 +6,7 @@ Seldon Core supports the following data planes:

* [REST and gRPC Seldon protocol](#rest-and-grpc-seldon-protocol)
* [REST and gRPC Tensorflow Serving Protocol](#rest-and-grpc-tensorflow-protocol)
- * [REST and gRPC V2 Protocol](#v2-protocol)
+ * [REST and gRPC Open Inference Protocol](#v2-protocol)

## REST and gRPC Seldon Protocol

@@ -40,7 +40,7 @@ General considerations:
* The name of the model in the `graph` section of the SeldonDeployment spec must match the name of the model loaded onto the Tensorflow Server.


- ## V2 Protocol
+ ## Open Inference Protocol (or V2 protocol)

Seldon has collaborated with the [NVIDIA Triton Server
Project](https://github.com/triton-inference-server/server) and the [KServe
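
To make the wire format concrete, here is a minimal, hedged sketch of an Open Inference Protocol call (the host path, model name, input name, and tensor values are illustrative assumptions, not values from this repository):

```python
# Sketch of an Open Inference Protocol (V2) inference request.
# Host, model name, input name, and data values are illustrative assumptions.
import requests

base = "http://localhost:8003/seldon/seldon/iris"  # assumed ingress path
body = {
    "inputs": [
        {
            "name": "predict",  # tensor name expected by the server (assumed)
            "datatype": "FP32",
            "shape": [1, 4],
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}

resp = requests.post(f"{base}/v2/models/iris/infer", json=body)
resp.raise_for_status()
print(resp.json()["outputs"])  # responses carry a list of output tensors
```

The same `/v2/models/{name}/infer` route is served by both MLServer and Triton, which is what lets one `SeldonDeployment` spec switch runtimes without changing clients.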
2 changes: 1 addition & 1 deletion doc/source/graph/svcorch.md
@@ -17,7 +17,7 @@ At present, we support the following protocols:
| --- | --- | --- | --- |
| Seldon | `seldon` | [OpenAPI spec for Seldon](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/openapi.html) |
| Tensorflow | `tensorflow` | [REST API](https://www.tensorflow.org/tfx/serving/api_rest) and [gRPC API](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/apis/prediction_service.proto) reference |
- | V2 | `v2` | [V2 Protocol Reference](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/v2-protocol.html) |
+ | V2 | `v2` | [Open Inference Protocol Reference](https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/v2-protocol.html) |

These protocols are supported by some of our pre-packaged servers out of the
box.
4 changes: 2 additions & 2 deletions doc/source/reference/release-1.6.0.md
@@ -80,8 +80,8 @@ This will also help remove any ambiguity around what component we refer to when
* Seldon Operator now runs as non-root by default (with Security context override available)
* Resolved PyYAML CVE from Python base image
- * Added support for V2 Protocol in outlier and drift detectors
- * Handling V2 Protocol in request logger
+ * Added support for Open Inference Protocol (or V2 protocol) in outlier and drift detectors
+ * Handling Open Inference Protocol in request logger
2 changes: 1 addition & 1 deletion doc/source/reference/upgrading.md
@@ -95,7 +95,7 @@ Only the v1 versions of the CRD will be supported moving forward. The v1beta1 ve
We have updated the health checks done by Seldon for the model nodes in your inference graph. If `executor.fullHealthChecks` is set to true then:
* For Seldon protocol each node will be probed with `/api/v1.0/health/status`.
- * For the v2 protocol each node will be probed with `/v2/health/ready`.
+ * For the Open Inference Protocol (or V2 protocol) each node will be probed with `/v2/health/ready`.
* For tensorflow just TCP checks will be run on the http endpoint.

By default we have set `executor.fullHealthChecks` to false for 1.14 release as users would need to rebuild their custom python models if they have not implemented the `health_status` method. In future we will default to `true`.
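
As an illustration of what those probes amount to, a manual readiness sweep might look like the following sketch (node hostnames and ports are assumptions for the example):

```python
# Illustrative sketch: manually hit the per-protocol health endpoints listed
# above. Node hostnames and ports are assumptions, not values from the docs.
import requests

probes = {
    "seldon-node": "http://seldon-node:9000/api/v1.0/health/status",
    "oip-node": "http://oip-node:9000/v2/health/ready",
}

for node, url in probes.items():
    try:
        ok = requests.get(url, timeout=2).status_code == 200
    except requests.RequestException:
        ok = False
    print(f"{node}: {'ready' if ok else 'not ready'}")
```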
9 changes: 4 additions & 5 deletions doc/source/servers/mlflow.md
@@ -85,10 +85,9 @@ notebook](../examples/server_examples.html#Serve-MLflow-Elasticnet-Wines-Model)
or check our [talk at the Spark + AI Summit
2019](https://www.youtube.com/watch?v=D6eSfd9w9eA).

- ## V2 protocol
+ ## Open Inference Protocol (or V2 protocol)

- The MLFlow server can also be used to expose an API compatible with the [V2
- Protocol](../graph/protocols.md#v2-protocol).
+ The MLFlow server can also be used to expose an API compatible with the [Open Inference Protocol](../graph/protocols.md#v2-protocol).
Note that, under the hood, it will use the [Seldon
MLServer](https://github.com/SeldonIO/MLServer) runtime.

@@ -136,7 +135,7 @@ $ gsutil cp -r ../model gs://seldon-models/test/elasticnet_wine_<uuid>
```

- deploy the model to seldon-core
- In order to enable support for the V2 protocol, it's enough to
+ In order to enable support for the Open Inference Protocol, it's enough to
specify the `protocol` of the `SeldonDeployment` to use `v2`.
For example,

@@ -146,7 +145,7 @@ kind: SeldonDeployment
metadata:
  name: mlflow
spec:
-   protocol: v2 # Activate the v2 protocol
+   protocol: v2 # Activate the Open Inference Protocol
  name: wines
  predictors:
  - graph:
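
Once the `SeldonDeployment` above is applied, one quick way to confirm the Open Inference Protocol endpoints are live is a sketch like this (the ingress path and the model name `wines` are assumptions based on the spec fragment above):

```python
# Sketch: confirm the deployed MLflow model answers on Open Inference Protocol
# routes. The ingress path and the model name "wines" are assumptions.
import requests

base = "http://localhost:8003/seldon/seldon/mlflow"

ready = requests.get(f"{base}/v2/models/wines/ready")
print("ready:", ready.status_code == 200)

metadata = requests.get(f"{base}/v2/models/wines")
print(metadata.json())  # name, platform, and declared input/output tensors
```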
8 changes: 4 additions & 4 deletions doc/source/servers/sklearn.md
@@ -82,13 +82,13 @@ Acceptable values for the `method` parameter are `predict`, `predict_proba`,
`decision_function`.


- ## V2 protocol
+ ## Open Inference Protocol (or V2 protocol)

- The SKLearn server can also be used to expose an API compatible with the [V2 Protocol](../graph/protocols.md#v2-protocol).
+ The SKLearn server can also be used to expose an API compatible with the [Open Inference Protocol](../graph/protocols.md#v2-protocol).
Note that, under the hood, it will use the [Seldon
MLServer](https://github.com/SeldonIO/MLServer) runtime.

- In order to enable support for the V2 protocol, it's enough to
+ In order to enable support for the Open Inference Protocol, it's enough to
specify the `protocol` of the `SeldonDeployment` to use `v2`.
For example,

@@ -99,7 +99,7 @@ metadata:
  name: sklearn
spec:
  name: iris-predict
-   protocol: v2 # Activate the V2 protocol
+   protocol: v2 # Activate the Open Inference Protocol
  predictors:
  - graph:
      children: []
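
After the deployment is ready, the iris model can be queried and its response tensor read roughly as follows (URL, input name, and datatype are illustrative assumptions, not values taken from the docs):

```python
# Sketch: call the SKLearn iris model over the Open Inference Protocol and
# read the returned tensor. URL, input name, and datatype are assumptions.
import requests

url = "http://localhost:8003/seldon/seldon/sklearn/v2/models/iris/infer"
body = {
    "inputs": [
        {
            "name": "predict",
            "datatype": "FP64",
            "shape": [1, 4],
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ]
}

outputs = requests.post(url, json=body).json()["outputs"]
print(outputs[0]["data"])  # e.g. a predicted class index
```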
9 changes: 4 additions & 5 deletions doc/source/servers/xgboost.md
@@ -46,14 +46,13 @@ spec:
You can try out a [worked notebook](../examples/server_examples.html) with a
similar example.
- ## V2 protocol
+ ## Open Inference Protocol (or V2 protocol)
- The XGBoost server can also be used to expose an API compatible with the [V2
- protocol](../graph/protocols.md#v2-protocol).
+ The XGBoost server can also be used to expose an API compatible with the [Open Inference Protocol](../graph/protocols.md#v2-protocol).
Note that, under the hood, it will use the [Seldon
MLServer](https://github.com/SeldonIO/MLServer) runtime.
- In order to enable support for the V2 protocol, it's enough to
+ In order to enable support for the Open Inference Protocol, it's enough to
specify the `protocol` of the `SeldonDeployment` to use `v2`.
For example,

@@ -64,7 +63,7 @@ metadata:
  name: xgboost
spec:
  name: iris
-   protocol: v2 # Activate the V2 protocol
+   protocol: v2 # Activate the Open Inference Protocol
  predictors:
  - graph:
      children: []
