We have been investigating, and the model was registered properly in MLflow. We have confirmed this by using just mlflow to make the prediction, and it works. (Note that we are using specific functions created by us that call the mlflow and minio SDKs; `custom_mlflow` is just mlflow with the S3 artifact repository class overridden.)

The path saved at `downloaded_model` contains the artifacts generated by MLflow (saved in MinIO and downloaded to the host):

```python
from modules.minio import functions as functions_minio
from modules.mlflow import functions
from modules.mlflow.custom_mlflow import custom_mlflow
import numpy as np

functions_minio.cached_get_minio_client(url="MASKED", user="MASKED", password="MASKED")
mlflow_client = functions.cached_get_mlflow_client(url="MASKED", minio_url="MASKED")

sample_data = np.array([[5.1, 3.5, 3.4, 2.2]])

# ...
# Load model as a PyFuncModel.
downloaded_model = "tests/mlflow/models/onnx_model_folder"
loaded_model = custom_mlflow.mlflow.pyfunc.load_model(downloaded_model)
print(loaded_model.predict(sample_data))
```
We found an error when requesting inference from an ONNX model registered in MLflow (using MinIO as the artifact store).
When sending a request to an MLflow server that deploys an ONNX model, the following response is returned:

```
{'status': {'code': -1, 'info': 'Object of type ndarray is not JSON serializable', 'reason': 'MICROSERVICE_INTERNAL_ERROR', 'status': 1}}
```
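The error message points at a plain `json` encoding failure: NumPy arrays (such as the `output_label` array in the prediction output) cannot be passed to `json.dumps` directly. A minimal sketch of the failure mode and a common workaround (the variable names here are illustrative, not taken from the server code):

```python
import json
import numpy as np

# A prediction payload containing a raw NumPy array, similar in shape
# to the model output returned by mlflow.pyfunc.
prediction = {"output_label": np.array([1], dtype=np.int64)}

# Encoding the raw payload fails with the same TypeError message.
try:
    json.dumps(prediction)
except TypeError as exc:
    print(exc)  # Object of type ndarray is not JSON serializable

# Converting arrays to plain Python lists before encoding fixes it.
serializable = {
    key: value.tolist() if isinstance(value, np.ndarray) else value
    for key, value in prediction.items()
}
print(json.dumps(serializable))  # {"output_label": [1]}
```

This suggests the server-side response handler serializes the raw model output without converting NumPy types first.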
The manifest used to create the SeldonDeployment object is the following:
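The manifest itself is not included above. For reference only, a SeldonDeployment using Seldon's pre-packaged MLflow server typically looks roughly like the sketch below; every name, URI, and secret here is a placeholder, not the reporter's actual configuration:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: mlflow-onnx                  # placeholder name
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        implementation: MLFLOW_SERVER
        modelUri: s3://mlflow/onnx-model          # placeholder bucket/path
        envSecretRefName: seldon-init-container-secret  # placeholder secret
```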
This snippet was needed to avoid the already fixed issue mlflow/mlflow#9185.
The output of the script above is:

```
{'output_label': array([1], dtype=int64), 'output_probability': [{0: 0.009999999776482582, 1: 0.6399998664855957, 2: 0.3499999940395355}]}
```
which is the expected output, and what should be returned when requesting a prediction from the SeldonDeployment server.
We have also added the following snippet, but no extra information is shown in the logs.
To reproduce
Steps shown above.
Expected behaviour
We should get a response containing the prediction. We have already tested deployments of models created with other frameworks (Keras, scikit-learn, PyTorch, etc.), all registered in MLflow, and all of them work.
Environment
Charmed Kubeflow
Kubernetes Cluster Version: 1.24
Deployed Seldon System Images:
```
$ microk8s kubectl get --namespace seldon-system deploy seldon-controller-manager -o yaml | grep seldonio
    value: docker.io/seldonio/seldon-core-executor:1.17.1
    image: docker.io/seldonio/seldon-core-operator:1.17.1
```
Model Details
Model generated with the following function: