Error: unpack_from requires a buffer of at least ... bytes for unpacking ... bytes at offset 4 (actual buffer size is ...) #7391

Open
adisabolic opened this issue Jun 28, 2024 · 3 comments

adisabolic commented Jun 28, 2024

Description
I get the following error when using a TYPE_STRING input in a Triton model with the Python backend:

{'error': "Failed to process the request(s) for model instance 'string_test_0', message: error: unpack_from requires a buffer of at least 50529031 bytes for unpacking 50529027 bytes at offset 4 (actual buffer size is 7)\n\nAt:\n  /opt/tritonserver/backends/python/triton_python_backend_utils.py(117): deserialize_bytes_tensor\n"}

Looking at /opt/tritonserver/backends/python/triton_python_backend_utils.py, line 117 is:

sb = struct.unpack_from("<{}s".format(l), val_buf, offset)[0]
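
For context, the numbers in the error are consistent with a corrupted length prefix: each element of a BYTES/TYPE_STRING tensor is serialized as a 4-byte little-endian length followed by the raw bytes, so the 3-byte string "Hi!" produces a 7-byte buffer. A rough sketch of the unpacking logic around that line (simplified for illustration, not the exact implementation):

import struct

val_buf = b"\x03\x00\x00\x00Hi!"  # expected framing: 4-byte length prefix + payload (7 bytes total)
offset = 0
l = struct.unpack_from("<I", val_buf, offset)[0]  # 3
offset += 4
sb = struct.unpack_from("<{}s".format(l), val_buf, offset)[0]  # b'Hi!'

# If the length prefix were corrupted to b"\x03\x03\x03\x03", l would become
# 0x03030303 == 50529027, and the second unpack_from would need 4 + 50529027
# == 50529031 bytes, which matches the error above.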

Triton Information
What version of Triton are you using?

I am using the Docker base image nvcr.io/nvidia/tritonserver:24.04-py3 with CUDA 12.4, on an Ubuntu 22.04.4 LTS (Jammy Jellyfish) machine. The Python version is 3.10.12.

To Reproduce

I was able to create a minimal example that reproduces the error.

config.pbtxt of model

name: "string_test"
backend: "python"

input [
  {
    name: "INPUT0"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]

model.py of model

import sys
import json

sys.path.append('../../')
import triton_python_backend_utils as pb_utils
import numpy as np


class TritonPythonModel:
    """This model always returns the input that it has received.
    """

    def initialize(self, args):
        self.model_config = json.loads(args['model_config'])

    def execute(self, requests):
        """ This function is called on inference request.
        """
        responses = []
        for request in requests:
            in_0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out_tensor_0 = pb_utils.Tensor("OUTPUT0", in_0.as_numpy().astype(np.object_))
            print(f"INPUT VALUE: {in_0.as_numpy()[0].decode()}")
            responses.append(pb_utils.InferenceResponse([out_tensor_0]))
        return responses 

example client script

import requests

URL = "http://localhost:8120/v2/models/string_test/infer"


def main():
    data = {
        "name": "string_test",
        "inputs": [
            {
                "name": "INPUT0",
                "shape": [1],
                "datatype": "BYTES",
                "data": ["Hi!"]
            }
        ]
    }
    res = requests.post(URL, json=data)
    print(res.json())
    return


if __name__ == "__main__":
    main()

I have tried various things: changing CUDA and Triton versions, changing NVIDIA package versions, etc. None of them worked.

ohad83 commented Jul 1, 2024

Something similar happened to me as well; we figured out a few things:

First and most important, the problem seems to be the protobuf library version. Downgrading Python's protobuf library from 5.27.2 to 5.27.1 fixed it for us.

Second, the problem seems to be in parsing the string/bytes length. The value is transported as a 1-byte length followed by the buffer, but instead of zero-extending that byte to 4 bytes, the byte gets repeated 4 times. For example, your data is 3 bytes long ("Hi!"), so instead of the size being parsed as 0x00000003 it is parsed as 0x03030303, which is the 50529027 that appears in your error.
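
A standalone sketch (not Triton code) showing how the repeated length byte produces exactly that number:

import struct

payload = b"Hi!"  # the 3-byte string from the example request

correct = struct.pack("<I", len(payload)) + payload  # b'\x03\x00\x00\x00Hi!' (zero-extended length)
broken = bytes([len(payload)]) * 4 + payload         # b'\x03\x03\x03\x03Hi!' (length byte repeated)

print(struct.unpack_from("<I", correct, 0)[0])  # 3
print(struct.unpack_from("<I", broken, 0)[0])   # 50529027 == 0x03030303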

I'm not sure whether the problem is in the protobuf library itself or specifically in how Triton uses it, but since I've only seen the problem in Triton, I tend to think it's in the library usage.

adisabolic (Author) commented

@ohad83 Thank you for your help, but unfortunately downgrading protobuf didn't work for me. I have tried several older versions of protobuf without success.

Although it is not the best solution, I did manage to get it working by downgrading to Triton version 2.42 (Docker container version 24.01); that was the newest Triton version that worked for me.

SunXuan90 commented Jul 2, 2024

I used 24.05 with conda-packed environments for Python models. At first everything was fine; then I repacked the environment and this problem appeared. I fixed it by using pip inside the container to install the packages and sticking to those versions when packing. Something definitely broke with recent updates. I haven't tried 24.06 yet.
