Error during PyTorch model deployment in Triton Server: PytorchStreamReader failed locating file constants.pkl #7454
When attempting to load a PyTorch model (model.pth) in Triton Server (version 23.01-py3), the following error occurs:
E0718 11:47:30.586692 1 model_lifecycle.cc:597] failed to load 'CombinedFaceAudioModel' version 1: Internal: failed to load model 'CombinedFaceAudioModel': PytorchStreamReader failed locating file constants.pkl: file not found
Steps to Reproduce:
Docker environment: Windows 10
Model repository structure: a model_repository directory mounted into the container (a typical layout is sketched below)
Server launch command:
docker run --rm -p8000:8000 -p8001:8001 -p8002:8002 -v C:\Users\manis\Desktop\Triton_Emotion_Model\model_repository:/model_repository nvcr.io/nvidia/tritonserver:23.01-py3 tritonserver --model-repository=/model_repository
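For reference, a minimal single-model layout for Triton's PyTorch backend usually looks like the sketch below. This is an assumed baseline rather than my exact tree: only the model name CombinedFaceAudioModel comes from the error message, and the config.pbtxt contents are placeholders.

```
model_repository/
└── CombinedFaceAudioModel/
    ├── config.pbtxt        # platform: "pytorch_libtorch", input/output definitions, etc.
    └── 1/
        └── model.pt        # TorchScript archive (default filename for the PyTorch backend)
```

Note that the PyTorch backend looks for model.pt by default; a different filename such as model.pth would have to be declared with default_model_filename in config.pbtxt.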
Expected Behavior:
The model CombinedFaceAudioModel should load successfully without any errors.
Actual Behavior:
The following error is reported:
Internal: failed to load model 'CombinedFaceAudioModel': PytorchStreamReader failed locating file constants.pkl: file not found
The full output is in the attached ErrorMessage.txt file.
ErrorMessage.txt
Additional Context:
It appears that Triton expects a file named constants.pkl, which the PyTorch backend references while loading the model, but this file is not present in the model directory. Is there a specific requirement or configuration needed to handle constants.pkl for PyTorch models in Triton Server, or is there something wrong with the configuration I am using?
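For context, my understanding is that constants.pkl is one of the entries inside a TorchScript zip archive, and Triton's PyTorch (libtorch) backend loads models with torch.jit.load(); if that is right, the error could mean my model.pth was written with plain torch.save() rather than exported to TorchScript. Below is a minimal sketch of the re-export I could try; CombinedFaceAudioModel and the example input shapes are placeholders for my actual module and data, not the real definitions.

```python
import torch

# Placeholder for the actual model class and checkpoint; names/shapes are illustrative.
model = CombinedFaceAudioModel()
model.load_state_dict(torch.load("model.pth", map_location="cpu"))
model.eval()

# Example inputs with assumed shapes (face frames + audio features).
example_face = torch.randn(1, 3, 224, 224)
example_audio = torch.randn(1, 1, 128, 128)

# Trace (or torch.jit.script) the module to produce a TorchScript archive,
# which is the zip format containing constants.pkl that torch.jit.load expects.
scripted = torch.jit.trace(model, (example_face, example_audio))
scripted.save("model_repository/CombinedFaceAudioModel/1/model.pt")

# Sanity check: this should load without the constants.pkl error.
torch.jit.load("model_repository/CombinedFaceAudioModel/1/model.pt")
```

If tracing is not suitable because the model has data-dependent control flow, torch.jit.script(model) would be the alternative export path.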
Thanks
Manish Thakur