Description
My model has two inputs, one of size (1, 5, 256, 256) and the other of size (1, 3, 512, 512), and it has eight outputs. I need to run inference with the TensorRT (.trt) engine and retrieve the results. The sample code I have found so far handles only a single input, so I would like to know how to write inference code for a model with multiple inputs and outputs.
My code
import numpy as np
import tensorrt as trt

runtime = trt.Runtime(trt.Logger())
inference_time = []
with open('/home/AGX-exte/qry/AGX_EXP/model_oxnn/model.trt', 'rb') as f, \
        runtime.deserialize_cuda_engine(f.read()) as engine:
    ...  # single-input inference loop, appending timings to inference_time
inference_time = np.array(inference_time)
np.save('', inference_time)
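For a model with any number of inputs and outputs, the usual pattern is to allocate one device buffer per engine binding and copy inputs/outputs by binding name. A minimal sketch follows, assuming PyCUDA is installed and the older binding-based TensorRT Python API (available up to TensorRT 8.x; it is deprecated in favor of the tensor-name API in 8.5+). The function name `infer_trt` and the helper `binding_nbytes` are illustrative, not part of TensorRT.

```python
import numpy as np

def binding_nbytes(shape, dtype):
    """Bytes required for one binding buffer of the given shape and dtype."""
    return int(np.prod(shape)) * np.dtype(dtype).itemsize

def infer_trt(engine_path, input_arrays):
    """Run a TensorRT engine with multiple inputs and outputs.

    input_arrays: dict mapping input binding names to numpy arrays.
    Returns a dict mapping output binding names to numpy arrays.
    """
    import tensorrt as trt            # assumed: TensorRT with binding API
    import pycuda.driver as cuda      # assumed: pip install pycuda
    import pycuda.autoinit            # noqa: F401 -- creates a CUDA context

    logger = trt.Logger(trt.Logger.WARNING)
    runtime = trt.Runtime(logger)
    with open(engine_path, 'rb') as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    bindings = [None] * engine.num_bindings
    device_bufs, outputs = {}, {}

    # Allocate one device buffer per binding, inputs and outputs alike.
    for i in range(engine.num_bindings):
        name = engine.get_binding_name(i)
        dtype = trt.nptype(engine.get_binding_dtype(i))
        shape = tuple(context.get_binding_shape(i))
        buf = cuda.mem_alloc(binding_nbytes(shape, dtype))
        device_bufs[name] = buf
        bindings[i] = int(buf)
        if engine.binding_is_input(i):
            host = np.ascontiguousarray(input_arrays[name].astype(dtype))
            cuda.memcpy_htod(buf, host)          # copy each input to the GPU
        else:
            outputs[name] = np.empty(shape, dtype=dtype)

    context.execute_v2(bindings)                 # synchronous execution

    for name, host in outputs.items():           # copy all eight outputs back
        cuda.memcpy_dtoh(host, device_bufs[name])
    return outputs
```

Usage would look like `infer_trt('model.trt', {"input_a": x1, "input_b": x2})`, where `"input_a"`/`"input_b"` are placeholders for the real input binding names (they can be listed with `engine.get_binding_name(i)`).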
Operating System:
Linux
Python Version (if applicable):
3.6.9
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?:
Can this model run on other frameworks? For example, run the ONNX model with ONNX Runtime (
polygraphy run <model.onnx> --onnxrt
):
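For that cross-check, ONNX Runtime already handles multiple inputs and outputs through a feed dict. A minimal sketch, assuming `onnxruntime` is installed; the input names `"input_a"`/`"input_b"` are placeholders and must be replaced by the model's actual graph input names:

```python
import numpy as np

def run_onnx(model_path, feeds):
    """Run the ONNX model outside TensorRT; feeds maps input names to arrays."""
    import onnxruntime as ort  # assumed: pip install onnxruntime
    sess = ort.InferenceSession(model_path)
    return sess.run(None, feeds)  # None = fetch every output (all eight here)

# Dummy inputs matching the two shapes from this issue (names are placeholders).
feeds = {
    "input_a": np.random.rand(1, 5, 256, 256).astype(np.float32),
    "input_b": np.random.rand(1, 3, 512, 512).astype(np.float32),
}
```

Comparing these outputs against the TensorRT results helps rule out engine-conversion problems.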