Hey,
I was successful in running the .onnx model through OpenCV and Python with an input of size (3, 640, 480), which becomes (1, 3, 480, 640) after the blobFromImage() call:

```python
print(img.shape)  # (3, 640, 480)
blob = cv2.dnn.blobFromImage(img, ...)
self.net.setInput(blob)
print(blob.shape)  # (1, 3, 480, 640)
```

But I wish instead to run the model with onnxruntime inference, which instead requires an input of size (10, 3, 32, 32)? Am I supposed to reshape my input to match that?
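For reference, this is how I'm querying the expected input shape with onnxruntime; a minimal sketch, assuming the file is named model.onnx and using a dummy array just to exercise the session:

```python
import numpy as np
import onnxruntime as ort

# Load the model and inspect the input signature it declares.
sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # this is where I see [10, 3, 32, 32]

# If the declared shape is fully fixed, the feed must match it exactly.
dummy = np.zeros((10, 3, 32, 32), dtype=np.float32)
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```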
Bumping this question; I can't make sense of that input size either. 32×32 makes no sense for image object recognition, and neither does a batch size of 10.
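If I had to guess, the model was exported with a fixed-shape dummy input and no dynamic axes, which would bake (10, 3, 32, 32) into the graph. A minimal re-export sketch, assuming the original model is a PyTorch module (the names model and model.onnx here are hypothetical):

```python
import torch

# Assumed: `model` is the torch.nn.Module the .onnx file came from.
# Marking batch/height/width as dynamic keeps the dummy's shape
# (10, 3, 32, 32) from being hard-coded into the exported graph.
dummy = torch.zeros(10, 3, 32, 32)
torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={
        "input": {0: "batch", 2: "height", 3: "width"},
        "output": {0: "batch"},
    },
)
```

With an export like that, onnxruntime would accept a (1, 3, 480, 640) blob just as OpenCV does.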