Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[Unsqueeze_93...Softmax_2088]} #1917
@Xinchengzelin , the
Because the model deployment environment is TensorRT 8.2, I couldn't change it.
I found the release notes:
Unfortunately, my problem seems related to it. Could you suggest a solution? The error happens in C++ code when
```
[04/13/2022-11:48:34] [W] [TRT] Skipping tactic 0 due to Myelin error: Copy operation "concat" has 513 inputs.
[04/13/2022-11:48:34] [E] Error[10]: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[MPS_VAR_3/strided_slice_1__843:0[Constant]...strided_slice_8__923]}.)
[04/13/2022-11:48:34] [E] Error[2]: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
[04/13/2022-11:48:34] [E] Engine could not be created from network
[04/13/2022-11:48:34] [E] Building engine failed
[04/13/2022-11:48:34] [E] Failed to create engine from model.
[04/13/2022-11:48:34] [E] Engine set up failed
```

I hit a similar problem here. Can it be solved by any method other than upgrading to TRT 8.4?
Yes, we changed the default workspace to the max GPU memory starting from 8.4, so there is no need to set the workspace memory in 8.4.
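To make the workspace sizes discussed in this thread concrete, here is a small sketch of the byte arithmetic involved. Only the arithmetic runs here; the commented `config.max_workspace_size` line refers to the pre-8.4 TensorRT builder-config attribute and is shown for context only.

```python
# A TensorRT workspace size is a byte count. Before 8.4 the default was
# small, so builders often had to raise it explicitly; from 8.4 on it
# defaults to the full GPU memory. 12 GiB (as used on the T4 above):
workspace_bytes = 12 << 30  # 12 * 2**30

print(workspace_bytes)               # total bytes for a 12 GiB workspace
print(workspace_bytes // (1 << 20))  # the same quantity in MiB, the unit
                                     # trtexec's --workspace flag expects

# With the pre-8.4 TensorRT Python API this would be applied roughly as:
#   config.max_workspace_size = workspace_bytes
```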
@handoku , sorry, it's better to upgrade to TRT 8.4.
@ttyio Yes, I serialize the CUDA engine using trtexec and load it from C++.
@Xinchengzelin , trtexec also supports --saveEngine and --loadEngine. Did you also hit the failure when using trtexec? If not, you can check the trtexec source to see what differs between your code and trtexec.
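The save/load round trip suggested above can be sketched with trtexec directly (a CLI fragment, not runnable here; the file names are placeholders, and `--workspace` takes MiB and is only needed before 8.4):

```shell
# Build an engine from the ONNX model and serialize it to disk:
trtexec --onnx=model.onnx --saveEngine=model.engine --workspace=12288

# Later, deserialize the saved engine and benchmark inference with it;
# if this works but your own C++ loader fails, the bug is in the loader:
trtexec --loadEngine=model.engine
```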
@ttyio Thank you very much! I could use
Yes @Xinchengzelin , the main code is in 2 dirs
@ttyio I have just tried with TRT 8.4.0.6 and still get the same error.
Is there any way to prevent Myelin from doing this op fusion? It seems that Myelin fused some nodes into a single one; however, it can't find a corresponding implementation to run it.
@handoku , we cannot disable Myelin.
@ttyio hello, I have already set workspace=12GB on a T4. The model can be found here (model.onnx); thanks for looking into this.
Thanks @handoku , internal issue created to track the failure.
@handoku , your issue is fixed in 8.4 GA. Please upgrade to 8.4 GA when we release the binary at https://developer.nvidia.com/tensorrt, thanks!
Thanks for your advice. I compared my code with the trtexec code and changed it, and now it works. Thank you very much!
@ttyio thanks, and when will the GA version be released?
@handoku , GA should be available in early June, thanks!
I had a similar problem, but fixed it by adding /path/to/TensorRT-8.2.4.2/lib to LD_LIBRARY_PATH.
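For reference, the fix described here amounts to something like the following (the TensorRT path is the placeholder from the comment; substitute your actual install location):

```shell
# Prepend the TensorRT 8.2.4.2 runtime libraries to the dynamic linker
# search path so the matching libnvinfer is found before any other
# installed TensorRT version:
export LD_LIBRARY_PATH=/path/to/TensorRT-8.2.4.2/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

# Sanity check: the TensorRT lib dir should now lead the search path.
echo "$LD_LIBRARY_PATH"
```

Note the `${LD_LIBRARY_PATH:+:...}` idiom, which avoids a trailing colon when the variable was previously unset.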
Hi, I hit a similar problem. I have tried "--workspace=32", but the problem still occurs.
Hi, I hit a similar problem.
UPD: When I run it, I get:
`CUDA error 2 allocating 4362077693-byte buffer:`
I use trtexec (TensorRT 8.2.4.2 GA, CUDA 11.4) to convert my ONNX model to a TRT engine, and it shows the error below. I tried TensorRT 8.4 EA with CUDA 11.5, and the conversion succeeds there. I also checked that the supported operators in 8.2 GA include Unsqueeze and Softmax, the same as in 8.4 EA, so I don't know why this error happens.