Replies: 3 comments 2 replies
-
So, it shouldn't be that hard. I'm not sure which axis the transpose is applied on, but if it is only the last axis, you should be able to start from the convolution template — https://github.com/webonnx/wonnx/blob/master/wonnx/templates/pool/conv.wgsl — and apply the `transpose` builtin function (https://www.w3.org/TR/WGSL/#matrix-builtin-functions) before computing the product sum, and you should be good. If the transpose is on a different axis than the last one, you only have to redefine the chunk size based on the transposition formula. In case you don't already know: each n-dimensional array is represented by a single flat array concatenating each dimension. Don't hesitate to ask more questions 👍
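To make the "one flat array" and "redefine the chunk size" points concrete, here is a minimal sketch in Rust of row-major flattening and how transposing axes amounts to permuting strides. All names here are illustrative, not part of the wonnx API:

```rust
// Sketch: an n-dimensional tensor stored as one flat array (row-major),
// and how a transpose is just a permutation of the strides.
// `strides` and `flat_index` are hypothetical helpers, not wonnx code.

/// Compute row-major strides for a given shape.
fn strides(shape: &[usize]) -> Vec<usize> {
    let mut s = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        s[i] = s[i + 1] * shape[i + 1];
    }
    s
}

/// Flatten a multi-dimensional index into an offset in the flat array.
fn flat_index(idx: &[usize], strides: &[usize]) -> usize {
    idx.iter().zip(strides).map(|(i, s)| i * s).sum()
}

fn main() {
    // A 2x3x4 tensor: row-major strides are [12, 4, 1].
    let s = strides(&[2, 3, 4]);
    assert_eq!(s, vec![12, 4, 1]);

    // Element [1, 2, 3] lives at offset 1*12 + 2*4 + 3*1 = 23.
    assert_eq!(flat_index(&[1, 2, 3], &s), 23);

    // Transposing the last two axes means swapping their strides when
    // computing the offset ("redefining the chunk size"): the 2x4x3
    // transposed view reads the same flat buffer through strides [12, 1, 4].
    let t = vec![s[0], s[2], s[1]];
    assert_eq!(flat_index(&[1, 3, 2], &t), 23); // same element, transposed view

    println!("ok");
}
```

The same stride arithmetic is what a WGSL shader would do when indexing into its storage buffer, just written out by hand.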
-
Note that this might not be the most optimised way of implementing it. If this operator is used a lot in your model, you might look at the other conv templates — https://github.com/webonnx/wonnx/tree/master/wonnx/templates/pool — where we use group_size, vectorisation and padding to reduce the computation time.
-
After a week of studying wonnx and trying to run my model, I find that the only missing piece is ConvTranspose. I'm not really familiar with either the PyTorch CUDA implementation or WGSL. So, is this op easy to implement? Any instructions, information, or learning material would be appreciated. Thanks!