diff --git a/docs/backends/ncnn.md b/docs/backends/ncnn.md
index bf644cc4f..1ec3f0286 100644
--- a/docs/backends/ncnn.md
+++ b/docs/backends/ncnn.md
@@ -24,6 +24,7 @@ You should ensure your gcc satisfies `gcc >= 6`.
 cmake -DNCNN_VULKAN=ON -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_PYTHON=ON -DNCNN_BUILD_TOOLS=ON -DNCNN_BUILD_BENCHMARK=ON -DNCNN_BUILD_TESTS=ON ..
 make install
 ```
+
 - Install pyncnn module
 ```bash
 cd ncnn/python
@@ -42,7 +43,7 @@ cmake -DBUILD_NCNN_OPS=ON ..
 make -j$(nproc)
 ```
 
-If you haven't installed NCNN in default path, please add `-DNCNN_DIR` flag in cmake.
+If you haven't installed NCNN in the default path, please add the `-DNCNN_DIR` flag to cmake.
 
 ```bash
 cmake -DBUILD_NCNN_OPS=ON -DNCNN_DIR=${NCNN_DIR} ..
@@ -50,16 +51,19 @@ If you haven't installed NCNN in default path, please add `-DNCNN_DIR` flag in c
 ```
 
 ### Convert model
-- This follows the tutorial on [How to convert model](tutorials/how_to_convert_model.md).
+
+- This follows the tutorial on [How to convert model](../tutorials/how_to_convert_model.md).
 - The converted model has two files: `.param` and `.bin`, as model structure file and weight file respectively.
 
 ### FAQs
+
 1. When running ncnn models for inference with custom ops, it fails and shows the error message like:
-   ```
+   ```bash
    TypeError: register mm custom layers(): incompatible function arguments. The following argument types are supported:
    1.(ar0: ncnn:Net) -> int
    Invoked with:
    ```
+
    This is because of the failure to bind ncnn C++ library to pyncnn. You should build pyncnn from C++ ncnn source code, but not by `pip install`
diff --git a/docs/backends/onnxruntime.md b/docs/backends/onnxruntime.md
index 483e27ac7..c13453cd9 100644
--- a/docs/backends/onnxruntime.md
+++ b/docs/backends/onnxruntime.md
@@ -1,3 +1,79 @@
 ## ONNX Runtime Support
+
+### Introduction to ONNX Runtime
+
+**ONNX Runtime** is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks.
+Check its [github](https://github.com/microsoft/onnxruntime) for more information.
+
+### Installation
+
+*Please note that only the CPU version of **onnxruntime>=1.8.1** on the Linux platform is supported for now.*
+
+- Install the ONNX Runtime Python package
+
+```bash
+pip install onnxruntime==1.8.1
+```
+
+### Build custom ops
+
+#### Prerequisite
+
+- Download `onnxruntime-linux` from ONNX Runtime [releases](https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1), extract it, expose `ONNXRUNTIME_DIR` and finally add the lib path to `LD_LIBRARY_PATH` as below:
+
+```bash
+wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
+
+tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
+cd onnxruntime-linux-x64-1.8.1
+export ONNXRUNTIME_DIR=$(pwd)
+export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
+```
+
+#### Build on Linux
+
+```bash
+cd ${MMDEPLOY_DIR} # To MMDeploy root directory
+mkdir build
+cd build
+cmake -DBUILD_ONNXRUNTIME_OPS=ON -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
+make -j10
+```
+
+### How to convert a model
+
+- You can follow the instructions in the tutorial [How to convert model](../tutorials/how_to_convert_model.md)
+
+### List of supported custom ops
+
+| Operator | CPU | GPU | MMDeploy Releases |
+| :----------------------------------------------------------------------------: | :---: | :---: | :---------------: |
+| [RoIAlign](../ops/onnxruntime.md#roialign) | Y | N | master |
+| [grid_sampler](../ops/onnxruntime.md#grid_sampler) | Y | N | master |
+| [MMCVModulatedDeformConv2d](../ops/onnxruntime.md#mmcvmodulateddeformconv2d) | Y | N | master |
+
+### How to add a new custom op
+
+#### Reminder
+
+- The custom operator is not included in the [supported operator list](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md) of ONNX Runtime.
+- The custom operator should be able to be exported to ONNX.
+
+#### Main procedures
+
+Take the custom operator `roi_align` for example.
+
+1. Create a `roi_align` directory in the ONNX Runtime source directory `backend_ops/onnxruntime/`
+2. Add header and source files into the `roi_align` directory `backend_ops/onnxruntime/roi_align/`
+3. Add unit tests into `tests/test_ops/test_ops.py`
+   Check [here](../../tests/test_ops/test_ops.py) for examples.
+
+**Finally, you are welcome to send us a PR adding custom operators for ONNX Runtime in MMDeploy.** :nerd_face:
+
+### FAQs
+
+- None
+
+### References
+
+- [How to export Pytorch model with custom op to ONNX and run it in ONNX Runtime](https://github.com/onnx/tutorials/blob/master/PyTorchCustomOperator/README.md)
+- [How to add a custom operator/kernel in ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/docs/AddingCustomOp.md)
diff --git a/docs/ops/onnxruntime.md b/docs/ops/onnxruntime.md
index 98f6fa489..2e4d741e0 100644
--- a/docs/ops/onnxruntime.md
+++ b/docs/ops/onnxruntime.md
@@ -1,3 +1,138 @@
 ## ONNX Runtime Ops
-### Installation
+
+- [ONNX Runtime Ops](#onnx-runtime-ops)
+  - [RoIAlign](#roialign)
+    - [Description](#description)
+    - [Parameters](#parameters)
+    - [Inputs](#inputs)
+    - [Outputs](#outputs)
+    - [Type Constraints](#type-constraints)
+  - [grid_sampler](#grid_sampler)
+    - [Description](#description-1)
+    - [Parameters](#parameters-1)
+    - [Inputs](#inputs-1)
+    - [Outputs](#outputs-1)
+    - [Type Constraints](#type-constraints-1)
+  - [MMCVModulatedDeformConv2d](#mmcvmodulateddeformconv2d)
+    - [Description](#description-2)
+    - [Parameters](#parameters-2)
+    - [Inputs](#inputs-2)
+    - [Outputs](#outputs-2)
+    - [Type Constraints](#type-constraints-2)
+
+### RoIAlign
+
+#### Description
+
+Perform RoIAlign on the output feature map, used in the bbox_head of most two-stage detectors.
+
+#### Parameters
+
+| Type    | Parameter        | Description |
+| ------- | ---------------- | ----------- |
+| `int`   | `output_height`  | height of output roi |
+| `int`   | `output_width`   | width of output roi |
+| `float` | `spatial_scale`  | used to scale the input boxes |
+| `int`   | `sampling_ratio` | number of input samples to take for each output sample. `0` means to take samples densely for current models. |
+| `str`   | `mode`           | pooling mode in each bin. `avg` or `max` |
+| `int`   | `aligned`        | If `aligned=0`, use the legacy implementation in MMDetection. Else, align the results more perfectly. |
+
+#### Inputs
+
+<dl>
+<dt><tt>input</tt>: T</dt>
+<dd>Input feature map; 4-D tensor of shape (N, C, H, W), where N is the batch size, C is the number of channels, and H and W are the height and width of the data.</dd>
+<dt><tt>rois</tt>: T</dt>
+<dd>RoIs (Regions of Interest) to pool over; 2-D tensor of shape (num_rois, 5) given as [[batch_index, x1, y1, x2, y2], ...]. The RoIs' coordinates are in the coordinate system of the input.</dd>
+</dl>
+
+#### Outputs
+
+<dl>
+<dt><tt>feat</tt>: T</dt>
+<dd>RoI pooled output, 4-D tensor of shape (num_rois, C, output_height, output_width). The r-th batch element feat[r-1] is a pooled feature map corresponding to the r-th RoI RoIs[r-1].</dd>
+</dl>
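As a reading aid, the RoIAlign contract described above can be sketched in plain NumPy. This is an illustrative reference implementation only (`avg` mode, `aligned=1`), not the ONNX Runtime custom kernel shipped by MMDeploy:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample a 2-D feature map (H, W) at continuous (y, x)."""
    H, W = feat.shape
    if y < -1 or y > H or x < -1 or x > W:
        return 0.0
    y = min(max(y, 0.0), H - 1)
    x = min(max(x, 0.0), W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    ly, lx = y - y0, x - x0
    return (feat[y0, x0] * (1 - ly) * (1 - lx) + feat[y0, x1] * (1 - ly) * lx
            + feat[y1, x0] * ly * (1 - lx) + feat[y1, x1] * ly * lx)

def roi_align(input, rois, output_size, spatial_scale=1.0, sampling_ratio=0):
    """avg-mode, aligned RoIAlign over rois = [[batch_index, x1, y1, x2, y2], ...]."""
    N, C, H, W = input.shape
    ph, pw = output_size
    out = np.zeros((len(rois), C, ph, pw), dtype=input.dtype)
    for r, (b, x1, y1, x2, y2) in enumerate(rois):
        b = int(b)
        # aligned=1: shift by half a pixel before pooling
        x1, y1 = x1 * spatial_scale - 0.5, y1 * spatial_scale - 0.5
        x2, y2 = x2 * spatial_scale - 0.5, y2 * spatial_scale - 0.5
        bin_h, bin_w = (y2 - y1) / ph, (x2 - x1) / pw
        # sampling_ratio=0: take samples densely based on the bin size
        sy = sampling_ratio if sampling_ratio > 0 else max(int(np.ceil((y2 - y1) / ph)), 1)
        sx = sampling_ratio if sampling_ratio > 0 else max(int(np.ceil((x2 - x1) / pw)), 1)
        for c in range(C):
            for i in range(ph):
                for j in range(pw):
                    acc = 0.0
                    for iy in range(sy):
                        for ix in range(sx):
                            y = y1 + i * bin_h + (iy + 0.5) * bin_h / sy
                            x = x1 + j * bin_w + (ix + 0.5) * bin_w / sx
                            acc += bilinear_sample(input[b, c], y, x)
                    out[r, c, i, j] = acc / (sy * sx)  # avg pooling over samples
    return out
```

Pooling a constant feature map returns that constant, which is a quick sanity check of the bilinear averaging.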
+
+#### Type Constraints
+
+- T:tensor(float32)
+
+### grid_sampler
+
+#### Description
+
+Sample `input` at the pixel locations specified by `grid`.
+
+#### Parameters
+
+| Type  | Parameter            | Description |
+| ----- | -------------------- | ----------- |
+| `int` | `interpolation_mode` | Interpolation mode to calculate output values. (0: `bilinear`, 1: `nearest`) |
+| `int` | `padding_mode`       | Padding mode for outside grid values. (0: `zeros`, 1: `border`, 2: `reflection`) |
+| `int` | `align_corners`      | If `align_corners=1`, the extrema (`-1` and `1`) are considered as referring to the center points of the input's corner pixels. If `align_corners=0`, they are instead considered as referring to the corner points of the input's corner pixels, making the sampling more resolution agnostic. |
+
+#### Inputs
+
+<dl>
+<dt><tt>input</tt>: T</dt>
+<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, and inH and inW are the height and width of the data.</dd>
+<dt><tt>grid</tt>: T</dt>
+<dd>Input offset; 4-D tensor of shape (N, outH, outW, 2), where outH and outW are the height and width of the offset and output.</dd>
+</dl>
+
+#### Outputs
+
+<dl>
+<dt><tt>output</tt>: T</dt>
+<dd>Output feature; 4-D tensor of shape (N, C, outH, outW).</dd>
+</dl>
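The grid_sampler contract above (bilinear interpolation, zeros padding) can be sketched in plain NumPy; this is an illustrative reference, not the registered kernel:

```python
import numpy as np

def grid_sample(input, grid, align_corners=1):
    """Bilinear grid sampling with zeros padding.

    grid[n, i, j] holds normalized (x, y) coordinates in [-1, 1].
    """
    N, C, H, W = input.shape
    _, outH, outW, _ = grid.shape
    out = np.zeros((N, C, outH, outW), dtype=input.dtype)
    for n in range(N):
        for i in range(outH):
            for j in range(outW):
                gx, gy = grid[n, i, j]
                if align_corners:
                    # -1/1 map to the centers of the corner pixels
                    x = (gx + 1) / 2 * (W - 1)
                    y = (gy + 1) / 2 * (H - 1)
                else:
                    # -1/1 map to the outer corners of the corner pixels
                    x = ((gx + 1) * W - 1) / 2
                    y = ((gy + 1) * H - 1) / 2
                x0, y0 = int(np.floor(x)), int(np.floor(y))
                # accumulate the four bilinear neighbors; out-of-range
                # neighbors contribute zero (zeros padding)
                for dy in (0, 1):
                    for dx in (0, 1):
                        yy, xx = y0 + dy, x0 + dx
                        if 0 <= yy < H and 0 <= xx < W:
                            w = (1 - abs(y - yy)) * (1 - abs(x - xx))
                            out[n, :, i, j] += w * input[n, :, yy, xx]
    return out
```

With `align_corners=1`, an identity grid (normalized pixel-center coordinates) reproduces the input exactly, which makes the coordinate convention easy to verify.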
+
+#### Type Constraints
+
+- T:tensor(float32, Linear)
+
+### MMCVModulatedDeformConv2d
+
+#### Description
+
+Perform Modulated Deformable Convolution on the input feature; read [Deformable ConvNets v2: More Deformable, Better Results](https://arxiv.org/abs/1811.11168?from=timeline) for details.
+
+#### Parameters
+
+| Type           | Parameter           | Description |
+| -------------- | ------------------- | ----------- |
+| `list of ints` | `stride`            | The stride of the convolving kernel. (sH, sW) |
+| `list of ints` | `padding`           | Paddings on both sides of the input. (padH, padW) |
+| `list of ints` | `dilation`          | The spacing between kernel elements. (dH, dW) |
+| `int`          | `deformable_groups` | Groups of deformable offsets. |
+| `int`          | `groups`            | Split the input into groups. `input_channel` should be divisible by the number of groups. |
+
+#### Inputs
+
+<dl>
+<dt><tt>inputs[0]</tt>: T</dt>
+<dd>Input feature; 4-D tensor of shape (N, C, inH, inW), where N is the batch size, C is the number of channels, and inH and inW are the height and width of the data.</dd>
+<dt><tt>inputs[1]</tt>: T</dt>
+<dd>Input offset; 4-D tensor of shape (N, deformable_groups * 2 * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
+<dt><tt>inputs[2]</tt>: T</dt>
+<dd>Input mask; 4-D tensor of shape (N, deformable_groups * kH * kW, outH, outW), where kH and kW are the height and width of the weight, and outH and outW are the height and width of the offset and output.</dd>
+<dt><tt>inputs[3]</tt>: T</dt>
+<dd>Input weight; 4-D tensor of shape (output_channel, input_channel, kH, kW).</dd>
+<dt><tt>inputs[4]</tt>: T, optional</dt>
+<dd>Input bias; 1-D tensor of shape (output_channel).</dd>
+</dl>
+
+#### Outputs
+
+<dl>
+<dt><tt>outputs[0]</tt>: T</dt>
+<dd>Output feature; 4-D tensor of shape (N, output_channel, outH, outW).</dd>
+</dl>
+
+#### Type Constraints
+
+- T:tensor(float32, Linear)
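The shape relations in the MMCVModulatedDeformConv2d tables above can be checked with a small helper. This is a hypothetical utility written for illustration (`mdcn_io_shapes` is not part of MMDeploy); it only applies the standard convolution output-size formula and the offset/mask channel counts stated above:

```python
def mdcn_io_shapes(n, in_ch, in_h, in_w, out_ch,
                   kernel=(3, 3), stride=(1, 1), padding=(1, 1),
                   dilation=(1, 1), deformable_groups=1):
    """Return the expected tensor shapes for MMCVModulatedDeformConv2d."""
    kh, kw = kernel
    sh, sw = stride
    ph, pw = padding
    dh, dw = dilation
    # standard dilated-convolution output size
    out_h = (in_h + 2 * ph - dh * (kh - 1) - 1) // sh + 1
    out_w = (in_w + 2 * pw - dw * (kw - 1) - 1) // sw + 1
    return {
        "inputs[0] (feature)": (n, in_ch, in_h, in_w),
        # one (dy, dx) pair per deformable group and kernel position
        "inputs[1] (offset)": (n, deformable_groups * 2 * kh * kw, out_h, out_w),
        # one modulation scalar per deformable group and kernel position
        "inputs[2] (mask)": (n, deformable_groups * kh * kw, out_h, out_w),
        "inputs[3] (weight)": (out_ch, in_ch, kh, kw),
        "inputs[4] (bias)": (out_ch,),
        "outputs[0]": (n, out_ch, out_h, out_w),
    }
```

For a 3x3 kernel with stride 1 and padding 1 on a 16x16 feature map, the offset tensor has 18 channels and the mask 9, matching the formulas in the Inputs section.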