This repository has been archived by the owner on May 28, 2024. It is now read-only.

Custom Op written in C API compilation Issue #112

Open

YogaVicky opened this issue Jun 7, 2022 · 3 comments

Comments

YogaVicky commented Jun 7, 2022

Hi there!
Is there any example where a custom op written using the C API was compiled successfully and executed after tf.load_op_library()?
I have gone through the entire kernels.h, ops.h, and c_api.h files in the official TensorFlow GitHub repo, but I am not able to figure it out.
Could someone give the command to compile and run a C API custom op file, along with the code?

Thanks,
Yoga


Rashed-MM commented Dec 28, 2022

@YogaVicky

Sure! Here is an example of a custom op that can be compiled and executed after being loaded with tf.load_op_library(). (Note: this example uses the TensorFlow C++ op framework headers, the usual route for custom ops, rather than the raw C API in c_api.h.)

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow;

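// Register the op interface: two int32 inputs, one int32 output
// whose shape matches the first input.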
REGISTER_OP("MyAdd")
    .Input("x: int32")
    .Input("y: int32")
    .Output("z: int32")
    .SetShapeFn([](shape_inference::InferenceContext* c) {
      c->set_output(0, c->input(0));
      return Status::OK();
    });

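// CPU kernel: element-wise addition over the flattened inputs.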
class MyAddOp : public OpKernel {
 public:
  explicit MyAddOp(OpKernelConstruction* context) : OpKernel(context) {}

  void Compute(OpKernelContext* context) override {
    // Grab the input tensors
    const Tensor& x_tensor = context->input(0);
    const Tensor& y_tensor = context->input(1);

    // Create an output tensor
    Tensor* z_tensor = nullptr;
    OP_REQUIRES_OK(context, context->allocate_output(0, x_tensor.shape(), &z_tensor));

    // Do the computation.
    int64_t num_elements = x_tensor.NumElements();
    auto x_flat = x_tensor.flat<int32_t>();
    auto y_flat = y_tensor.flat<int32_t>();
    auto z_flat = z_tensor->flat<int32_t>();
    for (int64_t i = 0; i < num_elements; ++i) {
      z_flat(i) = x_flat(i) + y_flat(i);
    }
  }
};

REGISTER_KERNEL_BUILDER(Name("MyAdd").Device(DEVICE_CPU), MyAddOp);


Rashed-MM commented Dec 28, 2022

To compile this custom op, you will need to build it as a shared library. One way to do this is to build it inside the TensorFlow source tree with Bazel, using TensorFlow's tf_custom_op_library rule (note: tf_cc_binary, by contrast, produces an executable rather than a loadable .so). Here is an example of how you might do this using bazel:

Create a file named BUILD in the same directory as your custom op code (for example, under tensorflow/core/user_ops in a TensorFlow source checkout), with the following contents:

load("//tensorflow:tensorflow.bzl", "tf_custom_op_library")

tf_custom_op_library(
    name = "my_add_op.so",
    srcs = ["my_add_op.cc"],
)


Rashed-MM commented Dec 28, 2022

Run the following command to build the shared library:

bazel build --config opt //path/to/custom/op:my_add_op.so

This will build a shared library named my_add_op.so under the corresponding package path in the bazel-bin directory.

To execute the custom op after loading it with `tf.load_op_library()`:
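Here is a minimal usage sketch (the ./my_add_op.so path is an assumption; point it at the .so your build actually produced):

import tensorflow as tf

# Load the compiled shared library; this registers MyAdd with the TF runtime.
my_add_module = tf.load_op_library('./my_add_op.so')  # path: adjust to your build output

# Each registered op is exposed as a snake_case wrapper: MyAdd -> my_add.
z = my_add_module.my_add([1, 2, 3], [4, 5, 6])
print(z.numpy())  # [5 7 9]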
