Hi, I am creating a C++ tensor in two different ways. The first method uses cppflow::decode_jpeg() to create a tensor in GPU memory; the second creates a tensor manually in system memory. The first method causes a memory leak, whereas the second does not. The code is as follows:
```cpp
#define USE_TENSORFLOW_API_GPU_API
#ifdef USE_TENSORFLOW_API_GPU_API
    input = cppflow::decode_jpeg(req->image().content());
    input = cppflow::expand_dims(input, 0);
#else
    std::vector<uint8_t> data;
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            QColor color = image.pixelColor(j, i);
            data.push_back(color.red());
            data.push_back(color.green());
            data.push_back(color.blue());
        }
    }
    input = cppflow::tensor(data, {1, h, w, 3});
#endif
```
I am not sure what causes the memory leak; could someone comment on this issue?
I read from this thread that GPU memory (allocated through the CUDA toolkit) does not get deallocated until the process terminates. Please advise if you know anything about this. Thanks.
@resetpointer Could you please provide minimal compilable code so that we can reproduce the leak? It would also help if you could provide the JPEG image. In addition, which version of the TF C API did you use?
Please also clarify whether the leak happens on the CPU, the GPU, or both; I am a bit confused.