
Asynchronous Grab - FrameStatus Incomplete after about 10 hours of running #174

sravan-greyscaleai opened this issue Nov 17, 2023 · 6 comments


@sravan-greyscaleai

Hello,

I'm trying to write a script based on the asynchronous_grab_opencv.py example. I need to process the captured images (some minor image processing, plus a DL model inference if certain conditions are met in the image-processing step) in the call() function of the script.
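For context, here is a simplified sketch of the structure I'm using (adapted from the example, not my exact code; the placeholder comment stands in for my processing, and values like buffer_count differ in my actual script):

```python
import time

from vimba import Vimba, Camera, Frame, FrameStatus


class Handler:
    def __call__(self, cam: Camera, frame: Frame):
        # Called by the API's streaming thread for every delivered frame.
        if frame.get_status() == FrameStatus.Complete:
            image = frame.as_opencv_image()
            # ... my image processing + conditional DL inference happens here ...
        cam.queue_frame(frame)  # hand the buffer back to the API


def main():
    with Vimba.get_instance() as vimba:
        cam = vimba.get_all_cameras()[0]
        with cam:
            cam.start_streaming(handler=Handler(), buffer_count=5)
            try:
                time.sleep(11 * 60 * 60)  # in my case the script runs for many hours
            finally:
                cam.stop_streaming()


if __name__ == '__main__':
    main()
```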

Now, after the code runs for some time (about 10-11 hours), I start getting back incomplete frames (FrameStatus.Incomplete). Until then, the code works as expected. I've also done some profiling with htop and discovered that the single-core CPU usage keeps increasing over time until the breaking point, where it crosses 100%.

Any suggestions on how to fix this / what I may be doing wrong?

@BernardoLuck

Hello,
Which camera are you using? Can you describe the hardware you are using?

@sravan-greyscaleai
Author

The camera is the Alvium U-319c USB camera. I verified it was using USB 3.

Regarding the hardware -

  • CPU - Intel® Core™ i7-9700E @ 2.60GHz × 8
  • Memory - 32 GB
  • GPU - Nvidia A4000

Some additional notes -

  • I need the camera to run at 20+ FPS, with a very low exposure time (590 µs), as I need to image fast-moving items.
  • I've modified this line in the script to use a buffer count of 40 instead of 5 (see the sketch after this list).
  • Most of my processing is happening in the call() function. Please advise if I need to move it out of there.
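
Concretely, the changes amount to something like this, continuing the sketch from my first comment (the 'ExposureTime' feature name and the microsecond unit are my assumptions for the Alvium):

```python
# Inside the `with cam:` block, before streaming starts.
# 'ExposureTime' and the microsecond unit are my assumptions for the Alvium U-319c.
cam.get_feature_by_name('ExposureTime').set(590.0)

# Raised from the example's small default to 40 so the API has spare buffers
# while __call__ is still busy with an earlier frame.
cam.start_streaming(handler=Handler(), buffer_count=40)
```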

@Teresa-AlliedVision

Teresa-AlliedVision commented Nov 21, 2023

Have you modified the script otherwise? Are you using a lot of deep copies or similar? What kind of processing are you doing on the frames?
If you need a queue of 40 frames, is there a lot of intense processing happening before the frames are put back into the queue?
Python multithreading only uses one core and is usually the preferred option for IO-bound tasks, like getting frames from the camera.
Multiprocessing would be an option if your frame processing is very calculation-heavy. Unfortunately, the frame objects can't be pickled, so you would need to find a workaround to pass the data of the frames to the new processes. To be clear, you would need to start a new process for each frame after getting it through the API. The asynchronous acquisition underneath would need to stay threading-based, because that is what the Python API uses.

Edit: Regarding the call function, that is where the processing should happen, because the frame object needs to be put back into the queue. You can do a deep copy of the frame and pass that on to a different thread/queue/process(?), but you need to make sure that the frame object from the queue is always returned to the queue.
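
For illustration, here is a rough sketch of that pattern, based on the API used in the example (the worker and the process() function are placeholders, not part of our API):

```python
import queue
import threading

from vimba import Camera, Frame, FrameStatus

work_queue = queue.Queue(maxsize=40)  # holds copies of the pixel data, not Frame objects


def process(image):
    """Placeholder for the heavy processing / inference."""


def worker():
    # Runs the heavy work off the streaming thread. For CPU-bound work this
    # could become a separate process fed with the numpy copies, since the
    # Frame objects themselves cannot be pickled.
    while True:
        process(work_queue.get())


class Handler:
    def __call__(self, cam: Camera, frame: Frame):
        if frame.get_status() == FrameStatus.Complete:
            # Copy the pixel data out of the frame buffer so the Frame itself
            # can be handed back to the API immediately.
            image = frame.as_opencv_image().copy()
            try:
                work_queue.put_nowait(image)
            except queue.Full:
                pass  # drop rather than stall the streaming thread
        cam.queue_frame(frame)  # the frame object must always go back into the queue


threading.Thread(target=worker, daemon=True).start()
```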

If you notice the RAM increasing a lot as well, it might be good to let the garbage collector run manually every once in a while (meaning the automatic gc doesn't run often enough).
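
Something simple like this would do (just a sketch; the interval is arbitrary):

```python
import gc
import threading
import time


def collect_periodically(interval_s: float = 300.0) -> None:
    """Force a full garbage collection every few minutes in the background."""
    def _loop():
        while True:
            time.sleep(interval_s)
            gc.collect()

    threading.Thread(target=_loop, daemon=True).start()
```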

Last but not least, consider using either the C or the C++ API for less CPU load.

@sravan-greyscaleai
Author

I don't see a noticeable change in RAM usage, so I don't believe there's a leak of some sort causing this problem. The frame status changes to Incomplete when the CPU usage goes beyond 100% (single-core).

Just for some clarity - the major steps in the call function are as follows (a rough sketch follows the list) -

  1. Resize the image to 1/3 of its original dimensions (1/3 H × 1/3 W).
  2. Perform background subtraction to obtain a foreground mask (using OpenCV's default background subtraction method).
  3. Some light image processing on the mask (dividing into 3 segments and shortlisting the frame if it contains enough foreground in all 3 segments).
  4. Inference on the shortlisted frames. Here, I was making a copy of the frame for inference purposes. Note: I've just changed this to use the image itself, since it's the last step in the process and I don't need a copy. I'll share some details once the re-run is done and mention whether that was sufficient.
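
A simplified sketch of those steps (not my exact code; the background subtractor choice, the split axis, the foreground threshold, and run_model are stand-ins):

```python
import cv2
import numpy as np

# OpenCV background subtractor (MOG2 here as a stand-in for the default method I use).
bg_subtractor = cv2.createBackgroundSubtractorMOG2()


def run_model(image: np.ndarray) -> None:
    """Placeholder for the DL inference step."""


def process(image: np.ndarray) -> None:
    # 1. Resize to 1/3 of the original dimensions.
    small = cv2.resize(image, None, fx=1 / 3, fy=1 / 3)

    # 2. Foreground mask from background subtraction.
    mask = bg_subtractor.apply(small)

    # 3. Split the mask into 3 segments (axis chosen arbitrarily here) and
    #    shortlist the frame if each segment has enough foreground.
    segments = np.array_split(mask, 3, axis=1)
    if all((seg > 0).mean() > 0.05 for seg in segments):  # threshold is a placeholder
        # 4. Inference only on shortlisted frames.
        run_model(small)
```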

Regarding the number of frames in the queue - how can I determine how many I'll need in the buffer? At this time, I need to capture frames at 20+ FPS.

@Teresa-AlliedVision

The number of frames in the buffer is a matter of trial and error; it depends on how fast your program works through the frames and whether it always processes them at the same speed. If the program is sometimes a bit slower in processing the images, then the queue is needed to buffer the frames before processing.
Can you post your code here, so we can test it? Alternatively, start a support ticket through our website and link this issue, so you can send us the source code:
https://www.alliedvision.com/en/about-us/contact-us/technical-support-repair-/-rma/

@sravan-greyscaleai
Author

Thanks, I have posted a ticket on the website and shared the code and all the details discussed here. I'm also attempting a run with a buffer of 60 frames. I'll update here if I find a solution that works.
