When I recently ran model inference on a low-resource device, I hit memory issues with the largest image stacks. I solved them by adding a `gc.collect()` call to the `run_model()` function, a `torch.cuda.empty_cache()` call to `run_single_model()`, and a `del model` statement when running the pipeline for an "ensemble" of models. These were fairly ad hoc fixes, but perhaps they can help someone else struggling with a similar problem, so I'm posting them here.
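A minimal sketch of where the cleanup calls go. The names `run_model()` and `run_single_model()` come from the pipeline, but their signatures, the `image_stack` parameter, and the `torch.load()` loading step here are assumptions for illustration; the real functions will differ, only the placement of the cleanup calls matters.

```python
import gc

import torch


def run_single_model(model, image_stack):
    """Run inference with one model, then release cached GPU memory."""
    with torch.no_grad():
        prediction = model(image_stack)
    # Release CUDA memory cached by PyTorch so the next ensemble member
    # starts with as much free GPU memory as possible.
    torch.cuda.empty_cache()
    return prediction


def run_model(model_paths, image_stack):
    """Run an "ensemble" of models sequentially on a low-memory device."""
    predictions = []
    for path in model_paths:
        model = torch.load(path)  # hypothetical: load one member at a time
        predictions.append(run_single_model(model, image_stack))
        # Drop the last reference so the model's weights can actually be
        # freed before the next member is loaded.
        del model
        gc.collect()
    return predictions
```

Note that `torch.cuda.empty_cache()` only returns cached blocks to the driver; the `del model` plus `gc.collect()` pair is what lets Python free the weights themselves between ensemble members.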