Memory resource management #2

Open
aisosalo opened this issue May 7, 2024 · 0 comments
Comments


aisosalo commented May 7, 2024

When I recently ran model inference on a low-resource device, I hit memory issues with the largest image stacks. I worked around them by adding a gc.collect() call to the run_model() function and a torch.cuda.empty_cache() call to run_single_model(), plus a del model statement when running the pipeline for an "ensemble" of models. These were fairly ad hoc measures to alleviate the memory pressure, but perhaps they can help someone else struggling with a similar problem, so I am posting them here.
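For concreteness, here is a minimal sketch of where those calls could go. The names run_model() and run_single_model() come from this issue, but their signatures and bodies here are hypothetical, not the repository's actual code; the pattern is simply: release the model reference, trigger garbage collection, and return cached GPU memory to the driver between ensemble members.

```python
import gc

import torch


def run_single_model(model, image_stack):
    """Run inference with one model, then release cached GPU memory."""
    with torch.no_grad():
        predictions = model(image_stack)
    # Return cached allocator blocks to the driver so the next model
    # in the ensemble has headroom on a low-memory device.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return predictions


def run_model(model_paths, image_stack, device="cuda"):
    """Run an "ensemble" of models sequentially on one image stack."""
    results = []
    for path in model_paths:
        model = torch.load(path, map_location=device)
        model.eval()
        results.append(run_single_model(model, image_stack))
        # Drop the only reference to the model so its parameters
        # become collectable before the next model is loaded.
        del model
        # Force a collection pass so large intermediates are reclaimed
        # promptly rather than at the interpreter's discretion.
        gc.collect()
    return results
```

Note that torch.cuda.empty_cache() does not free tensors that are still referenced; it only releases memory the caching allocator is holding idle, which is why the del model / gc.collect() pair is needed first.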
