I have an image dataset, and, based on it, I create multiple dataloaders (I sample a different set of indices for each). At any point in time, only one dataloader is active. Basically, the code looks like this:
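A minimal sketch of the setup described, with a dummy dataset, model, and loader arguments standing in for the real ones (all names here are illustrative assumptions, not the actual code):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset, TensorDataset

# Stand-ins for the real image dataset and model.
full_dataset = TensorDataset(torch.randn(1000, 3, 32, 32),
                             torch.randint(0, 10, (1000,)))
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# A different set of indices for each dataloader.
index_sets = [torch.randperm(1000)[:200].tolist() for _ in range(50)]

results = []
for indices in index_sets:
    # Only this loader is active during the inner loop.
    loader = DataLoader(Subset(full_dataset, indices), batch_size=64, num_workers=2)
    total = 0.0
    for images, labels in loader:
        out = model(images)
        total += out.sum().item()  # .item() yields a plain float, detached from the graph
    results.append(total)          # the outcome of each outer iteration is just a number
```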
I'm confident that there are no leaks (the outcome of each iteration is a number, and I'm sure it's detached from the computational graph). However, I see my RAM usage growing linearly. Is there a way to clean up after the iterations?
`gc.collect()` or `torch.cuda.empty_cache()` don't work.
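Concretely, the cleanup attempt presumably sits at the end of each outer iteration, something like this (continuing the sketch above; the explicit `del loader` is an assumed detail):

```python
import gc
import torch
from torch.utils.data import DataLoader, Subset

for indices in index_sets:
    loader = DataLoader(Subset(full_dataset, indices), batch_size=64, num_workers=2)
    for images, labels in loader:
        ...  # same per-batch work as above
    # Attempted cleanup once the loader is exhausted; RAM still grows linearly.
    del loader
    gc.collect()
    torch.cuda.empty_cache()
```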
I'm running this on Google Colab. Before, I was using the standard PyTorch dataloaders, and there were no memory problems (I had the CIFAR-10 dataset, and I wrapped it in another dataset that sampled the indices I wanted).
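That wrapper was presumably something along these lines; `IndexSampledDataset` is an assumed name, and `torch.utils.data.Subset` is the built-in equivalent:

```python
from torch.utils.data import Dataset

class IndexSampledDataset(Dataset):
    """Wraps a base dataset (e.g. CIFAR-10) and exposes only the chosen indices.
    Assumed name and shape; torch.utils.data.Subset does the same thing."""
    def __init__(self, base, indices):
        self.base = base
        self.indices = list(indices)

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        return self.base[self.indices[i]]
```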
Thank you!
Replies: 1 comment 1 reply

- What are your loader arguments?