
n_jobs=-1 in GridSearchCV may cause memory leak #14

Open
albertnieto opened this issue Jun 23, 2024 · 2 comments

@albertnieto

I was getting a memory leak when using run_hyperparameter_search.py.

As stated here, removing n_jobs=-1 fixed the issue.
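For reference, a minimal sketch of the workaround (a generic GridSearchCV example, not the actual code from run_hyperparameter_search.py; the toy estimator and parameter grid are placeholders just for illustration):

```python
# Sketch of the workaround: leave n_jobs at its default (a single process)
# instead of n_jobs=-1, so joblib does not spawn one worker per core.
# The estimator and grid below are placeholders, not the benchmark models.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": ["scale", "auto"]}

# No n_jobs=-1 here: with -1, each joblib worker process holds its own copy
# of the estimator and data, which for GPU-backed models can also mean each
# worker claiming its own chunk of GPU memory.
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

A middle ground would be a small fixed value such as n_jobs=2, which still parallelizes the search but caps the number of concurrent workers.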

@josephbowles
Collaborator

Hey Albert. For which model, how many qubits, and how much available RAM? And do you mean there was a leak in the sense that the memory usage kept increasing as more models were trained?

@albertnieto
Author

Hi Joseph. Actually, any model and any qubit count caused the leak. The process would allocate all the available GPU memory, causing execution errors due to lack of memory. It didn't increase as models were trained; instead, the memory was just fully allocated.

The fact that this issue only happened to me makes me think it could be related to my Python/scikit-learn version, since it's definitely caused by scikit-learn.

Anyway, I think it's useful to have this issue open so others can find it, in case it happens to somebody else too. 😉
