CUDA error: out of memory CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. #23
Comments
CUDA_LAUNCH_BLOCKING=1 python infer_finetuning.py
CUDA error: out of memory
Please post the full log so I can see what happened.
BloomConfig { None
I tried loading on the CPU, but it still fails with the same CUDA error: out of memory.
Yes. But as I said above, I modified the torch.load call and it then worked on both CPU and GPU.
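A minimal sketch of the kind of change described above (the exact patch is not shown in this thread, so the function name and structure here are assumptions): passing `map_location` to `torch.load` deserializes the checkpoint onto the requested device instead of the GPU it was saved from, which avoids allocating GPU memory during loading.

```python
import torch

def load_checkpoint(path, device="cpu"):
    """Hypothetical helper: load a checkpoint onto a chosen device.

    map_location="cpu" keeps deserialization off the GPU entirely;
    the model can be moved to a GPU afterwards if memory allows.
    """
    state_dict = torch.load(path, map_location=device)
    return state_dict
```

With this, loading on a machine with little or no GPU memory works, and the weights can later be moved with `.to("cuda")` once the model fits.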
The error occurs when I run the command below.
I have specified the GPU or CPU in the parameter used to load the model.
Error resolution