[Bug]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacty of 12.00 GiB of which 4.65 GiB is free #16114
Comments
Try disabling extensions and keep only one model in the models folder.
I reinstalled webui, Git, Python and all libraries from scratch; it does not help. There are no extensions other than the built-in ones, and I also tried disabling those. Nothing works.
It seems something happened with memory and the swap file in Windows. I increased the size manually and now it works fine, but at peak it uses the full 12 GB of VRAM and 40 GB of RAM + swap (16 GB RAM + 24 GB swap). Is that OK?
No, it's not okay.
Since you are using the above arg, this error could have happened; please try without it and tell me the results. Another suggestion: if you are not using a venv, use one.
The libraries should be installed in the venv.
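The venv advice above can be sketched with Python's standard-library `venv` module (equivalent to running `python -m venv venv` in the webui folder; the folder name `venv` is what the A1111 launcher expects, and is illustrative here):

```python
import os
import venv

# Create an isolated environment so webui's libraries don't mix with
# the system Python ("venv" is the default folder name A1111 looks for).
venv.create("venv", with_pip=False)  # with_pip=True also bootstraps pip

# The interpreter and activation scripts now live under venv/:
scripts = "Scripts" if os.name == "nt" else "bin"
print(os.path.isdir(os.path.join("venv", scripts)))  # True
```

After activating the environment (`venv\Scripts\activate.bat` on Windows), any `pip install` goes into the venv rather than the global site-packages, which keeps webui's dependencies separate.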
Yeah, but I close all other applications when I work with SD because I have only 16 GB of RAM, so 80-90% of total memory usage is SD.
I changed --opt-sdp-attention to --xformers; it doesn't seem to change memory use much. It still consumes up to 40 GB of total memory when I try to generate a 1024x1024 image with an XL model. Very weird. Before I tried to install Ultimate SD Upscale everything was OK. Maybe that extension broke something in my Windows?
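For context, these flags are set via `COMMANDLINE_ARGS` in `webui-user.bat`. A hedged sketch of a lower-memory configuration (the flag names are A1111's documented options; `--medvram-sdxl` requires a webui version that includes it):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --xformers: memory-efficient attention.
rem --medvram-sdxl: offload parts of SDXL models to system RAM,
rem trading speed for lower VRAM pressure.
set COMMANDLINE_ARGS=--xformers --medvram-sdxl

call webui.bat
```

If VRAM is still exhausted, `--medvram` (all models) or `--lowvram` are the more aggressive variants of the same trade-off.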
Hmm, I think I do, because all libraries are installed in that directory.
I also use Ultimate SD Upscale but didn't get any issues like this with smaller models; maybe it is possible with an XL model.
Any XL model, for example Pony XL. I don't use any extensions now except the built-in ones, so maybe something broke in Windows, but I don't know what.
Same issue: every time I try to load an SDXL or even a Pony checkpoint I get the CUDA out-of-memory error. I have an RTX 3060 12 GB GPU. I should also clarify that everything was working fine on Automatic1111 with the same PC and GPU; the issue started after I installed the TensorRT extension, which did not work, and I ended up deleting it. But since then I am unable to load SDXL or Pony models/checkpoints.
I found a solution that works for me. It seems that webui doesn't release checkpoints while switching/reloading models: webui pushes two checkpoints into virtual memory, and my virtual memory isn't enough, so it raises an error. My solution is: 1) increase the virtual memory, 2) goto
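The behaviour described above (two checkpoints alive at once) comes down to reference lifetime. A minimal sketch of the release-before-load pattern, using a hypothetical `Checkpoint` stand-in rather than webui's real model objects (with torch you would additionally call `torch.cuda.empty_cache()` after collection):

```python
import gc
import weakref

class Checkpoint:
    """Stand-in for a loaded model (hypothetical; the real webui object
    holds gigabytes of tensors, partly in VRAM)."""
    def __init__(self, name):
        self.name = name
        self.weights = bytearray(1024)  # placeholder for model weights

current = Checkpoint("sd15")
probe = weakref.ref(current)  # observe when the old checkpoint is freed

# Release the old checkpoint BEFORE loading the new one, so only one
# copy occupies (virtual) memory at a time:
current = None
gc.collect()
print(probe() is None)  # True: the old checkpoint was freed

current = Checkpoint("sdxl")  # now load the new model
```

If the old checkpoint is only released after the new one loads, both sets of weights coexist in virtual memory for a moment, which is exactly when an undersized page file runs out.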
Checklist
What happened?
I used version 1.8.0 and everything was OK. Then I tried to install Ultimate SD Upscaler and my webui broke. After that I removed everything from my PC, including Git, Python and all caches, and made a clean webui install, but it does not help.
1.5 models load normally, but when I try to load ANY XL model I get this:
Please help me; I don't know why this happens. I tried 1.8.0 again but got the same error. 1.5 somehow works fine, but XL just doesn't load.
Steps to reproduce the problem
Error
What should have happened?
It should work like before.
What browsers do you use to access the UI ?
Google Chrome
Sysinfo
Ryzen 2600
GeForce RTX 3060 12 GB
Windows 10
16 GB RAM
Console logs
Additional information
No response