
add model cache after loaded #3605

Closed · efwfe wants to merge 15 commits

Conversation

@efwfe commented May 30, 2024

Hello, I made a PR to cache models after they are loaded. It keeps the model instance in memory, which does cost some memory, but don't worry: it checks the free memory on both CPU and GPU and releases cached models when necessary.
This is the updated version of #3545.
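A minimal sketch of the idea (illustrative only, not the PR's actual code; the class name and thresholds are assumptions, while psutil and torch are real libraries):

    import psutil
    import torch

    class ModelCache:
        """Illustrative memory-aware cache of loaded models, keyed by checkpoint path."""

        def __init__(self, min_free_cpu=4 << 30, min_free_gpu=2 << 30):
            self._cache = {}
            self.min_free_cpu = min_free_cpu  # bytes of CPU RAM to keep free
            self.min_free_gpu = min_free_gpu  # bytes of VRAM to keep free

        def _enough_memory(self):
            # Check free memory on both CPU and GPU before keeping more models around.
            cpu_ok = psutil.virtual_memory().available > self.min_free_cpu
            if torch.cuda.is_available():
                free_gpu, _total = torch.cuda.mem_get_info()
                return cpu_ok and free_gpu > self.min_free_gpu
            return cpu_ok

        def cache_model(self, ckpt_path, model):
            if not self._enough_memory():
                self._cache.clear()  # free cached models when memory runs low
            self._cache[ckpt_path] = model

        def get(self, ckpt_path):
            return self._cache.get(ckpt_path)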

Here is a GIF example to show it, and it's really amazing:

Fully tested on OpenArt template workflows: https://openart.ai/workflows/templates

(GIF demo)

@efwfe force-pushed the master branch 17 times, most recently from 1b1569d to 55048c4 on June 5, 2024 08:14
comfy/sd.py Outdated
model_cache.cache_vae(ckpt_path, vae)
if clipvision:
    logging.debug(f"cache clipvision of : {ckpt_path}")
    model_cache.cache_clipvision(ckpt_path, clipvision)

A bug here: model_cache.cache_clipvision needs to be changed to model_cache.cache_clip_vision.
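That is, the last line of the snippet above would become:

    model_cache.cache_clip_vision(ckpt_path, clipvision)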


It would be better to cache the CLIP Vision model within CLIPVisionLoader as well. When using an IPAdapter, CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors has to be loaded from disk every time; with the model cache it could be copied directly from CPU memory instead.
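A rough sketch of that suggestion (illustrative only; model_cache is the hypothetical cache sketched above, and load_from_disk stands in for whatever loader CLIPVisionLoader currently calls):

    def load_clip_vision_cached(ckpt_path, model_cache, load_from_disk):
        """Serve the CLIP Vision model from the cache when possible."""
        cached = model_cache.get(ckpt_path)
        if cached is not None:
            return cached  # served from CPU memory, no disk read
        clip_vision = load_from_disk(ckpt_path)  # first load still hits disk
        model_cache.cache_model(ckpt_path, clip_vision)
        return clip_vision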

@efwfe (Author) replied:

Copy that.

@efwfe requested a review from alboto on July 5, 2024 02:09
Add model cache after loaded
@333caowei commented:

Looks great! How do I use it?

@efwfe (Author) commented Jul 22, 2024

> Looks great! How do I use it?

You can try it with this repository: https://github.com/efwfe/ComfyUI.git.
The more memory you have, the better the performance.
Thank you for wanting to use it.

@Charuru commented Aug 19, 2024

Hi, can you keep your fork up to date, please? I want to test this.

@efwfe (Author) commented Aug 20, 2024

> Hi, can you keep your fork up to date, please? I want to test this.

You're welcome to test it; it has been updated, and I hope to hear your feedback. Adding the --lowvram parameter is recommended to reduce the memory usage of SDXL models.
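For reference, assuming ComfyUI's standard main.py entry point, that launch would look like:

    python main.py --lowvram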

@shawnL128Po commented:

Sorry for asking a silly question: I couldn't find the definition of the variable ckpt_path in the function load_state_dict_guess_config within sd.py, and PyCharm reports an "Unresolved reference 'ckpt_path'" error.

@efwfe (Author) commented Aug 27, 2024

> Sorry for asking a silly question: I couldn't find the definition of the variable ckpt_path in the function load_state_dict_guess_config within sd.py, and PyCharm reports an "Unresolved reference 'ckpt_path'" error.

Sorry, that's my mistake: the latest version of ComfyUI removed the ckpt_path parameter, so I have updated the code. You can try it now.
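One possible shape for such a fix (illustrative only, not necessarily what the fork actually does): since load_state_dict_guess_config no longer receives the path, the cache lookup can move up to the caller, which still knows it.

    import comfy.utils
    from comfy.sd import load_state_dict_guess_config

    def load_checkpoint_cached(ckpt_path, model_cache, **kwargs):
        # Hypothetical wrapper: key the cache on ckpt_path before delegating
        # to the existing load_state_dict_guess_config in comfy/sd.py.
        cached = model_cache.get(ckpt_path)
        if cached is not None:
            return cached
        sd = comfy.utils.load_torch_file(ckpt_path)
        out = load_state_dict_guess_config(sd, **kwargs)
        model_cache.cache_model(ckpt_path, out)
        return out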

Actually, I'm going to close this PR because it's not really helpful in most cases. So I've closed it; thank you, guys.

@efwfe closed this Aug 27, 2024