
Switch from torch.cuda.amp.custom_fwd to torch.amp.custom_fwd(device=...) #5684

Closed · loadams wants to merge 2 commits from loadams/torch-future-warnings

Conversation

@loadams (Contributor) commented Jun 18, 2024

Fixes: #5682

@alexkirp commented:

xxx@As-MacBook-Pro ComfyUI % python3.12 main.py
Total VRAM 16384 MB, total RAM 16384 MB
pytorch version: 2.5.0.dev20240720
Set vram state to: SHARED
Device: mps
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[Prompt Server] web root: /Users/xxxstable_diffusion/ComfyUI/web
/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/kornia/feature/lightglue.py:44: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)

Import times for custom nodes:
0.0 seconds: /Users/akirpitchenko/stable_diffusion/ComfyUI/custom_nodes/websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188

@loadams (Contributor, Author) commented Jul 22, 2024


Hi @alexkirp - are you letting us know that you hit the same issue on your Mac? If so, thanks. I haven't had time to finish this PR yet, but we are working on it before the old API is removed entirely.
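Until the switch lands, one hedged way to support both old and new PyTorch releases is a small shim. This is a sketch, not DeepSpeed's actual code; `cuda_custom_fwd` is a hypothetical helper name:

```python
import torch

# Assumption: torch.amp.custom_fwd (with device_type) exists on newer
# PyTorch, while older releases only ship the deprecated
# torch.cuda.amp.custom_fwd. Pick whichever is available.
if hasattr(torch.amp, "custom_fwd"):
    def cuda_custom_fwd(**kwargs):
        # New API: device_type is required, so pin it to "cuda".
        return torch.amp.custom_fwd(device_type="cuda", **kwargs)
else:
    # Old API: same keyword arguments, no device_type.
    cuda_custom_fwd = torch.cuda.amp.custom_fwd
```

Call sites then always use the parenthesized form, e.g. `@cuda_custom_fwd(cast_inputs=torch.float32)`, which works with both branches.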

@loadams (Contributor, Author) commented Jul 31, 2024

Closing in favor of #5811

@loadams loadams closed this Jul 31, 2024
@loadams loadams deleted the loadams/torch-future-warnings branch August 19, 2024 20:48
Successfully merging this pull request may close these issues.

[BUG] Logs full of FutureWarning when training with nightly PyTorch
2 participants