
Added support for Docker. Container can easily be started with docker compose #16688

Open · ShadowCrafter011 wants to merge 3 commits into dev
Conversation

@ShadowCrafter011 commented Nov 27, 2024

Description

Added support for containerization with Docker. To run the webui you need to have Docker installed, clone the repository, and execute docker compose up in the root directory of the repository. The first startup installs all the dependencies; subsequent startups are much quicker. The webui is exposed at localhost:7860.
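For reference, the described workflow boils down to the following commands (assuming a checkout that contains this PR's docker-compose.yml):

```sh
# clone the repository and start the container from its root directory
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
docker compose up   # the first run builds the image and installs all dependencies
# the webui is then reachable at http://localhost:7860
```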

Checklist:

@w-e-w (Collaborator) previously requested changes Nov 27, 2024

I'm not familiar with Docker, so my comments are only about the other changes.

modules/cmd_args.py: outdated, resolved
webui-user.sh: outdated, resolved
launch.py: outdated, resolved
Dockerfile:
@@ -0,0 +1,19 @@
FROM python:3.10-bookworm


why not just python:3.10?


maybe another alternative is nvcr.io/nvidia/pytorch:24.12-py3, which has torch and cuda embedded.
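For illustration, the alternatives discussed in this thread only change the Dockerfile's first line; a sketch of the three options (the notes in the comments are my own reading, not from the thread):

```Dockerfile
# as submitted in this PR: Debian bookworm pinned explicitly
FROM python:3.10-bookworm

# suggested simplification: the unpinned tag currently resolves to the same bookworm base
# FROM python:3.10

# suggested alternative: NVIDIA's PyTorch image with torch and CUDA already embedded
# (a much larger base image)
# FROM nvcr.io/nvidia/pytorch:24.12-py3
```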

@panpan0000 commented Dec 19, 2024

I'm testing this patch

(0)
I've modified @ShadowCrafter011's Dockerfile a bit, as below:

- FROM python:3.10-bookworm
+ FROM python:3.10
...

- RUN ./webui.sh   --skip-torch-cuda-test  --prepare-environment-only
+ RUN ./webui.sh   --skip-torch-cuda-test  --exit

- CMD [ "./webui.sh", "--skip-prepare-environment"]
+ CMD [ "./webui.sh", "--skip-prepare-environment" , "--listen"]
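Putting those edits together, the resulting Dockerfile would look roughly like the sketch below. Only the FROM/RUN/CMD lines come from the diff above; the WORKDIR, COPY, EXPOSE and user-setup lines are assumptions (the /webui path is taken from the log output further down):

```Dockerfile
FROM python:3.10

# the runtime log below shows the app living under /webui, so that path is assumed here
WORKDIR /webui
COPY . .

# webui.sh refuses to run as root by default, so an unprivileged user is assumed
RUN useradd -m webui && chown -R webui /webui
USER webui

# prepare the Python environment at build time, then exit instead of launching the UI
RUN ./webui.sh --skip-torch-cuda-test --exit

EXPOSE 7860

# at container start, reuse the prepared environment and listen on 0.0.0.0
CMD ["./webui.sh", "--skip-prepare-environment", "--listen"]
```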

(1)
Now just running a container from the image logs as below, and it seems OK:

docker run  -p 7860:7860  $image
.....
################################################################
Launching launch.py...
################################################################
glibc version is 2.36
Check TCMalloc: libtcmalloc_minimal.so.4
./webui.sh: line 258: bc: command not found
./webui.sh: line 258: [: -eq: unary operator expected
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
Launching Web UI with arguments: --skip-prepare-environment

/webui/venv/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to /webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors

  4%|█████▉                                                                                                                                     | 174M/3.97G [02:54<59:12, 1.15MB/
.....

Loading weights [cc6cb27103] from /webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 9.9s (import torch: 4.7s, import gradio: 1.2s, setup paths: 0.9s, initialize shared: 0.1s, other imports: 0.6s, load scripts: 0.9s, create ui: 0.9s, gradio launch: 0.5s).
Creating model from config: /webui/configs/v1-inference.yaml


Guys, you can move on.
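One likely fix for the "Found no NVIDIA driver" warning in the log above, assuming the host actually has an NVIDIA GPU and the NVIDIA Container Toolkit installed, is to pass the GPU through to the container:

```sh
# expose the host GPU(s) to the container (requires the NVIDIA Container Toolkit on the host)
docker run --gpus all -p 7860:7860 $image
```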

@panpan0000 commented Dec 22, 2024

Hi @ShadowCrafter011, I've created another PR #16737 to push the progress, since we do need an image now :-) We are co-authors on it; can you please double check?

And @w-e-w, your comments are all addressed there. Thank you.

@ShadowCrafter011 (Author)

@panpan0000 I've made some changes myself because, while using the webui, I realised it needed volumes for configs, extensions and embeddings so that they are persisted.

@w-e-w I've pushed a commit to address the changes you requested.

source: ./embeddings
target: /webui/embeddings
- type: bind
source: ./configs
@panpan0000 Dec 23, 2024

There are already several yaml files under the configs folder; did you mean users can customize their own config to override the default ones?

@ShadowCrafter011 (Author)

From what I've gathered, that's where the customizations made in the webui are persisted.
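For context, the bind mounts being discussed would look something like the sketch below in docker-compose.yml. The configs and embeddings entries are from the diff fragment above, extensions from the earlier comment; the service name and everything else is assumed:

```yaml
services:
  webui:
    # build/ports omitted; only the persistence-related mounts are sketched
    volumes:
      - type: bind
        source: ./configs        # per the discussion: where webui customizations are persisted
        target: /webui/configs
      - type: bind
        source: ./embeddings
        target: /webui/embeddings
      - type: bind
        source: ./extensions
        target: /webui/extensions
```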

@silveroxides (Contributor)

There is already a docker image that has been maintained for over 1½ years.
If there is anything in the main repo that causes it not to work, then I understand if you request some changes to make it compatible again.
But as far as I can see there is no issue with its current state; in fact it was updated less than 24 hours ago.
universonic/stable-diffusion-webui on Docker Hub

@w-e-w dismissed their stale review December 23, 2024 15:14

the issue I mentioned has been resolved

@panpan0000 commented Dec 24, 2024

universonic/stable-diffusion-webui at the docker hub

Hi @silveroxides, amazing, glad to see that.

I found the docker image's source code here, am I right? https://github.com/universonic/docker-stable-diffusion-webui

Per my understanding of open source community best practice:
(1) That code (Dockerfile, etc.) can be merged into this repo as part of the manifests of AUTOMATIC1111/stable-diffusion-webui, so that universonic's work can be adopted by more people; putting those Dockerfile/K8s yaml files in the code repo is a common practice.

(2) We should provide an automatic GitHub CI Action that auto-builds a new docker image tag whenever AUTOMATIC1111/stable-diffusion-webui is released (just like https://github.com/universonic/docker-stable-diffusion-webui/tree/main/.github/workflows); a rough sketch is appended below. Just like what I mentioned in #16738, the images would be more official if they could be downloaded under the repo name https://hub.docker.com/r/automatic1111/stable-diffusion-webui. What do you think @w-e-w?

Can anybody cover that effort? I think people in this thread are willing to.
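A rough sketch of such a workflow, for discussion only; the filename, action versions, image name and secret names below are placeholders rather than anything that already exists in this repo:

```yaml
# .github/workflows/docker-image.yml (hypothetical filename)
name: Build and push Docker image

on:
  release:
    types: [published]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: automatic1111/stable-diffusion-webui:${{ github.ref_name }}
```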
