
runpod-worker-comfy

ComfyUI as a serverless API on RunPod

Read our article here: https://blib.la/blog/comfyui-on-runpod


→ Please also check out Captain: The AI Platform



Quickstart

Features

Config

Environment Variable | Description | Default
REFRESH_WORKER | Stop the worker after each finished job to keep a clean state; see the official documentation. | false
COMFY_POLLING_INTERVAL_MS | Time to wait between poll attempts, in milliseconds. | 250
COMFY_POLLING_MAX_RETRIES | Maximum number of poll attempts. Increase this for long-running workflows; for example, 500 retries at 250 ms allows roughly 125 seconds of polling. | 500
SERVE_API_LOCALLY | Enable a local API server for development and testing. See Local Testing for more details. | disabled

Upload image to AWS S3

This is only needed if you want to upload the generated image to AWS S3. If you don't configure this, your image will be returned as a base64-encoded string.

  • Create a bucket in the region of your choice in AWS S3 (BUCKET_ENDPOINT_URL)
  • Create an IAM user that has access rights to AWS S3
  • Create an Access Key (BUCKET_ACCESS_KEY_ID & BUCKET_SECRET_ACCESS_KEY) for that IAM user
  • Configure these environment variables for your RunPod worker:
Environment Variable | Description | Example
BUCKET_ENDPOINT_URL | The endpoint URL of your S3 bucket. | https://<bucket>.s3.<region>.amazonaws.com
BUCKET_ACCESS_KEY_ID | Your AWS access key ID for accessing the S3 bucket. | AKIAIOSFODNN7EXAMPLE
BUCKET_SECRET_ACCESS_KEY | Your AWS secret access key for accessing the S3 bucket. | wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Use the Docker image on RunPod

  • Create a new template by clicking on New Template
  • In the dialog, configure:
    • Template Name: runpod-worker-comfy (it can be anything you want)
    • Template Type: serverless (change template type to "serverless")
    • Container Image: <dockerhub_username>/<repository_name>:tag, in this case: timpietruskyblibla/runpod-worker-comfy:2.1.3 (or dev if you want to have the development release)
    • Container Registry Credentials: You can leave everything as it is, as this repo is public
    • Container Disk: 20 GB
    • Environment Variables: Configure S3
  • Click on Save Template
  • Navigate to Serverless > Endpoints and click on New Endpoint
  • In the dialog, configure:
    • Endpoint Name: comfy
    • Select Template: runpod-worker-comfy (or whatever name you gave your template)
    • Active Workers: 0 (whatever makes sense for you)
    • Max Workers: 3 (whatever makes sense for you)
    • Idle Timeout: 5 (you can leave the default)
    • Flash Boot: enabled (doesn't cost more, but provides a faster boot of the worker)
    • Advanced: If you are using a Network Volume, select it under Select Network Volume. Otherwise leave the defaults.
    • Select a GPU that has some availability
    • GPUs/Worker: 1
  • Click Deploy
  • Your endpoint will be created; you can click on it to see the dashboard

API specification

The following describes which fields exist when making requests to the API. We only describe the fields sent via input, as those are the ones the worker itself needs. For a full list of fields, please take a look at the official documentation.

JSON Request Body

{
  "input": {
    "workflow": {},
    "images": [
      {
        "name": "example_image_name.png",
        "image": "base64_encoded_string"
      }
    ]
  }
}

Fields

Field Path | Type | Required | Description
input | Object | Yes | The top-level object containing the request data.
input.workflow | Object | Yes | Contains the ComfyUI workflow configuration.
input.images | Array | No | An array of images. Each image is added to ComfyUI's "input" folder and can then be referenced in the workflow by its name.

"input.images"

An array of images, where each image should have a different name.

🚨 The request body for a RunPod endpoint is limited to 10 MB for /run and 20 MB for /runsync, so make sure your input images are not too large, otherwise RunPod will block the request; see the official documentation.

Field Name | Type | Required | Description
name | String | Yes | The name of the image. Please use the same name in your workflow to reference the image.
image | String | Yes | A base64-encoded string of the image.
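
For illustration, here is a minimal Python sketch that base64-encodes local files into this images array; the helper build_images_payload is made up for the example (the file name example_image_name.png comes from the request body above):

import base64
import json

def build_images_payload(paths):
    # Base64-encode local files into the "images" array expected by the worker
    images = []
    for path in paths:
        with open(path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode("utf-8")
        # "name" must match the file name referenced inside your ComfyUI workflow
        images.append({"name": path.split("/")[-1], "image": encoded})
    return images

payload = {
    "input": {
        "workflow": {},  # your exported ComfyUI workflow goes here
        "images": build_images_payload(["example_image_name.png"]),
    }
}
print(json.dumps(payload)[:100])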

Interact with your RunPod API

  1. Generate an API Key:

    • In the User Settings, click on API Keys and then on the API Key button.
    • Save the generated key somewhere safe, as you will not be able to see it again when you navigate away from the page.
  2. Use the API Key:

    • Use cURL or any other tool to access the API using the API key and your Endpoint ID:
      • Replace <api_key> with your key.
  3. Use your Endpoint:

    • Replace <endpoint_id> with the ID of the endpoint. (You can find the endpoint ID by clicking on your endpoint; it is written underneath the name of the endpoint at the top and also part of the URLs shown at the bottom of the first box.)

How to find the EndpointID

Health status

curl -H "Authorization: Bearer <api_key>" https://api.runpod.ai/v2/<endpoint_id>/health

Generate an image

You can either create a new job asynchronously by using /run or synchronously by using /runsync. The example here uses a sync job and waits until the response is delivered.

The API expects JSON in this form, where workflow is the workflow from ComfyUI exported as JSON, and images is optional.

Please also take a look at test_input.json to see what the API input should look like.

Example request with cURL

curl -X POST -H "Authorization: Bearer <api_key>" -H "Content-Type: application/json" -d '{"input":{"workflow":{"3":{"inputs":{"seed":1337,"steps":20,"cfg":8,"sampler_name":"euler","scheduler":"normal","denoise":1,"model":["4",0],"positive":["6",0],"negative":["7",0],"latent_image":["5",0]},"class_type":"KSampler"},"4":{"inputs":{"ckpt_name":"sd_xl_base_1.0.safetensors"},"class_type":"CheckpointLoaderSimple"},"5":{"inputs":{"width":512,"height":512,"batch_size":1},"class_type":"EmptyLatentImage"},"6":{"inputs":{"text":"beautiful scenery nature glass bottle landscape, purple galaxy bottle,","clip":["4",1]},"class_type":"CLIPTextEncode"},"7":{"inputs":{"text":"text, watermark","clip":["4",1]},"class_type":"CLIPTextEncode"},"8":{"inputs":{"samples":["3",0],"vae":["4",2]},"class_type":"VAEDecode"},"9":{"inputs":{"filename_prefix":"ComfyUI","images":["8",0]},"class_type":"SaveImage"}}}}' https://api.runpod.ai/v2/<endpoint_id>/runsync
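
The same request can also be sent from Python. This is a rough sketch, assuming the requests package is installed and that workflow_api.json is a workflow exported from ComfyUI in API format (see "How to get the workflow from ComfyUI?" below); replace <api_key> and <endpoint_id> as in the cURL example:

import json
import requests

API_KEY = "<api_key>"
ENDPOINT_ID = "<endpoint_id>"

# Load a workflow previously exported from ComfyUI in API format
with open("workflow_api.json") as f:
    workflow = json.load(f)

# /runsync blocks until the job is finished and returns the result directly
response = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"workflow": workflow}},
    timeout=600,
)
print(response.json())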

Example response with AWS S3 bucket configuration

{
  "delayTime": 2188,
  "executionTime": 2297,
  "id": "sync-c0cd1eb2-068f-4ecf-a99a-55770fc77391-e1",
  "output": {
    "message": "https://bucket.s3.region.amazonaws.com/10-23/sync-c0cd1eb2-068f-4ecf-a99a-55770fc77391-e1/c67ad621.png",
    "status": "success"
  },
  "status": "COMPLETED"
}

Example response as base64-encoded image

{
  "delayTime": 2188,
  "executionTime": 2297,
  "id": "sync-c0cd1eb2-068f-4ecf-a99a-55770fc77391-e1",
  "output": { "message": "base64encodedimage", "status": "success" },
  "status": "COMPLETED"
}
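
How you handle output.message therefore depends on whether S3 is configured. A small sketch covering both response shapes shown above, assuming the requests package is available; the helper save_output is illustrative:

import base64
import requests

def save_output(output, target="result.png"):
    # "message" is either an S3 URL (bucket configured) or the base64-encoded image
    message = output["message"]
    if message.startswith("http"):
        data = requests.get(message, timeout=60).content
    else:
        data = base64.b64decode(message)
    with open(target, "wb") as f:
        f.write(data)

# Example usage with the response from /runsync:
# save_output(response.json()["output"])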

How to get the workflow from ComfyUI?

  • Open ComfyUI in the browser
  • Open the Settings (gear icon in the top right of the menu)
  • In the dialog that appears, configure:
    • Enable Dev mode Options: enable
    • Close the Settings
  • In the menu, click on the Save (API Format) button, which will download a file named workflow_api.json

You can now take the content of this file and use it as the workflow field of your request when interacting with the API.
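
If you want to turn the exported file into a complete request body, for example to update test_input.json for local testing, here is a small sketch (assuming test_input.json follows the request shape shown above):

import json

# Wrap a ComfyUI export (API format) into the request body the worker expects
with open("workflow_api.json") as f:
    workflow = json.load(f)

with open("test_input.json", "w") as f:
    json.dump({"input": {"workflow": workflow}}, f, indent=2)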

Bring Your Own Models and Nodes

Network Volume

Using a Network Volume allows you to store and access custom models:

  1. Create a Network Volume:

  2. Populate the Volume:

    • Create a temporary GPU instance:
      • Navigate to Manage > Storage, click Deploy under the volume, and deploy any GPU or CPU instance.
      • Navigate to Manage > Pods. Under the new pod, click Connect to open a shell (either via Jupyter notebook or SSH).
    • Populate the volume with your models:
      cd /workspace
      for i in checkpoints clip clip_vision configs controlnet embeddings loras upscale_models vae; do mkdir -p models/$i; done
      wget -O models/checkpoints/sd_xl_turbo_1.0_fp16.safetensors https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors
  3. Delete the Temporary GPU Instance:

  4. Configure Your Endpoint:

    • Use the Network Volume in your endpoint configuration:
      • Either create a new endpoint or update an existing one.
      • In the endpoint configuration, under Advanced > Select Network Volume, select your Network Volume.

Note: The folders in the Network Volume are automatically available to ComfyUI when the network volume is configured and attached.

Custom Docker Image

If you prefer to include your models directly in the Docker image, follow these steps:

  1. Fork the Repository:

    • Fork this repository to your own GitHub account.
  2. Add Your Models in the Dockerfile:

    • Edit the Dockerfile to include your models:
      RUN wget -O models/checkpoints/sd_xl_base_1.0.safetensors https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
    • You can also add custom nodes:
      RUN git clone https://github.com/<username>/<custom-node-repo>.git custom_nodes/<custom-node-repo>
  3. Build Your Docker Image:

    • Build the image locally:
      docker build -t <your_dockerhub_username>/runpod-worker-comfy:dev --platform linux/amd64 .
    • Optionally, skip downloading the default models to reduce the image size:
      docker build --build-arg SKIP_DEFAULT_MODELS=1 -t <your_dockerhub_username>/runpod-worker-comfy:dev --platform linux/amd64 .
    • Make sure to specify --platform linux/amd64 to avoid errors on RunPod; see issue #13.

Local testing

Both tests use the data from test_input.json, so make your changes there to test this properly.

Setup

  1. Make sure you have Python >= 3.10
  2. Create a virtual environment:
    python -m venv venv
  3. Activate the virtual environment:
    • Windows:
      .\venv\Scripts\activate
    • Mac / Linux:
      source ./venv/bin/activate
  4. Install the dependencies:
    pip install -r requirements.txt

Setup for Windows

  1. Install WSL2 and a Linux distro (like Ubuntu) following this guide. You can skip the "Install and use a GUI package" part.
  2. After installing Ubuntu, open the terminal and log in:
    wsl -d Ubuntu
  3. Update the packages:
    sudo apt update
  4. Install Docker in Ubuntu:
  5. Enable GPU acceleration on Ubuntu on WSL2: Follow this guide.
    • If you already have your GPU driver installed on Windows, you can skip the "Install the appropriate Windows vGPU driver for WSL" step.
  6. Add your user to the docker group to use Docker without sudo:
    sudo usermod -aG docker $USER

Once these steps are completed, switch to Ubuntu in the terminal and run the Docker image locally on your Windows computer via WSL:

wsl -d Ubuntu

Testing the RunPod handler

  • Run all tests: python -m unittest discover
  • If you want to run a specific test: python -m unittest tests.test_rp_handler.TestRunpodWorkerComfy.test_bucket_endpoint_not_configured

You can also start the handler itself to have the local server running: python src/rp_handler.py. For this to work, you also need to start ComfyUI; otherwise the handler will not work.

Local API

For enhanced local development, you can start an API server that simulates the RunPod worker environment. This feature is particularly useful for debugging and testing your integrations locally.

Set the SERVE_API_LOCALLY environment variable to true to activate the local API server when running your Docker container. This is already the default value in the docker-compose.yml, so you can get it running by executing:

docker-compose up

Access the local Worker API

  • With the local API server running, it is accessible at: localhost:8000
  • When you open this in your browser, you can also see the API documentation and interact with the API directly; see the sketch below for calling it from code
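
As a rough sketch, assuming the local server exposes the same /runsync route as the hosted endpoint and accepts the same request body (no API key is needed locally):

import json
import requests

with open("test_input.json") as f:
    payload = json.load(f)

# The local worker API mirrors the hosted endpoint, but without authentication
response = requests.post("http://localhost:8000/runsync", json=payload, timeout=600)
print(response.json())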

Access local ComfyUI

  • With the local API server running, you can access ComfyUI at: localhost:8188

Automatically deploy to Docker Hub with GitHub Actions

The repo contains two workflows that publish the image to Docker Hub using GitHub Actions:

  • dev.yml: Creates the image and pushes it to Docker Hub with the dev tag on every push to the main branch
  • release.yml: Creates the image and pushes it to Docker Hub with the latest tag and the release tag. It is only triggered when you create a release on GitHub

If you want to use this, you should add these secrets to your repository:

Configuration Variable | Description | Example Value
DOCKERHUB_USERNAME | Your Docker Hub username. | your-username
DOCKERHUB_TOKEN | Your Docker Hub token for authentication. | your-token
DOCKERHUB_REPO | The repository on Docker Hub where the image will be pushed. | timpietruskyblibla
DOCKERHUB_IMG | The name of the image to be pushed to Docker Hub. | runpod-worker-comfy

Acknowledgments
