ComfyUI Text-to-Image/Video Workload¶
This Helm Chart deploys a ComfyUI web app for text-to-image/video generation. ComfyUI is a powerful node-based interface for Stable Diffusion that provides advanced workflows for AI image and video generation.
Features¶
- Pre-configured ComfyUI Environment: Automatically installs and configures ComfyUI with ROCm support
- Model Management: Support for downloading models from HuggingFace or MinIO/S3 storage
- ComfyUI Manager: Includes ComfyUI Manager for easy extension management
Configuration Parameters¶
You can configure the following parameters in the `values.yaml` file or override them via the command line:
| Parameter | Description | Default |
|---|---|---|
| `image` | Container image repository and tag | `rocm/dev-ubuntu-22.04:6.2.4` |
| `gpus` | Number of GPUs to allocate | `1` |
| `model` | HuggingFace model path (e.g., `Comfy-Org/flux1-dev`) | Not set |
| `tag` | Specific model binaries (`*tag*.safetensors`) to download (optional) | Clone the repo when not set |
| `storage.ephemeral.quantity` | Ephemeral storage size | `200Gi` |
| `kaiwo.enabled` | Enable Kaiwo operator management | `false` |
Environment Variables¶
The following environment variables are configured for MinIO/S3 integration:
| Variable | Description | Default |
|---|---|---|
| `BUCKET_STORAGE_HOST` | MinIO/S3 endpoint URL | `http://minio.minio-tenant-default.svc.cluster.local:80` |
| `BUCKET_STORAGE_ACCESS_KEY` | MinIO/S3 access key | From `minio-credentials` secret |
| `BUCKET_STORAGE_SECRET_KEY` | MinIO/S3 secret key | From `minio-credentials` secret |
| `PIP_DEPS` | Additional Python packages to install via pip (space- or newline-separated URLs/packages) | ROCm-compatible torchaudio wheel |
| `COMFYUI_PATH` | ComfyUI installation path | `/workload/ComfyUI` |
| `MODEL_BIN_URL` | Direct URL to download an additional model checkpoint (optional) | Not set |
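These variables can be overridden like any other chart value. The exact key path depends on how the chart structures `env_vars` in `values.yaml`, so treat the following as a sketch rather than a guaranteed interface, and the endpoint shown is purely illustrative:

```bash
# Point the workload at a different MinIO/S3 endpoint (illustrative value).
helm install comfyui . --set env_vars.BUCKET_STORAGE_HOST="http://minio.example.svc.cluster.local:80"
```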
Model Configuration¶
Using HuggingFace Models¶
Configure models from HuggingFace by setting the `model` parameter:
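For example, on the command line (the release name `comfyui` and the `.` chart path are assumptions; the model path is the example from the table above):

```bash
# Download the Comfy-Org/flux1-dev model from HuggingFace at startup.
helm install comfyui . --set model=Comfy-Org/flux1-dev
```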
Using S3/MinIO Models¶
For models stored in S3/MinIO, use the `s3://` prefix:
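A minimal sketch, with the bucket and object path as placeholders to replace with your own:

```bash
# Fetch the model from MinIO/S3 using the credentials described above.
helm install comfyui . --set model="s3://<bucket>/<path-to-model>"
```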
Using Direct Download URLs¶
For direct model downloads, use the `MODEL_BIN_URL` environment variable:
```yaml
env_vars:
  MODEL_BIN_URL: "https://huggingface.co/Comfy-Org/Lumina_Image_2.0_Repackaged/resolve/main/all_in_one/lumina_2.safetensors"
```
Pre-configured Model Overrides¶
The workload includes several pre-configured model overrides in the `overrides/models/` directory.
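To see which override files ship with the chart, list that directory from the chart root:

```bash
# Show the bundled model override files.
ls overrides/models/
```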
Deploying the Workload¶
Basic Deployment¶
To deploy the service with default settings, run the following command within the `helm` folder:
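A minimal install could look like the following; the release name `comfyui` and the namespace are assumptions, so adjust them for your cluster:

```bash
# Install the chart from the current (helm) directory with default values.
helm install comfyui . --namespace comfyui --create-namespace
```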
Deployment with Model Override¶
To deploy with a specific model configuration:
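For example, pass one of the files from `overrides/models/` with `-f`; the file name below is a placeholder, not an actual file name from the chart:

```bash
# Layer a model override on top of the default values.
helm install comfyui . -f overrides/models/<model-override>.yaml
```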
Custom Deployment¶
To deploy with custom parameters:
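For instance, individual values from the configuration table can be overridden with `--set`; the values shown here are purely illustrative:

```bash
# Override selected chart parameters at install time.
helm install comfyui . \
  --set gpus=2 \
  --set model=Comfy-Org/flux1-dev \
  --set storage.ephemeral.quantity=300Gi
```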
Accessing the Workload¶
Verify Deployment¶
Check the deployment and service status:
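For example, assuming the release was installed into the `comfyui` namespace as above:

```bash
# Confirm the deployment, service, and pods are up and ready.
kubectl get deployments,services,pods -n comfyui
```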
Port Forwarding¶
To access the service locally on port `8188`, forward the port of the service/deployment:
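A typical command looks like this; the service name `comfyui` is an assumption, so check `kubectl get services` for the name your release actually created:

```bash
# Forward local port 8188 to the ComfyUI service.
kubectl port-forward -n comfyui service/comfyui 8188:8188
```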
Then open a web browser and navigate to http://localhost:8188 to access ComfyUI.
Accessing the Workload via URL¶
To access the workload through a URL, you can enable either an Ingress or an HTTPRoute in the `values.yaml` file. The following parameters are available:
| Parameter | Description | Default |
|---|---|---|
| `ingress.enabled` | Enable Ingress resource | `false` |
| `httproute.enabled` | Enable HTTPRoute resource | `false` |
See the corresponding template files in the `templates/` directory. For more details on configuring Ingress or HTTPRoute, refer to the Ingress documentation and HTTPRoute documentation, or to the documentation of the particular gateway implementation you use, such as KGateway. Check with your cluster administrator for the correct configuration for your environment.
Health Checks and Monitoring¶
The workload includes comprehensive health monitoring:
- Startup Probe: Allows up to 10 minutes for ComfyUI to start (checks the `/queue` endpoint)
- Liveness Probe: Monitors whether ComfyUI is running properly
- Readiness Probe: Ensures ComfyUI is ready to serve requests
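To see the probe configuration and any recent probe failures, you can describe the pod; the pod name below is a placeholder you can look up with `kubectl get pods`:

```bash
# Show probe settings and recent events for the ComfyUI pod.
kubectl describe pod <comfyui-pod-name> -n comfyui
```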