comfyui
Run local ComfyUI workflows via the HTTP API. Use when the user asks to run ComfyUI, execute a workflow by file path/name, or supply raw API-format JSON; supports the default workflow bundled in assets.
Install via ClawdBot CLI:
clawdbot install kelvincai522/comfyui
Requires:
Run ComfyUI workflows on the local server (default 127.0.0.1:8188) using API-format JSON and return output images.
The run script only takes --workflow <path-to-workflow.json>. You must inspect and edit the workflow JSON before running, using your best knowledge of the ComfyUI API format. Do not assume fixed node IDs, class_type names, or _meta.title values: the user may have updated the default workflow or supplied a custom one.
For every run (including the default workflow):
1. Load the workflow JSON (skills/comfyui/assets/default-workflow.json, or the path/file the user gave).
2. Find the prompt node: PrimitiveStringMultiline, CLIPTextEncode (positive text), or any node with a _meta.title or class_type suggesting "Prompt" / "positive" / "text". Update the corresponding input (e.g. inputs.value, or the text input to the encoder) to the image prompt you derived from the user (subject, style, lighting, quality). If the user didn't ask for a custom image, you can leave the existing prompt or tweak it only if needed.
3. Check for style inputs: StringConcatenate, or a second string input that acts as a style. Set them if the user asked for a specific style or to clear a default prefix.
4. Find the sampler (KSampler, BasicGuider, or any node with a seed input) and set seed to a new random integer so each run can differ.
5. Save the edited JSON to a temporary file (e.g. skills/comfyui/assets/tmp-workflow.json). Use ~/ComfyUI/venv/bin/python for any inline Python; do not use bare python.
6. Run comfyui_run.py --workflow <edited-path>.
If the workflow structure is unclear or you can't find the prompt/sampler nodes, run the file as-is and change only what you can reliably identify. Take the same approach with arbitrary user-supplied JSON: inspect first, edit to the best of your knowledge, then run.
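The edit-before-run steps above can be sketched as a small inline-Python helper. The matching heuristics and the function name are illustrative, not the skill's actual code; only the node shapes (class_type, _meta.title, inputs) follow the ComfyUI API format described above:

```python
import copy
import random

def prepare_workflow(workflow: dict, prompt: str) -> dict:
    """Best-effort edit of an API-format workflow: set the positive prompt
    and randomize every seed. Node IDs and class_type names are not assumed
    fixed; nodes are matched heuristically, as the instructions require."""
    wf = copy.deepcopy(workflow)
    for node in wf.values():
        inputs = node.get("inputs", {})
        klass = node.get("class_type", "")
        title = node.get("_meta", {}).get("title", "").lower()
        # Heuristic prompt match: known text nodes, or a title hinting at a prompt.
        if klass == "CLIPTextEncode" and "text" in inputs:
            if "negative" not in title:          # leave the negative prompt alone
                inputs["text"] = prompt
        elif klass == "PrimitiveStringMultiline" and "value" in inputs:
            inputs["value"] = prompt
        elif "prompt" in title and "text" in inputs:
            inputs["text"] = prompt
        # Fresh seed so each run can differ.
        if "seed" in inputs:
            inputs["seed"] = random.randint(0, 2**32 - 1)
    return wf
```

Write the returned dict to the temporary path with json.dump, then pass that path to --workflow.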
~/ComfyUI/venv/bin/python skills/comfyui/scripts/comfyui_run.py \
--workflow <path-to-workflow.json>
The script only queues the workflow and polls until done. It prints JSON with prompt_id and output images. All prompt/style/seed changes are done by you in the JSON beforehand.
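The queue-and-poll behavior can be sketched against ComfyUI's public HTTP API (POST /prompt to queue, GET /history/&lt;prompt_id&gt; to poll). The endpoints and payload shapes are the standard ComfyUI API; BASE and the helper names are illustrative, and this is a sketch of what the script does, not its actual source:

```python
import json
import time
import urllib.request

BASE = "http://127.0.0.1:8188"

def queue_workflow(workflow: dict) -> str:
    """POST the API-format workflow to /prompt; returns the prompt_id."""
    req = urllib.request.Request(
        BASE + "/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

def poll_history(prompt_id: str, timeout: float = 300.0) -> dict:
    """Poll GET /history/<prompt_id> until the entry appears (run finished)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urllib.request.urlopen(f"{BASE}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:
            return history[prompt_id]
        time.sleep(1.0)
    raise TimeoutError(f"ComfyUI run {prompt_id} did not finish in {timeout}s")

def collect_images(history_entry: dict) -> list:
    """Flatten the per-node outputs of one history entry into a list of
    {filename, subfolder, type} dicts, like the images list the script prints."""
    images = []
    for node_output in history_entry.get("outputs", {}).values():
        images.extend(node_output.get("images", []))
    return images
```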
If the run script fails with a connection error (e.g. connection refused or timeout to 127.0.0.1:8188), ComfyUI may not be installed or not running.
Check: Does ~/ComfyUI exist and contain main.py?
git clone https://github.com/comfyanonymous/ComfyUI.git ~/ComfyUI
cd ~/ComfyUI
python3 -m venv venv
~/ComfyUI/venv/bin/pip install -r requirements.txt
Then start the server (see below). Tell the user they may need to install model weights into ~/ComfyUI/models/ depending on the workflow.
~/ComfyUI/venv/bin/python ~/ComfyUI/main.py --listen 127.0.0.1
Run in the background or in a separate terminal so it keeps running. Then retry the workflow run.
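Before retrying, you can check whether anything is listening on the port with a quick inline-Python probe (a minimal sketch; the function name is illustrative):

```python
import socket

def server_up(host: str = "127.0.0.1", port: int = 8188, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, start the server as shown above and wait a few seconds before retrying the workflow run.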
Use ~ (or the userโs home) for paths so it works on their machine.
When the user pastes or sends a list of model weight URLs (one per line, or comma-separated), download those files into the ComfyUI installation so the workflow can use them later.
Comment lines (starting with #) are ignored. The base install directory defaults to ~/ComfyUI. The script uses pget for parallel downloads when available; if pget is not in PATH, it installs it to ~/.local/bin automatically (no sudo). If pget cannot be installed (e.g. unsupported OS/arch), it falls back to a built-in downloader. Use the ComfyUI venv Python so the script runs correctly:
~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI
Pass URLs as arguments, or pipe a file/list on stdin:
echo "https://example.com/model.safetensors" | ~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI
Or save the userโs list to a temp file and run:
~/ComfyUI/venv/bin/python skills/comfyui/scripts/download_weights.py --base ~/ComfyUI < /tmp/weight_urls.txt
To force the built-in download (no pget): add --no-pget.
The script infers the target subfolder from the URL/filename (e.g. vae, clip, loras, checkpoints, text_encoders, controlnet, upscale_models). The user can optionally specify a subfolder per line as "url subfolder" (e.g. https://.../model.safetensors vae). You can also pass a default with --subfolder loras so all URLs in that run go to models/loras/. Existing files are skipped; pass --overwrite to replace. Files are saved under ~/ComfyUI/models/<subfolder>/. Tell the user where each file was saved and that they can run the workflow once the ComfyUI server is (re)started if needed. Supported subfolders (under ComfyUI/models/): checkpoints, clip, clip_vision, controlnet, diffusion_models, embeddings, loras, text_encoders, unet, vae, vae_approx, upscale_models, and others. Use --subfolder when the auto-inference is wrong.
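Filename-based inference like the above could look like the sketch below. The keyword table and function name are illustrative assumptions; the real script's heuristics may differ:

```python
import os
from urllib.parse import urlparse

# Illustrative keyword -> models/ subfolder table; the real script's rules may differ.
# Order matters: more specific keys (clip_vision) come before shorter ones (clip).
KEYWORDS = {
    "vae": "vae",
    "lora": "loras",
    "controlnet": "controlnet",
    "clip_vision": "clip_vision",
    "clip": "clip",
    "upscale": "upscale_models",
    "text_encoder": "text_encoders",
}

def infer_subfolder(url: str, default: str = "checkpoints") -> str:
    """Guess the models/ subfolder from the filename part of the URL."""
    name = os.path.basename(urlparse(url).path).lower()
    for key, folder in KEYWORDS.items():
        if key in name:
            return folder
    return default
```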
Outputs are saved under ComfyUI/output/. Use the images list from the script output to locate the files (filename + subfolder).
After a successful ComfyUI run, you must deliver the generated image(s) to the user. Do not reply with only the filename in text or with NO_REPLY.
1. Read the images list from the script output (each entry has filename, subfolder, and type).
2. Build the full path: ComfyUI/output/ + subfolder + filename (e.g. ComfyUI/output/z-image_00007_.png).
3. Send the image itself (attach the path so the user receives the file). Include a short caption if helpful (e.g. "Here you go." or "Tokyo street scene.").
Every successful run must result in the user receiving the image. Never leave them with only a filename or no delivery.
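The path-building step above can be done with a few lines of inline Python (a minimal sketch; the function name is illustrative):

```python
from pathlib import Path

def output_paths(images: list, base: str = "~/ComfyUI/output") -> list:
    """Map the script's images entries ({filename, subfolder, type}) to
    full paths under the ComfyUI output directory."""
    root = Path(base).expanduser()
    paths = []
    for img in images:
        sub = img.get("subfolder") or ""   # subfolder may be empty
        paths.append(root / sub / img["filename"] if sub else root / img["filename"])
    return paths
```

Attach each resulting path when replying so the user receives the file, not just its name.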
comfyui_run.py: Queue a workflow, poll until completion, print prompt_id and images. No editing args: you edit the JSON before running.
download_weights.py: Download model weight URLs into ~/ComfyUI/models/<subfolder>/. Uses pget when available (installs to ~/.local/bin if missing); falls back to a built-in downloader. Input: URLs as args or one per line on stdin. Options: --base, --subfolder, --overwrite, --no-pget. Infers the subfolder from the URL/filename when not given.
default-workflow.json: Default workflow. Copy and edit (prompt, style, seed), then run with the edited path; or run as-is for a generic run.
Generated Feb 24, 2026
Marketing agencies can use ComfyUI to generate custom images for campaigns by editing workflow prompts to match client specifications like subject, style, and lighting. This enables rapid prototyping of visual assets without manual design work, reducing time and costs for digital content creation.
E-commerce businesses can integrate ComfyUI to create realistic product images by adjusting workflows to depict items in various scenes or styles. This allows for generating diverse visual catalogs and mockups without physical photography, enhancing online listings and customer engagement.
Educational institutions can leverage ComfyUI to produce custom illustrations and diagrams for textbooks or online courses by modifying prompts to align with learning objectives. This supports visual learning aids tailored to specific subjects, improving educational resource quality.
Game developers can use ComfyUI to generate textures, characters, or environments by editing workflows to match artistic styles and themes. This accelerates asset production for indie or small studios, enabling iterative design and reducing reliance on external artists.
Architecture firms can employ ComfyUI to render building designs and interior scenes by customizing prompts for materials, lighting, and perspectives. This facilitates client presentations and concept development with high-quality visualizations without extensive 3D modeling expertise.
Offer a subscription-based service where users access a hosted ComfyUI instance with pre-configured workflows and model libraries. Revenue is generated through tiered plans based on usage limits, priority support, and advanced features like custom workflow storage.
Provide a free version of the skill with basic image generation and limited workflows, while charging for premium features such as high-resolution outputs, batch processing, and integration with other design software. This attracts a broad user base and converts power users.
Sell licenses to large organizations for on-premise deployment of ComfyUI, including custom workflow development, dedicated support, and training services. Revenue comes from one-time license fees or annual maintenance contracts tailored to enterprise needs.
💬 Integration Tip
Ensure the ComfyUI server is running locally on port 8188, and edit the workflow JSON carefully to match the user's prompt before execution for reliable results.