nodetool — Visual AI workflow builder: ComfyUI meets n8n for LLM agents, RAG pipelines, and multimodal data flows. Local-first, open source (AGPL-3.0).
Install via ClawdBot CLI:
clawdbot install georgi/nodetool
Visual AI workflow builder combining ComfyUI's node-based flexibility with n8n's automation power. Build LLM agents, RAG pipelines, and multimodal data flows on your local machine.
# See system info
nodetool info
# List workflows
nodetool workflows list
# Run a workflow interactively
nodetool run <workflow_id>
# Start the chat interface
nodetool chat
# Start the web server
nodetool serve
Quick one-line installation (Linux/macOS):
curl -fsSL https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.sh | bash
With custom directory:
curl -fsSL https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.sh | bash -s -- --prefix ~/.nodetool
Non-interactive mode (automatic, no prompts):
Both scripts support silent installation:
# Linux/macOS - use -y
curl -fsSL https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.sh | bash -s -- -y
# Windows - use -Yes
irm https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.ps1 -OutFile install.ps1; .\install.ps1 -Yes
In non-interactive mode, the installer accepts the defaults and skips all prompts.
Quick one-line installation (Windows PowerShell):
irm https://raw.githubusercontent.com/nodetool-ai/nodetool/refs/heads/main/install.ps1 | iex
With custom directory:
.\install.ps1 -Prefix "C:\nodetool"
Non-interactive mode:
.\install.ps1 -Yes
Manage and execute NodeTool workflows:
# List all workflows (user + example)
nodetool workflows list
# Get details for a specific workflow
nodetool workflows get <workflow_id>
# Run workflow by ID
nodetool run <workflow_id>
# Run workflow from file
nodetool run workflow.json
# Run with JSONL output (for automation)
nodetool run <workflow_id> --jsonl
Execute workflows in different modes:
# Interactive mode (default) - pretty output
nodetool run workflow_abc123
# JSONL mode - streaming JSON for subprocess use
nodetool run workflow_abc123 --jsonl
# Stdin mode - pipe RunJobRequest JSON
echo '{"workflow_id":"abc","user_id":"1","auth_token":"token","params":{}}' | nodetool run --stdin --jsonl
# With custom user ID
nodetool run workflow_abc123 --user-id "custom_user_id"
# With auth token
nodetool run workflow_abc123 --auth-token "my_auth_token"
Manage workflow assets (nodes, models, files):
# List all assets
nodetool assets list
# Get asset details
nodetool assets get <asset_id>
Manage NodeTool packages (export workflows, generate docs):
# List packages
nodetool package list
# Generate documentation
nodetool package docs
# Generate node documentation
nodetool package node-docs
# Generate workflow documentation (Jekyll)
nodetool package workflow-docs
# Scan directory for nodes and create package
nodetool package scan
# Initialize new package project
nodetool package init
Manage background job executions:
# List jobs for a user
nodetool jobs list
# Get job details
nodetool jobs get <job_id>
# Get job logs
nodetool jobs logs <job_id>
# Start background job for workflow
nodetool jobs start <workflow_id>
Deploy NodeTool to cloud platforms (RunPod, GCP, Docker):
# Initialize deployment.yaml
nodetool deploy init
# List deployments
nodetool deploy list
# Add new deployment
nodetool deploy add
# Apply deployment configuration
nodetool deploy apply
# Check deployment status
nodetool deploy status <deployment_name>
# View deployment logs
nodetool deploy logs <deployment_name>
# Destroy deployment
nodetool deploy destroy <deployment_name>
# Manage collections on deployed instance
nodetool deploy collections
# Manage database on deployed instance
nodetool deploy database
# Manage workflows on deployed instance
nodetool deploy workflows
# See what changes will be made
nodetool deploy plan
Discover and manage AI models (HuggingFace, Ollama):
# List cached HuggingFace models by type
nodetool model list-hf <hf_type>
# List all HuggingFace cache entries
nodetool model list-hf-all
# List supported HF types
nodetool model hf-types
# Inspect HuggingFace cache
nodetool model hf-cache
# Scan cache for info
nodetool admin scan-cache
Maintain model caches and clean up:
# Calculate total cache size
nodetool admin cache-size
# Delete HuggingFace model from cache
nodetool admin delete-hf <model_name>
# Download HuggingFace models with progress
nodetool admin download-hf <model_name>
# Download Ollama models
nodetool admin download-ollama <model_name>
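For reference, totaling a cache's disk usage (the kind of figure `admin cache-size` reports) is a plain directory walk. A rough sketch, not NodeTool's actual implementation:

```python
import os

def cache_size_bytes(root: str) -> int:
    """Sum file sizes under a cache directory (e.g. $HF_HOME)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip broken symlinks
                total += os.path.getsize(path)
    return total
```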
Interactive chat and web interface:
# Start CLI chat
nodetool chat
# Start chat server (WebSocket + SSE)
nodetool chat-server
# Start FastAPI backend server
nodetool serve --host 0.0.0.0 --port 8000
# With static assets folder
nodetool serve --static-folder ./static --apps-folder ./apps
# Development mode with auto-reload
nodetool serve --reload
# Production mode
nodetool serve --production
Start reverse proxy with HTTPS:
# Start proxy server
nodetool proxy
# Check proxy status
nodetool proxy-status
# Validate proxy config
nodetool proxy-validate-config
# Run proxy daemon with ACME HTTP + HTTPS
nodetool proxy-daemon
# View settings and secrets
nodetool settings show
# Generate custom HTML app for workflow
nodetool vibecoding
# Run workflow and export as Python DSL
nodetool dsl-export
# Export workflow as Gradio app
nodetool gradio-export
# Regenerate DSL
nodetool codegen
# Manage database migrations
nodetool migrations
# Synchronize database with remote
nodetool sync
Run a NodeTool workflow and get structured output:
# Run workflow interactively
nodetool run my_workflow_id
# Run and stream JSONL output
nodetool run my_workflow_id --jsonl | jq -r '"\(.status) | \(.output)"'
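The same filtering works without jq. A Python sketch that consumes the JSONL stream line by line; the field names `status` and `output` are taken from the jq example above and may not match every event type:

```python
import json

def format_events(lines):
    """Render 'status | output' for each non-empty JSONL event line."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        yield f"{event.get('status')} | {event.get('output')}"

# Usage: pipe stdin through it, e.g.
#   nodetool run my_workflow_id --jsonl | python format_events.py
# with `for s in format_events(sys.stdin): print(s)` as the driver.
```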
Generate documentation for a custom package:
# Scan for nodes and create package
nodetool package scan
# Generate complete documentation
nodetool package docs
Deploy a NodeTool instance to the cloud:
# Initialize deployment config
nodetool deploy init
# Add RunPod deployment
nodetool deploy add
# Deploy and start
nodetool deploy apply
Check and manage cached AI models:
# List all available models
nodetool model list-hf-all
# Inspect cache
nodetool model hf-cache
The installer sets up:
- A conda environment at ~/.nodetool/env
- nodetool-core and nodetool-base from the NodeTool registry
- The nodetool CLI, available from any terminal

After installation, these variables are automatically configured:
# Conda environment
export MAMBA_ROOT_PREFIX="$HOME/.nodetool/micromamba"
export PATH="$HOME/.nodetool/env/bin:$HOME/.nodetool/env/Library/bin:$PATH"
# Model cache directories
export HF_HOME="$HOME/.nodetool/cache/huggingface"
export OLLAMA_MODELS="$HOME/.nodetool/cache/ollama"
Check NodeTool environment and installed packages:
nodetool info
Marketing teams can build visual workflows to generate blog posts, social media content, and ad copy by chaining LLM nodes with image generation and editing tools. This enables rapid prototyping of multimodal campaigns directly on local machines, ensuring data privacy and reducing reliance on cloud services.
Law firms can create custom retrieval-augmented generation workflows to query internal case databases and legal documents securely. By running pipelines locally, they maintain confidentiality while automating document summarization and precedent analysis, improving research efficiency.
Educators and students in computer science can use the node-based interface to visually construct and test LLM agents and data flows. This hands-on approach helps teach AI concepts without coding, supporting interactive lessons on automation and multimodal AI applications.
Support teams can design workflows that integrate LLMs with ticketing systems and knowledge bases to automate responses and triage inquiries. Running locally ensures sensitive customer data isn't exposed to third-party APIs, while streamlining support operations.
Researchers can build visual pipelines to combine text reports with medical imaging data, using LLM nodes for analysis and summarization. This facilitates rapid experimentation with AI models on-premises, adhering to strict data compliance regulations in healthcare.
Offer paid consulting, customization, and premium support services to businesses using the AGPL-3.0 licensed tool. This includes tailored workflow development, integration assistance, and priority updates, generating revenue from enterprises needing reliable local AI solutions.
Create a platform where users can buy and sell node-based workflow templates for specific tasks like content generation or data analysis. Revenue comes from transaction fees or subscriptions, leveraging the community to expand the tool's utility and adoption.
Develop and sell online courses, workshops, and certification for mastering NodeTool in AI workflow design. Target professionals and teams looking to upskill in visual AI automation, with revenue from course fees and corporate training packages.
💬 Integration Tip
Leverage the --jsonl flag for automation by piping workflow outputs into other tools, and use non-interactive installation for seamless CI/CD pipeline integration.