# agent-orchestrator-molter-102

Multi-agent orchestration with 5 proven patterns: Work Crew, Supervisor, Pipeline, Council, and Auto-Routing.
Install via the ClawdBot CLI:

```shell
clawdbot install variable190/agent-orchestrator-molter-102
```

Multi-agent orchestration for OpenClaw. Implements 5 proven patterns for coordinating multiple AI agents: Work Crew, Supervisor, Pipeline, Expert Council, and Auto-Routing.
| Pattern | Use When | Avoid When |
|---------|----------|------------|
| crew | Same task from multiple angles, verification, research breadth | Results cannot be easily compared/merged |
| supervise | Dynamic decomposition needed, complex planning | Fixed workflow, simple delegation |
| pipeline | Well-defined sequential stages, content creation | Path needs runtime adaptation |
| council | Cross-domain expertise, risk assessment, policy review | Single-domain task, need fast consensus |
| route | Mixed workload types, automatic classification | Task type is already known |
The route command analyzes tasks and automatically classifies them by type, then routes to the appropriate specialist:
```shell
# Basic routing
claw agent-orchestrator route --task "Write Python parser"

# With custom specialist pool
claw agent-orchestrator route \
  --task "Analyze data and create report" \
  --specialists "analyst,data,writer"

# Force specific specialist
claw agent-orchestrator route \
  --task "Something complex" \
  --force coder
```
Available specialists: coder, researcher, writer, analyst, planner, reviewer, creative, data, devops, support
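The classifier's internals are not documented here, but a keyword-scoring router gives the flavor of what `route` does. Everything below — the keyword table, the scoring rule, and the confidence value — is a hypothetical sketch, not the skill's actual implementation:

```python
# Hypothetical keyword-based router; illustration only, not the skill's
# real classifier. Keywords and scoring rule are invented for this sketch.
SPECIALIST_KEYWORDS = {
    "coder": ["python", "function", "debug", "refactor", "parser"],
    "analyst": ["analyze", "data", "metrics", "csv"],
    "writer": ["write", "draft", "blog", "report"],
    "researcher": ["research", "investigate", "survey"],
}

def route(task, specialists=None, force=None):
    """Pick the specialist whose keywords best match the task text."""
    if force:
        return force, 1.0  # --force bypasses classification entirely
    pool = specialists or list(SPECIALIST_KEYWORDS)
    words = task.lower().split()
    scores = {s: sum(w in words for w in SPECIALIST_KEYWORDS.get(s, []))
              for s in pool}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return best, scores[best] / total  # crude confidence: share of matches

best, confidence = route("Write Python parser")
```

A real router would use a model call rather than keyword matching, but the shape is the same: classify, score confidence, dispatch.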
```shell
# Parallel research with consensus
claw agent-orchestrator crew \
  --task "Research Bitcoin Lightning 2026 adoption" \
  --agents 4 \
  --perspectives technical,business,security,competitors \
  --converge consensus

# Best-of redundancy for critical analysis
claw agent-orchestrator crew \
  --task "Audit this smart contract for vulnerabilities" \
  --agents 3 \
  --converge best-of
```
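Structurally, the crew pattern is a fan-out over perspectives followed by a convergence step. A minimal Python sketch of that shape, with `run_agent` standing in for spawning a real agent session (not the skill's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task, perspective):
    # Stand-in for a real agent session; a real crew would spawn one
    # session per perspective and collect each agent's findings.
    return f"[{perspective}] findings on: {task}"

def crew(task, perspectives, converge="consensus"):
    # Fan out: run every perspective concurrently.
    with ThreadPoolExecutor(max_workers=len(perspectives)) as pool:
        results = list(pool.map(lambda p: run_agent(task, p), perspectives))
    if converge == "best-of":
        # A real implementation would score candidates; here, pick the longest.
        return max(results, key=len)
    # "consensus": merge all perspectives into one combined report.
    return "\n".join(results)

report = crew("Lightning adoption", ["technical", "business", "security"])
```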
```shell
# Supervisor-managed code review
claw agent-orchestrator supervise \
  --task "Refactor authentication module" \
  --workers coder,reviewer,tester \
  --strategy adaptive
```
```shell
# Staged content pipeline
claw agent-orchestrator pipeline \
  --stages research,draft,review,finalize \
  --input "topic: AI agent adoption trends"
```
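The pipeline pattern is plain sequential composition: each stage consumes the previous stage's output. A minimal sketch under that assumption, with stand-in stage functions:

```python
from functools import reduce

# Stand-in stages: each takes the running text and appends its contribution.
def research(text): return text + " | researched"
def draft(text):    return text + " | drafted"
def review(text):   return text + " | reviewed"
def finalize(text): return text + " | final"

def pipeline(stages, input_text):
    # Thread the input through every stage in order.
    return reduce(lambda acc, stage: stage(acc), stages, input_text)

out = pipeline([research, draft, review, finalize], "topic: AI agent trends")
```

Because the stage order is fixed up front, there is no runtime decision-making — which is exactly why the table above says to prefer supervise when the path needs adaptation.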
```shell
# Expert council for decision
claw agent-orchestrator council \
  --question "Should we publish this blog post about unreleased features?" \
  --experts skeptic,ethicist,strategist \
  --converge consensus \
  --rounds 2
```
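A council converges over rounds: each expert votes, sees the prior tally, and may update. The experts below are deterministic stand-ins and the voting rule is an assumption for illustration, not the skill's actual consensus mechanism:

```python
from collections import Counter

# Stand-in experts: each sees the question and the previous round's tally.
def skeptic(question, tally):  return "no"
def ethicist(question, tally): return "no"
def strategist(question, tally):
    # Defers to a clear majority from the previous round, else votes yes.
    if tally and tally.most_common(1)[0][1] > sum(tally.values()) // 2:
        return tally.most_common(1)[0][0]
    return "yes"

def council(question, experts, rounds=2):
    tally = Counter()
    for _ in range(rounds):
        tally = Counter(expert(question, tally) for expert in experts)
        if len(tally) == 1:  # unanimous: consensus reached, stop early
            break
    return tally.most_common(1)[0][0], dict(tally)

decision, votes = council("Publish pre-release post?",
                          [skeptic, ethicist, strategist])
```

In this sketch the strategist flips to the majority in round two, which is the "need fast consensus" cost the table warns about: agreement can take multiple rounds.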
```shell
# Auto-route mixed tasks
claw agent-orchestrator route \
  --task "Write Python function to analyze CSV data" \
  --specialists coder,researcher,writer,analyst

# Force route to specific specialist
claw agent-orchestrator route \
  --task "Debug authentication error" \
  --force coder \
  --confidence-threshold 0.9

# Route and output as JSON for scripting
claw agent-orchestrator route \
  --task "$TASK" \
  --format json \
  --specialists "coder,data,analyst"
```
DON'T: Use crew for simple single-answer questions

```shell
# WRONG: Wasteful for simple facts
claw agent-orchestrator crew --task "What is 2+2?" --agents 3

# RIGHT: Use the main session directly
#   "What is 2+2?"
```

DON'T: Use supervise when pipeline suffices

```shell
# WRONG: Over-engineering fixed workflows
claw agent-orchestrator supervise --task "Draft, edit, publish"

# RIGHT: Use pipeline for fixed sequences
claw agent-orchestrator pipeline --stages draft,edit,publish
```

DON'T: Route when task type is obvious

```shell
# WRONG: Unnecessary classification overhead
claw agent-orchestrator route --task "Write Python code"

# RIGHT: Direct to appropriate specialist
claw agent-orchestrator crew --pattern code --task "Write Python code"
```

DON'T: Use multi-agent for very small context tasks

```shell
# WRONG: Coordination overhead exceeds value
claw agent-orchestrator crew --task "Fix typo" --agents 2

# RIGHT: Single agent or direct edit
edit file.py "typo" "correct"
```
Multi-agent patterns use approximately 15x more tokens than single-agent interactions. Use only for high-value tasks where quality improvement justifies the cost. See Anthropic research: token usage explains 80% of performance variance in complex tasks.
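To make the ~15x figure concrete, here is a back-of-envelope cost check. The token counts and per-token price are illustrative assumptions, not real rates:

```python
# Back-of-envelope cost check for the ~15x token multiplier quoted above.
# All numbers are illustrative assumptions, not measured or published rates.
single_agent_tokens = 20_000
multi_agent_tokens = single_agent_tokens * 15  # ~15x per the note above
price_per_million = 3.00                       # assumed $ per 1M tokens

single_cost = single_agent_tokens / 1_000_000 * price_per_million
multi_cost = multi_agent_tokens / 1_000_000 * price_per_million
print(f"single: ${single_cost:.2f}  crew: ${multi_cost:.2f}")
```

The multiplier applies to cost directly, so a crew run is only worth it when the expected quality gain is worth roughly fifteen single-agent runs.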
- `main.py` - CLI entry point
- `crew.py` - Work Crew pattern implementation
- `supervise.py` - Supervisor pattern (Phase 2)
- `council.py` - Expert Council pattern (Phase 2)
- `pipeline.py` - Pipeline pattern (Phase 2)
- `route.py` - Auto-Routing pattern (Phase 2)
- `utils.py` - Shared utilities for session management

Generated Feb 24, 2026
A marketing team needs to quickly gather insights on emerging trends in a new market segment. Using the crew pattern with multiple agents, they can parallelize research across technical, business, and competitor perspectives to synthesize a comprehensive report, saving time compared to sequential analysis.
A software engineering team is refactoring a critical authentication module. They use the supervise pattern to dynamically delegate tasks among coder, reviewer, and tester agents, ensuring thorough testing and quality control without manual coordination overhead.
A media company produces blog posts on AI trends. They employ the pipeline pattern to sequence stages like research, drafting, review, and finalization, streamlining content production with consistent quality and reducing bottlenecks in editorial workflows.
A financial institution evaluates whether to publish sensitive information in a blog post. Using the council pattern with experts in ethics, strategy, and skepticism, they gain cross-domain insights to make high-stakes decisions with confidence through consensus.
A support team handles mixed inquiries like debugging code or analyzing data. The auto-routing pattern automatically classifies tasks and directs them to appropriate specialists such as coder or analyst, improving response efficiency and accuracy.
Offer this skill as part of a subscription-based platform for businesses needing automated multi-agent workflows. Revenue comes from tiered pricing based on usage volume, with premium features like advanced routing and custom specialist pools.
Provide consulting services to help organizations implement and customize the orchestrator for specific use cases, such as research or content pipelines. Revenue is generated through project-based fees and ongoing support contracts.
Deploy a free version with basic patterns like crew and pipeline, then upsell advanced features such as auto-routing with high confidence thresholds or additional specialists. Revenue scales with increased token usage and premium add-ons.
💬 Integration Tip
Start with simple patterns like crew for parallel tasks to minimize overhead, and ensure your OpenClaw setup supports session spawning and history capabilities for smooth integration.