agent-team-orchestration

Orchestrate multi-agent teams with defined roles, task lifecycles, handoff protocols, and review workflows. Use when: (1) Setting up a team of 2+ agents with different specializations, (2) Defining task routing and lifecycle (inbox → spec → build → review → done), (3) Creating handoff protocols between agents, (4) Establishing review and quality gates, (5) Managing async communication and artifact sharing between agents.
Install via ClawdBot CLI:

```
clawdbot install arminnaimi/agent-team-orchestration
```

Production playbook for running multi-agent teams with clear roles, structured task flow, and quality gates.
A builder and a reviewer. The simplest useful team.
Orchestrator (you) → Route tasks, track state, report results
Builder agent → Execute work, produce artifacts
1. Create task record (file, DB, or task board)
2. Spawn builder with:
- Task ID and description
- Output path for artifacts
- Handoff instructions (what to produce, where to put it)
3. On completion: review artifacts, mark done, report
Builder produces artifact → Reviewer checks it → Orchestrator ships or returns
That's the core loop. Everything below scales this pattern.
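The core loop above can be sketched in a few lines. A minimal illustration in Python — the `Task` record and `spawn_builder` prompt builder are hypothetical names, and the actual spawn call depends entirely on your agent runtime:

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class Task:
    task_id: str
    description: str
    output_path: Path          # where the builder must drop artifacts
    state: str = "inbox"       # inbox -> assigned -> in_progress -> review -> done/failed
    comments: list = field(default_factory=list)

def spawn_builder(task: Task) -> str:
    """Build the spawn prompt; replace with your runtime's actual spawn call."""
    return (
        f"Task {task.task_id}: {task.description}\n"
        f"Write all artifacts under {task.output_path}.\n"
        "On completion, leave a handoff comment: what you built, "
        "how to verify it, and any known gaps."
    )

# Core loop: create record -> spawn -> (later) review artifacts, mark done, report
task = Task("T-001", "Build the auth module", Path("/shared/artifacts/auth"))
prompt = spawn_builder(task)
```

The point of the record is that the orchestrator tracks state in one place; the prompt carries the task ID, the output path, and the handoff instructions, so the builder never has to guess where artifacts go.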
Every agent has one primary role. Overlap causes confusion.
| Role | Purpose | Model guidance |
|------|---------|---------------|
| Orchestrator | Route work, track state, make priority calls | High-reasoning model (handles judgment) |
| Builder | Produce artifacts: code, docs, configs | Can use cost-effective models for mechanical work |
| Reviewer | Verify quality, push back on gaps | High-reasoning model (catches what builders miss) |
| Ops | Cron jobs, standups, health checks, dispatching | Cheapest model that's reliable |
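The role table above amounts to a role-to-model map. A hypothetical sketch — the model tier names are placeholders, not real model IDs:

```python
# Team definition mirroring the role table; one primary role per agent.
TEAM = {
    "orchestrator": {"model": "high-reasoning",  "duties": "route, track, prioritize"},
    "builder":      {"model": "cost-effective",  "duties": "produce artifacts"},
    "reviewer":     {"model": "high-reasoning",  "duties": "verify quality, push back"},
    "ops":          {"model": "cheap-reliable",  "duties": "cron, standups, health checks"},
}

def model_for(role: str) -> str:
    """Look up the model tier for a role; KeyError on an undefined role."""
    return TEAM[role]["model"]
```

Keeping this map explicit makes the no-overlap rule checkable: if a task needs two entries from the map, it is two tasks.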
→ Read references/team-setup.md when defining a new team or adding agents.
Every task moves through a defined lifecycle:
Inbox → Assigned → In Progress → Review → Done | Failed
Rules for the transitions are in the reference file.
→ Read references/task-lifecycle.md when designing task flows or debugging stuck tasks.
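The lifecycle is a small state machine, so illegal moves can be rejected mechanically. A sketch of the transition table — the `review → in_progress` back-edge for returned work is an assumption, not something the skill specifies:

```python
# Allowed transitions for the lifecycle above; anything else is rejected.
TRANSITIONS = {
    "inbox":       {"assigned"},
    "assigned":    {"in_progress"},
    "in_progress": {"review", "failed"},
    "review":      {"done", "in_progress"},  # assumed: reviewer can send work back
    "done":        set(),                    # terminal
    "failed":      set(),                    # terminal
}

def advance(state: str, new_state: str) -> str:
    """Move a task to new_state, or raise if the transition is not allowed."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```

A guard like this is what makes "stuck task" debugging tractable: every task is in exactly one known state, and every state change went through one function.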
When work passes between agents, the handoff message states what was produced, where it lives, how to verify it, known issues, and what the next agent should do.
Bad handoff: "Done, check the files."
Good handoff: "Built auth module at /shared/artifacts/auth/. Run npm test auth to verify. Known issue: rate limiting not implemented yet. Next: reviewer checks error handling edge cases."
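The good handoff above has a repeatable shape, which a small structure can enforce. A hypothetical sketch (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    artifact_path: str   # where the output lives
    verify_cmd: str      # how the next agent checks it
    known_issues: str    # gaps the next agent should not rediscover
    next_step: str       # what the receiving agent should do

    def render(self) -> str:
        """Produce the handoff comment in the 'good handoff' shape."""
        return (f"Built at {self.artifact_path}. Run `{self.verify_cmd}` to verify. "
                f"Known issue: {self.known_issues}. Next: {self.next_step}.")
```

Filling the struct forces the builder to answer all four questions; an empty field is visible at a glance, where a free-text "done, check the files" hides the gap.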
Cross-role reviews prevent quality drift:
Skip the review step and quality degrades within 3-5 tasks. Every time.
→ Read references/communication.md when setting up agent communication channels.
→ Read references/patterns.md for proven multi-step workflows.
| File | Read when... |
|------|-------------|
| team-setup.md | Defining agents, roles, models, workspaces |
| task-lifecycle.md | Designing task states, transitions, comments |
| communication.md | Setting up async/sync communication, artifact paths |
| patterns.md | Implementing specific workflows (spec → build → test, parallel research, escalation) |
Agent produces great work, but you can't find it. Always specify the exact output path in the spawn prompt. Use a shared artifacts directory with predictable structure.
"It's a small change, skip review." Do this three times and you have compounding errors. Every artifact gets at least one set of eyes that didn't produce it.
Silent agents create coordination blind spots. Require comments at: start, blocker, handoff, completion. If an agent goes silent, assume it's stuck.
Assigning browser-based testing to an agent without browser access. Assigning image work to a text-only model. Check capabilities before routing.
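The capability check can be a simple set-containment test before routing. A hypothetical sketch (agent names and capability tags are illustrative):

```python
# Illustrative capability registry; populate from your real agent configs.
AGENT_CAPS = {
    "builder-1":  {"code", "docs"},
    "browser-qa": {"code", "browser"},
}

def route(task_needs: set[str], agents: dict[str, set[str]]) -> str:
    """Return the first agent whose capabilities cover the task's needs."""
    for name, caps in agents.items():
        if task_needs <= caps:  # subset test: agent covers everything required
            return name
    raise LookupError(f"no agent covers {task_needs}")
```

Failing loudly at routing time is the cheap version of this check; the expensive version is discovering mid-task that the assigned agent never had a browser.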
The orchestrator routes and tracks; it doesn't build. The moment you start "just quickly doing this one thing," you've lost oversight of the rest of the team.
For a one-off task, call sessions_spawn directly. This skill is for sustained team workflows: recurring collaboration patterns with multiple handoffs, where agents depend on each other's output over multiple tasks.
Generated Mar 1, 2026
A team of AI agents collaborates on coding projects, with a builder agent writing code, a reviewer agent checking for bugs and adherence to specifications, and an orchestrator managing task assignments and deadlines. This ensures efficient development cycles and high-quality outputs in agile environments.
Multiple AI agents work together to produce marketing materials, where one agent drafts content, another reviews for brand consistency and SEO, and an orchestrator coordinates timelines and approvals. This streamlines content production for blogs, social media, and advertisements.
AI agents handle customer inquiries, with initial responders triaging issues, specialized agents resolving complex technical problems, and an orchestrator monitoring response times and satisfaction metrics. This improves support efficiency in e-commerce or SaaS industries.
A team of AI agents conducts market research, with one agent gathering data, another analyzing trends, and an orchestrator synthesizing reports and prioritizing insights. This supports data-driven decision-making in finance or consulting sectors.
AI agents manage infrastructure tasks, such as code deployment, monitoring, and incident response, with builders executing scripts, reviewers validating configurations, and an orchestrator ensuring system reliability and compliance. This enhances operational efficiency in IT services.
Offer the orchestration skill as a cloud-based service with tiered pricing based on team size and features, targeting businesses needing scalable AI collaboration tools. Revenue is generated through monthly or annual subscriptions with add-ons for premium support.
Provide customized setup and integration services for enterprises adopting multi-agent workflows, including training and ongoing optimization. Revenue comes from project-based fees and retainer agreements for maintenance and updates.
Deploy a free basic version of the skill with limited agents and tasks, encouraging adoption by small teams, and monetize through paid upgrades for advanced features like analytics and priority support. Revenue is driven by conversion to premium plans.
💬 Integration Tip
Start with a minimal 2-agent team to test workflows; keep artifact paths explicit and review steps mandatory to avoid common pitfalls like quality drift.
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Clau...
Helps users discover and install agent skills when they ask questions like "how do I do X", "find a skill for X", "is there a skill that can...", or express interest in extending capabilities. This skill should be used when the user is looking for functionality that might exist as an installable skill.
Search and analyze your own session logs (older/parent conversations) using jq.
Typed knowledge graph for structured agent memory and composable skills. Use when creating/querying entities (Person, Project, Task, Event, Document), linking related objects, enforcing constraints, planning multi-step actions as graph transformations, or when skills need to share state. Trigger on "remember", "what do I know about", "link X to Y", "show dependencies", entity CRUD, or cross-skill data access.
Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vibe-coding ready.
Headless browser automation CLI optimized for AI agents with accessibility tree snapshots and ref-based element selection