dispatching-parallel-agents
Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies.
Install via ClawdBot CLI:
clawdbot install zlc000190/dispatching-parallel-agents

When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
Core principle: Dispatch one agent per independent problem domain. Let them work concurrently.
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
"Are they independent?" -> "Single agent investigates all" [label="no - related"];
"Are they independent?" -> "Can they work in parallel?" [label="yes"];
"Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
"Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
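The decision flow above can be sketched as a small function. This is a sketch only; the `independent` and `sharedState` fields are illustrative labels for judgments you make by reading the failures, not a real API:

```typescript
type Strategy = "single-agent" | "sequential-agents" | "parallel-dispatch";

interface FailureGroup {
  independent: boolean; // unrelated root cause from the other groups?
  sharedState: boolean; // would its agent touch files/resources others need?
}

// Mirrors the diagram: related failures get one agent, shared state
// forces sequential agents, otherwise dispatch in parallel.
function chooseStrategy(groups: FailureGroup[]): Strategy {
  if (groups.length < 2) return "single-agent";
  if (groups.some((g) => !g.independent)) return "single-agent";
  if (groups.some((g) => g.sharedState)) return "sequential-agents";
  return "parallel-dispatch";
}
```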
Use when:
- 2+ failures with unrelated root causes (different test files, subsystems, or bugs)
- Each investigation can proceed without shared state or sequential dependencies

Don't use when:
- Failures are related, you need whole-system context, you're still exploring, or agents would touch shared state (detailed below)
Group failures by what's broken - for example: abort handling in agent-tool-abort.test.ts, batch completion in batch-completion-behavior.test.ts, and approval races in tool-approval-race-conditions.test.ts.
Each domain is independent - fixing tool approval doesn't affect abort tests.
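Grouping can be mechanical: bucket raw failures by test file, one bucket per problem domain. The `TestFailure` shape below is assumed for illustration:

```typescript
interface TestFailure {
  file: string; // e.g. "src/agents/agent-tool-abort.test.ts"
  testName: string;
  message: string;
}

// One bucket per file becomes one problem domain - and later, one agent.
function groupByFile(failures: TestFailure[]): Map<string, TestFailure[]> {
  const groups = new Map<string, TestFailure[]>();
  for (const f of failures) {
    const bucket = groups.get(f.file) ?? [];
    bucket.push(f);
    groups.set(f.file, bucket);
  }
  return groups;
}
```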
Each agent gets:
- A specific, narrow scope (one test file or subsystem)
- The exact failing test names and error messages
- Explicit constraints on what it may change
- A required output format for its findings
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
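Under the hood, parallel dispatch just means starting every agent before awaiting any of them. A minimal sketch, with `dispatchAgent` as a hypothetical stand-in for whatever Task/sub-agent API your environment provides:

```typescript
// Hypothetical stand-in for the environment's sub-agent API.
async function dispatchAgent(prompt: string): Promise<string> {
  // ...would launch an agent and resolve with its summary...
  return `summary for: ${prompt}`;
}

// Start all agents first (concurrently), then await every result.
async function dispatchAll(prompts: string[]): Promise<string[]> {
  const running = prompts.map((p) => dispatchAgent(p));
  return Promise.all(running);
}
```

The key detail is that `map` kicks off every call before `Promise.all` awaits them; a `for...of` loop with `await` in the body would serialize the agents instead.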
When agents return, each reports a summary of what it found and fixed; you verify the fixes don't conflict and run the full suite.
Good agent prompts are specific, contextual, and constrained. For example:
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
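The "event-based waiting" the prompt calls for usually means resolving on the actual signal rather than sleeping for a guessed duration, with a timeout kept only as a safety net. A minimal sketch using Node's EventEmitter; the event name and helper are illustrative, not the project's actual code:

```typescript
import { EventEmitter } from "node:events";

// Instead of:  await new Promise(r => setTimeout(r, 500));  // arbitrary, flaky
// wait for the real signal, failing loudly if it never arrives:
function waitForAbort(tool: EventEmitter, timeoutMs = 5000): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error("timed out waiting for 'aborted'")),
      timeoutMs
    );
    tool.once("aborted", () => {
      clearTimeout(timer); // event won: cancel the safety net
      resolve();
    });
  });
}
```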
❌ Too broad: "Fix all the tests" - agent gets lost
✅ Specific: "Fix agent-tool-abort.test.ts" - focused scope
❌ No context: "Fix the race condition" - agent doesn't know where
✅ Context: Paste the error messages and test names
❌ No constraints: Agent might refactor everything
✅ Constraints: "Do NOT change production code" or "Fix tests only"
❌ Vague output: "Fix it" - you don't know what changed
✅ Specific: "Return summary of root cause and changes"
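One way to make the checklist hard to skip is to build every prompt from required parts; a missing constraint or return format then becomes a type error. The field names below are illustrative:

```typescript
interface AgentPrompt {
  file: string;          // specific scope - never "all the tests"
  failures: string[];    // pasted test names and error messages
  constraints: string[]; // e.g. "Do NOT just increase timeouts"
  returnFormat: string;  // e.g. "Summary of root cause and changes"
}

function buildPrompt(p: AgentPrompt): string {
  return [
    `Fix the ${p.failures.length} failing test(s) in ${p.file}:`,
    ...p.failures.map((f, i) => `${i + 1}. ${f}`),
    ...p.constraints,
    `Return: ${p.returnFormat}`,
  ].join("\n");
}
```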
Don't dispatch parallel agents when:
- Related failures: fixing one might fix others - investigate together first
- Need full context: understanding requires seeing the entire system
- Exploratory debugging: you don't know what's broken yet
- Shared state: agents would interfere (editing the same files, using the same resources)
Scenario: 6 test failures across 3 files after major refactoring.
Failures: spread across agent-tool-abort.test.ts, batch-completion-behavior.test.ts, and tool-approval-race-conditions.test.ts.
Decision: Independent domains - abort logic separate from batch completion separate from race conditions
Dispatch:
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
Results:
Integration: All fixes independent, no conflicts, full suite green
Time saved: 3 problems solved in parallel vs sequentially
After agents return:
1. Read each agent's summary of root cause and changes
2. Check that no two agents edited the same files
3. Run the full test suite to confirm everything is green
From debugging session (2025-10-03):
Generated Mar 1, 2026
When multiple test files fail with unrelated root causes, such as timing issues in one file and logic bugs in another, this skill allows dispatching separate AI agents to fix each file concurrently. This reduces debugging time by parallelizing investigations, ensuring each agent focuses on a specific domain without interference.
In a support system with multiple unrelated customer issues, like billing errors and technical glitches, this skill enables assigning distinct AI agents to handle each ticket type simultaneously. Agents work independently to resolve problems, speeding up response times and improving efficiency without shared state conflicts.
For moderating user-generated content on different platforms, such as social media posts and forum comments, this skill dispatches parallel agents to review each platform's content independently. Each agent analyzes specific rule violations concurrently, enhancing moderation speed and accuracy without cross-platform dependencies.
When analyzing multiple financial reports from different departments, like sales and expenses, this skill uses parallel agents to process each report separately. Agents extract key metrics and identify anomalies independently, allowing for faster compilation of insights without sequential bottlenecks in data processing.
In a medical setting with multiple patient cases requiring different diagnostic tests, this skill assigns AI agents to analyze each test result concurrently. Agents focus on specific conditions, such as imaging scans or lab reports, enabling quicker diagnosis by handling independent tasks in parallel without shared patient data conflicts.
Offer a cloud-based platform where developers subscribe to use AI agents for parallel debugging of test failures. Revenue comes from monthly fees based on usage tiers, providing cost savings through reduced investigation time and improved software quality for clients.
Provide consulting to businesses looking to optimize workflows by implementing parallel agent dispatch. Revenue is generated through project-based fees for setup and training, helping clients speed up tasks like customer support or content moderation with tailored AI solutions.
License the skill's API to other software companies for embedding parallel agent capabilities into their products. Revenue comes from licensing fees per API call or annual contracts, enabling partners to enhance their offerings with efficient concurrent task handling in various industries.
💬 Integration Tip
Start by identifying clearly independent tasks, such as separate test files or unrelated customer issues, to avoid conflicts and ensure smooth parallel execution.