parallel-agents

Spawns real AI-powered OpenClaw sub-sessions to run multiple specialized agents concurrently for content, dev, QA, docs, and autonomous workflows.

Install via ClawdBot CLI:

clawdbot install jdalbright/parallel-agents

🚀 Execute tasks with ACTUAL AI-powered parallel agents using OpenClaw's sessions_spawn.
⚠️ HONEST STATUS: This skill has been rewritten to use REAL AI via sessions_spawn.
Previously it simulated agents with templates. Now it ACTUALLY spawns AI sub-sessions.
The orchestrator MUST be called from within an OpenClaw agent session, NOT as a standalone script.
Why? The tools module (which provides sessions_spawn) is only available in the agent's runtime context, not in subprocess/exec calls.
✅ CORRECT: Call sessions_spawn directly from agent code (see USAGE-GUIDE.md)
❌ INCORRECT: Run the orchestrator as a standalone Python script via exec/subprocess
📖 SEE: USAGE-GUIDE.md for tested working examples and patterns
This skill provides 4 levels of agent automation:
| Level | Feature | What It Does |
|-------|---------|--------------|
| 1 | Task Agents (16 types) | Specialized agents for content, dev, QA, docs |
| 2 | Meta Agents (4 types) | Agents that create, review, refine, and orchestrate other agents |
| 3 | Iterative Refinement | Automatic quality improvement loop (Creator → Reviewer → Refiner) |
| 4 | Agent Orchestrator | Fully autonomous workflow management - just ask and it handles everything |
Proven Capabilities:
This skill creates real AI sub-sessions using OpenClaw's sessions_spawn tool. Each "agent" is:
Previous version: Subprocess workers with templates ❌
Current version: Real spawned AI sessions ✅
From within an OpenClaw agent (like Scout):
# Spawn multiple agents in parallel using sessions_spawn tool directly
from tools import sessions_spawn
# Agent 1: Research task
result1 = sessions_spawn(
    task="Research and provide: Top 3 gay-friendly bars in Savannah. Return as JSON.",
    runTimeoutSeconds=90,
    cleanup="delete"
)

# Agent 2: Different research task
result2 = sessions_spawn(
    task="Research and provide: Best restaurants for birthday dinner. Return as JSON.",
    runTimeoutSeconds=90,
    cleanup="delete"
)

# Agent 3: Another parallel task
result3 = sessions_spawn(
    task="Research and provide: Top photo spots in Savannah. Return as JSON.",
    runTimeoutSeconds=90,
    cleanup="delete"
)
# All 3 agents now running in parallel!
# Check results with sessions_list() and sessions_history()
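The three spawn calls above share one pattern, so they can be folded into a small helper. This is an illustrative sketch, not part of the skill's API: the spawn function is injected as a parameter so the helper can be exercised outside OpenClaw, and inside an agent session you would pass sessions_spawn itself.

```python
def spawn_research_agents(spawn, topics, timeout=90):
    """Spawn one research agent per topic using the injected spawn function.

    `spawn` is assumed to follow the sessions_spawn signature shown above
    (task, runTimeoutSeconds, cleanup).
    """
    return [
        spawn(
            task=f"Research and provide: {topic}. Return as JSON.",
            runTimeoutSeconds=timeout,
            cleanup="delete",
        )
        for topic in topics
    ]
```

Inside a session this would be called as `spawn_research_agents(sessions_spawn, [...])` with the three topics above.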
# This WON'T work - tools module not available in subprocess
python3 ~/.openclaw/skills/parallel-agents/ai_orchestrator.py
from ai_orchestrator import RealAIParallelOrchestrator, AgentTask
# Create orchestrator
orch = RealAIParallelOrchestrator(max_concurrent=5)
# Define tasks
tasks = [
    AgentTask(
        agent_type='content_writer_funny',
        task_description='Write a caption about gym life',
        input_data={'tone': 'motivational'}
    ),
    AgentTask(
        agent_type='content_writer_creative',
        task_description='Write a caption about gym life',
        input_data={'tone': 'inspirational'}
    ),
]
# Execute in parallel (ACTUALLY spawns AI sessions)
results = orch.run_parallel(tasks)
┌──────────────────────────────────────────────┐
│                 Main Session                 │
│           (Your OpenClaw Instance)           │
│                  🧠 Host AI                  │
└──────────────────────┬───────────────────────┘
                       │ sessions_spawn (REAL)
                       │
      ┌────────────┬───┴────────┬────────────┐
      │            │            │            │
 ┌────▼────┐  ┌────▼────┐  ┌────▼────┐  ┌────▼────┐
 │ Agent 1 │  │ Agent 2 │  │ Agent 3 │  │ Agent N │
 │   📝    │  │   💻    │  │   🔍    │  │   🎨    │
 │ REAL AI │  │ REAL AI │  │ REAL AI │  │ REAL AI │
 │ Session │  │ Session │  │ Session │  │ Session │
 └─────────┘  └─────────┘  └─────────┘  └─────────┘
Each agent is spawned with:
from tools import sessions_spawn
result = sessions_spawn(
    task=agent_prompt,              # Full task description
    agent_id=f"agent_{type}_{id}",  # Unique identifier
    model="kimi-coding/k2p5",       # AI model
    runTimeoutSeconds=120,          # Max execution time
    cleanup="delete"                # Auto-cleanup
)
| Agent Type | Purpose | System Prompt |
|------------|---------|---------------|
| content_writer_creative | Imaginative, artistic | Rich metaphors, emotional resonance |
| content_writer_funny | Humorous, witty | Jokes, wordplay, relatable humor |
| content_writer_educational | Teaching content | Clear explanations, actionable takeaways |
| content_writer_trendy | Viral content | Trend-aware, culturally relevant |
| content_writer_controversial | Debate-sparking | Hot takes, respectful discourse |
| Agent Type | Purpose | Output |
|------------|---------|--------|
| frontend_developer | React/Vue/Angular | Component structure, state management |
| backend_developer | FastAPI/Flask/Django | API endpoints, auth, models |
| database_architect | Schema design | Tables, indexes, migrations |
| api_designer | REST/GraphQL | OpenAPI specs, rate limits |
| devops_engineer | CI/CD | Docker, K8s, pipelines |
| Agent Type | Purpose | Focus |
|------------|---------|-------|
| code_reviewer | Quality review | Best practices, maintainability |
| security_reviewer | Security scan | Vulnerabilities, threats |
| performance_reviewer | Optimization | Bottlenecks, complexity |
| accessibility_reviewer | WCAG compliance | A11y, screen readers |
| test_engineer | Test coverage | Unit/integration tests |
| Agent Type | Purpose |
|------------|---------|
| documentation_writer | READMEs, API docs, guides |
Agents created specifically for Jake's needs via agent_orchestrator research:
| Agent Type | Purpose | Key Features |
|------------|---------|--------------|
| travel_event_planner | Trip content coordination | Savannah/Atlanta/SD Pride planning, gear checklists, event schedules |
| donut_care_coordinator | Princess Donut management | Feeding tracking, vet reminders, pet sitter coordination, daily updates |
| pup_community_engager | Pup community management | Bluesky/Twitter monitoring, DM triage, authentic pup voice engagement |
| print_project_manager | 3D printing workflow | Model queue, filament tracking, vibecoding integration, print optimization |
| training_assistant | Almac work productivity | Training prep, onboarding, session checklists, material templates |
Total Agent Types: 25
| Agent Type | Purpose | What It Does |
|------------|---------|--------------|
| agent_creator | Designs new AI agents | Creates complete agent definitions with prompts, schemas, examples |
| agent_design_reviewer | Validates agent designs | Reviews quality, completeness, production readiness (scores 0-10) |
| agent_refiner | Improves agent designs | Applies fixes based on review feedback to reach target scores |
| agent_orchestrator | Master coordinator | Plans workflows, spawns agents, coordinates execution, compiles results |
The 4-Agent Hierarchy:
Level 4: USER
           ↓ asks
Level 3: AGENT_ORCHESTRATOR
           ↓ plans, spawns, coordinates
Level 2: Meta Agents (creator, reviewer, refiner)
           ↓ designs, reviews, refines
Level 1: Task Agents (content writers, developers, QA)
           ↓ does work
Level 0: Actual Tasks
Core Agent Types: 20 (16 task + 4 meta); 25 total including the custom agents listed above
Workflow 1: Simple Creation (2 agents)
from ai_orchestrator import (
    RealAIParallelOrchestrator,
    create_meta_agent_workflow
)
orch = RealAIParallelOrchestrator()
# Define agents to create
new_agents = [
    {'name': 'crypto_analyst', 'purpose': 'Analyze crypto trends'},
    {'name': 'content_strategist', 'purpose': 'Plan content calendars'}
]
# Creates: 2 creators + 2 reviewers (4 tasks)
tasks = create_meta_agent_workflow(new_agents)
results = orch.run_parallel(tasks)
Workflow 2: Iterative Refinement (3-agent loop)
# The full 3-agent refinement workflow:
# Creator → Reviewer (scores) → Refiner (fixes) → Reviewer (verifies)
# Repeats until score >= 8.5
agents_to_refine = [
    {'name': 'my_agent', 'current_score': 7.4, 'target': 8.5}
]
# This runs the full loop automatically
results = orch.run_iterative_refinement(agents_to_refine)
# Result: 7.4 → 8.5+ ✅
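The loop described above can be sketched in pure Python. This is a simplified model of the control flow, not the actual run_iterative_refinement implementation: the reviewer and refiner are stand-in callables, whereas the real version spawns an AI session for each step.

```python
from typing import Any, Callable, Dict

def refine_until(design: Any,
                 review: Callable[[Any], float],
                 refine: Callable[[Any, float], Any],
                 target: float = 8.5,
                 max_rounds: int = 5) -> Dict[str, Any]:
    """Review a design, then refine and re-review until it reaches `target`."""
    score = review(design)                # Reviewer scores the initial design
    rounds = 0
    while score < target and rounds < max_rounds:
        design = refine(design, score)    # Refiner applies fixes
        score = review(design)            # Reviewer verifies the new version
        rounds += 1
    return {"design": design, "score": score, "rounds": rounds}
```

The max_rounds cap is a safety valve so a design that never converges cannot loop forever.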
Workflow 3: Orchestrated Mass Creation (autonomous)
# Spawn the orchestrator to handle everything:
# - Plans workflow
# - Spawns all agents
# - Coordinates execution
# - Handles refinements
# - Compiles final report
result = sessions_spawn(
    task="Create 5 new agents and ensure all score 8.5+",
    agent_type='agent_orchestrator',
    runTimeoutSeconds=600
)
# The orchestrator does everything autonomously!
This enables agent bootstrapping - the system creates and improves itself!
@dataclass
class AgentTask:
    agent_type: str        # Type from registry (required)
    task_description: str  # What to do (required)
    input_data: Dict       # Input parameters (optional)
    task_id: str           # Unique ID (auto-generated)
    timeout_seconds: int   # Max time (default: 120)
    output_format: str     # json|markdown|code|text

@dataclass
class AgentResult:
    task_id: str           # Matches AgentTask
    agent_type: str        # Agent that produced this
    status: str            # pending|running|completed|failed
    output: Any            # Generated content (agent-dependent format)
    execution_time: float  # Time taken
    error: str             # Error message if failed
    session_key: str       # Spawned session identifier
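After run_parallel returns, a common first step is to separate completed outputs from failures. A minimal sketch using the fields above (the dataclass here is trimmed to just the fields this helper touches; it is not the skill's full definition):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class AgentResult:         # Trimmed illustration of the fields used below
    task_id: str
    agent_type: str
    status: str            # pending|running|completed|failed
    output: Any = None
    error: str = ""

def partition_results(results):
    """Split results into ({task_id: output}, {task_id: error}) dicts."""
    ok = {r.task_id: r.output for r in results if r.status == "completed"}
    bad = {r.task_id: r.error for r in results if r.status != "completed"}
    return ok, bad
```

This makes it easy to retry only the failed task_ids in a second run_parallel pass.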
from ai_orchestrator import RealAIParallelOrchestrator, create_content_team
orch = RealAIParallelOrchestrator(max_concurrent=5)
tasks = create_content_team("Monday motivation", platform="bluesky")
# This spawns 5 REAL AI agents
results = orch.run_parallel(tasks)
print("Agents spawned! Each is generating content...")
print("Check sessions_list() to see running agents")
from ai_orchestrator import RealAIParallelOrchestrator, create_dev_team
orch = RealAIParallelOrchestrator(max_concurrent=5)
tasks = create_dev_team("TaskManager", ['auth', 'tasks', 'teams'])
# Spawns 5 dev agents in parallel
results = orch.run_parallel(tasks)
# Each agent designs their layer independently
# - Frontend agent designs React components
# - Backend agent designs FastAPI routes
# - Database agent designs schema
# - etc.
from ai_orchestrator import RealAIParallelOrchestrator, create_review_team
code = open('app.py').read()
orch = RealAIParallelOrchestrator(max_concurrent=5)
tasks = create_review_team(code)
# Spawns 5 reviewers simultaneously
results = orch.run_parallel(tasks)
# Each reviews from different angle:
# - Code quality
# - Security
# - Performance
# - Accessibility
# - Test coverage
from ai_orchestrator import (
    RealAIParallelOrchestrator,
    create_meta_agent_workflow
)
orch = RealAIParallelOrchestrator(max_concurrent=6)
# Define new agents to create
new_agents = [
    {
        'name': 'social_media_analyst',
        'purpose': 'Analyze social media performance',
        'domain': 'social media analytics',
        'capabilities': ['engagement analysis', 'trend identification']
    },
    {
        'name': 'bug_hunter',
        'purpose': 'Find bugs in code',
        'domain': 'software QA',
        'capabilities': ['static analysis', 'edge case detection']
    },
    {
        'name': 'api_documenter',
        'purpose': 'Generate API docs',
        'domain': 'technical writing',
        'capabilities': ['endpoint extraction', 'example generation']
    }
]
# Creates 6 tasks: 3 creators + 3 reviewers
tasks = create_meta_agent_workflow(new_agents)
results = orch.run_parallel(tasks)
# Result: 3 complete agent definitions + 3 quality reviews
# All created entirely by AI in parallel!
This is agent bootstrapping - the system creates itself!
Proven Capability: The system has been tested with 20 concurrent agents (10 creators + 10 reviewers) all spawned simultaneously.
from ai_orchestrator import RealAIParallelOrchestrator, AgentTask
orch = RealAIParallelOrchestrator(max_concurrent=10)
# Define 10 new agents to create
new_agents = [
    {'name': 'engagement_optimizer', 'purpose': 'Analyze social media posts',
     'domain': 'social media', 'capabilities': ['analytics', 'optimization']},
    {'name': 'workout_designer', 'purpose': 'Create gym/home workouts',
     'domain': 'fitness', 'capabilities': ['program design', 'adaptation']},
    {'name': 'email_drafter', 'purpose': 'Write professional/personal emails',
     'domain': 'communication', 'capabilities': ['tone adaptation', 'drafting']},
    # ... more agents
]
# Create all 10 agents + 10 reviewers = 20 parallel agents!
# Create all 10 agents + 10 reviewers = 20 parallel agents!
all_tasks = []
for agent in new_agents:
    # Add creator
    all_tasks.append(AgentTask(
        agent_type='agent_creator',
        task_description=f"Design agent: {agent['name']}",
        input_data=agent,
        timeout_seconds=180
    ))
    # Add reviewer
    all_tasks.append(AgentTask(
        agent_type='agent_design_reviewer',
        task_description=f"Review {agent['name']}",
        input_data={'agent_name': agent['name']},
        timeout_seconds=120
    ))

# SPAWN 20 AGENTS SIMULTANEOUSLY
results = orch.run_parallel(all_tasks)
Real-World Results (2026-02-08 Test):
Practical Limit: ~20-50 concurrent agents (depends on system resources)
See: examples/mass_agent_creation.py for full implementation.
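One simple way to respect that practical limit is to run tasks in fixed-size batches, waiting for each batch to finish before spawning the next. This is an illustrative sketch of the idea, not necessarily how the orchestrator implements max_concurrent; run_batch here stands in for whatever function spawns and awaits one batch.

```python
def batches(tasks, max_concurrent):
    """Yield successive slices of at most `max_concurrent` tasks."""
    for i in range(0, len(tasks), max_concurrent):
        yield tasks[i:i + max_concurrent]

def run_in_batches(tasks, run_batch, max_concurrent=20):
    """Run all tasks, at most `max_concurrent` at a time, preserving order."""
    results = []
    for batch in batches(tasks, max_concurrent):
        results.extend(run_batch(batch))  # spawn + await one batch
    return results
```

With 45 tasks and a limit of 20, this yields batches of 20, 20, and 5, keeping concurrent session count bounded.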
Agents return their output in their session transcript. To collect:
# After spawning, poll for results
import json
from tools import sessions_list, sessions_history

# Check which agents have completed
sessions = sessions_list(agent_id_pattern="agent_*")
for session in sessions:
    if session['status'] == 'completed':
        history = sessions_history(session['sessionKey'])
        # Parse JSON from final assistant message
        output = json.loads(history[-1]['content'])
Note: Full result collection is implemented in the orchestrator.
Results are available via results attribute after spawning.
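Putting the polling pattern together, a small collector might look like the sketch below. The session dict keys (status, sessionKey) and the history shape follow the snippet above; the two session functions are injected as parameters so the sketch stays runnable outside OpenClaw, where you would pass sessions_list and sessions_history directly.

```python
import json
import time

def collect_outputs(list_sessions, get_history,
                    pattern="agent_*", poll_interval=2.0, max_polls=30):
    """Poll matching sessions until all complete, parsing JSON from each
    session's final message. Returns {sessionKey: parsed_output}."""
    outputs = {}
    for _ in range(max_polls):
        sessions = list_sessions(agent_id_pattern=pattern)
        for s in sessions:
            key = s["sessionKey"]
            if s["status"] == "completed" and key not in outputs:
                history = get_history(key)
                outputs[key] = json.loads(history[-1]["content"])
        if all(s["status"] == "completed" for s in sessions):
            break                     # every agent has finished
        time.sleep(poll_interval)     # wait before the next poll
    return outputs
```

The max_polls cap bounds total wait time at roughly max_polls × poll_interval seconds if some agent never completes.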
Previous implementations simulated agents with templates and subprocess workers, where the tools module is unavailable. sessions_spawn is the solution: it creates real AI sub-sessions from within the agent's runtime context.
~/.openclaw/skills/parallel-agents/
├── README.md                        # Quick start guide
├── SKILL.md                         # Complete documentation
├── USAGE-GUIDE.md                   # Practical examples and patterns
├── ai_orchestrator.py               # Core orchestrator code
├── helpers.py                       # Auto-retry helper functions
└── examples/                        # Working examples
    ├── README.md                    # Examples documentation
    └── simple_parallel_research.py  # Simple example
Cause: Not running inside OpenClaw session
Fix: Run your script inside OpenClaw
Cause: Outside OpenClaw environment
Fix: The sessions tool is only available inside OpenClaw
Cause: OpenClaw gateway not running
Fix: Start gateway: openclaw gateway start
No more simulation. No more templates. When you run this inside OpenClaw:
The agents don't just execute code; they think, create, and analyze independently using genuine AI cognition.
Welcome to actual parallel AI. 🚀
Built for OpenClaw using real sessions_spawn technology.
Part of the OpenClaw skill ecosystem.
Honest Edition: No simulation, just real AI.
Generated Mar 1, 2026
Agencies can use this skill to generate diverse content pieces simultaneously, such as blog posts, social media captions, and email newsletters, by spawning specialized writer agents for each client project. This reduces turnaround time and allows handling multiple campaigns in parallel.
Development teams can parallelize tasks like code review, documentation writing, and bug analysis by spawning agents for each function, speeding up project cycles. For example, one agent writes API docs while another reviews pull requests.
Firms can deploy agents to research different market segments or competitors concurrently, aggregating insights faster. Each agent can focus on a specific topic, such as consumer trends or product analysis, with results compiled automatically.
Creators can generate lesson plans, quizzes, and explanatory content in parallel for various subjects or grade levels, using agents tailored for educational writing. This supports rapid development of online courses or study materials.
Businesses can spawn agents to handle multiple customer inquiries or generate FAQ responses simultaneously, improving response times. Agents can be configured for different query types, like technical issues or billing questions.
Offer a subscription-based platform where users access parallel agent workflows for content creation or data analysis, charging per task or monthly fee. Revenue comes from tiered plans based on concurrent agent limits and features.
Provide managed services using the skill to deliver projects like marketing campaigns or research reports for clients, billing per project or hourly. Revenue is generated through service contracts and retainer agreements.
Develop a free tool with basic parallel agent capabilities, monetizing through premium features like advanced agent types, higher concurrency, or integration with other AI models. Revenue sources include upgrades and enterprise licenses.
💬 Integration Tip
Ensure the OpenClaw gateway is running and call sessions_spawn directly from within an agent session, not as a standalone script, to avoid tool availability issues.