# agent-development

Design and build custom Claude Code agents with effective descriptions, tool access patterns, and self-documenting prompts. Covers Task tool delegation, model selection, memory limits, and declarative instruction design.

Use when: creating custom agents, designing agent descriptions for auto-delegation, troubleshooting agent memory issues, or building agent pipelines.
Install via ClawdBot CLI:

```bash
clawdbot install Veeramanikandanr48/agent-development
```

Build effective custom agents for Claude Code with proper delegation, tool access, and prompt design.
The description field determines whether Claude will automatically delegate tasks.
```yaml
---
name: agent-name
description: |
  [Role] specialist. MUST BE USED when [specific triggers].
  Use PROACTIVELY for [task category].
  Keywords: [trigger words]
tools: Read, Write, Edit, Glob, Grep, Bash
model: sonnet
---
```
| Weak (won't auto-delegate) | Strong (auto-delegates) |
|---------------------------|-------------------------|
| "Analyzes screenshots for issues" | "Visual QA specialist. MUST BE USED when analyzing screenshots. Use PROACTIVELY for visual QA." |
| "Runs Playwright scripts" | "Playwright specialist. MUST BE USED when running Playwright scripts. Use PROACTIVELY for browser automation." |
Key phrases: "MUST BE USED when" and "Use PROACTIVELY for" are the strongest auto-delegation triggers.

Explicit invocation via the Task tool with `subagent_type: "agent-name"` always works. A session restart is required after creating or modifying agents.
If an agent doesn't need Bash, don't give it Bash.
| Agent needs to... | Give tools | Don't give |
|-------------------|------------|------------|
| Create files only | Read, Write, Edit, Glob, Grep | Bash |
| Run scripts/CLIs | Read, Write, Edit, Glob, Grep, Bash | — |
| Read/audit only | Read, Glob, Grep | Write, Edit, Bash |
Why? Given Bash, models default to `cat > file << 'EOF'` heredocs instead of the Write tool. Each bash command requires approval, causing dozens of prompts per agent run.
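To see the problem concretely, here is the heredoc pattern a Bash-enabled agent tends to fall back to; every such command is its own approval prompt (the file path and content are illustrative):

```shell
# Heredoc file creation - what Bash-enabled agents tend to do instead
# of calling the Write tool (path and content are illustrative)
mkdir -p build
cat > build/index.html << 'EOF'
<h1>Hello</h1>
EOF
```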
Instead of restricting Bash, allowlist safe commands in .claude/settings.json:
```json
{
  "permissions": {
    "allow": [
      "Write", "Edit", "WebFetch(domain:*)",
      "Bash(cd *)", "Bash(cp *)", "Bash(mkdir *)", "Bash(ls *)",
      "Bash(cat *)", "Bash(head *)", "Bash(tail *)", "Bash(grep *)",
      "Bash(diff *)", "Bash(mv *)", "Bash(touch *)", "Bash(file *)"
    ]
  }
}
```
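A malformed settings file silently breaks the allowlist, so it is worth validating after each edit. A minimal sketch, assuming `python3` is available and the file lives at the project-root path shown above:

```shell
# Validate a settings file before relying on its allowlist
validate_settings() {
  python3 -m json.tool "$1" > /dev/null 2>&1 \
    && echo "valid JSON" \
    || echo "parse error"
}
validate_settings .claude/settings.json
```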
Don't downgrade quality to work around issues - fix root causes instead.
| Model | Use For |
|-------|---------|
| Opus | Creative work (page building, design, content) - quality matters |
| Sonnet | Most agents - content, code, research (default) |
| Haiku | Only script runners where quality doesn't matter |
Add to `~/.bashrc` or `~/.zshrc`:

```bash
export NODE_OPTIONS="--max-old-space-size=16384"
```

Increases the Node.js heap from 4GB to 16GB.
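If an installer or script adds this line for you, an idempotent append avoids stacking duplicates across runs. A sketch, assuming `~/.bashrc` is the rc file in use:

```shell
# Append the heap bump to the shell rc file only if it is not
# already there (rc path is an assumption; use ~/.zshrc for zsh)
rc="$HOME/.bashrc"
grep -qs 'max-old-space-size' "$rc" || \
  echo 'export NODE_OPTIONS="--max-old-space-size=16384"' >> "$rc"
```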
| Agent Type | Max Parallel | Notes |
|------------|--------------|-------|
| Any agents | 2-3 | Context accumulates; batch then pause |
| Heavy creative (Opus) | 1-2 | Uses more memory |
Apply with `source ~/.bashrc` or restart the terminal. For a one-off session:

```bash
NODE_OPTIONS="--max-old-space-size=16384" claude
```

Always prefer Task sub-agents over remote API calls.
| Aspect | Remote API Call | Task Sub-Agent |
|--------|-----------------|----------------|
| Tool access | None | Full (Read, Grep, Write, Bash) |
| File reading | Must pass all content in prompt | Can read files iteratively |
| Cross-referencing | Single context window | Can reason across documents |
| Decision quality | Generic suggestions | Specific decisions with rationale |
| Output quality | ~100 lines typical | 600+ lines with specifics |
```javascript
// ❌ WRONG - Remote API call
const response = await fetch('https://api.anthropic.com/v1/messages', {...})

// ✅ CORRECT - Use Task tool
// Invoke Task with subagent_type: "general-purpose"
```
Describe what to accomplish, not how to use tools.
Imperative (tells the agent *how*):

### Check for placeholders

```bash
grep -r "PLACEHOLDER:" build/*.html
```

Declarative (tells the agent *what*):
### Check for placeholders
Search all HTML files in build/ for:
- PLACEHOLDER: comments
- TODO or TBD markers
- Template brackets like [Client Name]
Any match = incomplete content.
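For reference, the declarative spec above boils down to a check like the one below; the agent decides how to run it, which is why the prompt itself should stay declarative (marker patterns are taken from the example):

```shell
# Returns nonzero if any HTML file under the given directory still
# contains placeholder markers (patterns from the example above)
check_placeholders() {
  ! grep -rEq 'PLACEHOLDER:|TODO|TBD|\[Client Name\]' --include='*.html' "$1"
}
```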
| Include | Skip |
|---------|------|
| Task goal and context | Explicit bash/tool commands |
| Input file paths | "Use X tool to..." |
| Output file paths and format | Step-by-step tool invocations |
| Success/failure criteria | Shell pipeline syntax |
| Blocking checks (prerequisites) | Micromanaged workflows |
| Quality checklists | |
"Agents that won't have your context must be able to reproduce the behaviour independently."
Every improvement must be encoded into the agent's prompt, not left as implicit knowledge.
| Discovery | Where to Capture |
|-----------|------------------|
| Bug fix pattern | Agent's "Corrections" or "Common Issues" section |
| Quality requirement | Agent's "Quality Checklist" section |
| File path convention | Agent's "Output" section |
| Tool usage pattern | Agent's "Process" section |
| Blocking prerequisite | Agent's "Blocking Check" section |
Before completing any agent improvement, verify the prompt avoids these anti-patterns:
| Anti-Pattern | Why It Fails |
|--------------|--------------|
| "As we discussed earlier..." | No prior context exists |
| Relying on files read during dev | Agent may not read same files |
| Assuming knowledge from errors | Agent won't see your debugging |
| "Just like the home page" | Agent hasn't built home page |
Effective agent prompts include:
```markdown
## Your Role
[What the agent does]

## Blocking Check
[Prerequisites that must exist]

## Input
[What files to read]

## Process
[Step-by-step with encoded learnings]

## Output
[Exact file paths and formats]

## Quality Checklist
[Verification steps including learned gotchas]

## Common Issues
[Patterns discovered during development]
```
When inserting a new agent into a numbered pipeline (e.g., HTML-01 → HTML-05 → HTML-11):
| Must Update | What |
|-------------|------|
| New agent | "Workflow Position" diagram + "Next" field |
| Predecessor agent | Its "Next" field to point to new agent |
Common bug: New agent is "orphaned" because predecessor still points to old next agent.
Verification:

```bash
grep -n "Next:.*→\|Then.*runs next" .claude/agents/*.md
```
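The predecessor update from the table can be sketched with `sed`; the agent file and field names below follow the HTML-01 → HTML-05 → HTML-11 example and are assumptions to adapt to your own naming:

```shell
# Repoint the predecessor's "Next" field at the newly inserted agent
# (file path and agent names follow the example pipeline)
repoint_next() {  # repoint_next <agent-file> <old-next> <new-next>
  sed -i "s/Next: $2/Next: $3/" "$1"
}
```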
Best use case: Tasks that are repetitive but require judgment.
Example: Auditing 70 skills manually = tedious. But each audit needs intelligence (check docs, compare versions, decide what to fix). Perfect for parallel agents with clear instructions.
Not good for: one-off tasks, or purely mechanical work a script could handle without judgment.
```markdown
For each [item]:
1. Read [source file]
2. Verify with [external check - npm view, API call, etc.]
3. Check [authoritative source]
4. Score/evaluate
5. FIX issues found   <- Critical instruction
```
Orchestration workflow:
1. ME: Launch 2-3 parallel agents with identical prompt, different item lists
2. AGENTS: Work in parallel (read → verify → check → edit → report)
3. AGENTS: Return structured reports (score, status, fixes applied, files modified)
4. ME: Review changes (git status, spot-check diffs)
5. ME: Commit in batches with meaningful changelog
6. ME: Push and update progress tracking
Why agents don't commit: keeping commits with the human allows review, batching, and a clean commit history.
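Steps 4-5 of the workflow above (human review, then a batched commit) can be sketched as follows; the batch directory and commit message are illustrative:

```shell
# Review what the agents changed in one batch, then commit it with a
# meaningful changelog message (paths and messages are illustrative)
review_and_commit() {  # review_and_commit <batch-dir> <message>
  git status --short "$1"   # spot-check what was modified
  git add "$1"              # stage this batch only
  git commit -m "$2"        # one meaningful commit per batch
}
```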
Good fit:
Bad fit:
```yaml
---
name: my-agent
description: |
  [Role] specialist. MUST BE USED when [triggers].
  Use PROACTIVELY for [task category].
  Keywords: [trigger words]
tools: Read, Write, Edit, Glob, Grep, Bash
model: sonnet
---
```
Allowlist safe commands in `.claude/settings.json`. Memory fix:

```bash
export NODE_OPTIONS="--max-old-space-size=16384"
source ~/.bashrc && claude
```
Generated Mar 1, 2026
An e-commerce company wants to build a custom agent to automate product listing updates and inventory management. This involves creating an agent with strong delegation triggers for tasks like data entry and quality checks, using the Write and Edit tools for file modifications, and ensuring declarative instructions for consistency across team members.
A software development firm needs an agent to automate code reviews and run tests in CI/CD pipelines. The agent requires a strong description pattern for auto-delegation on code analysis tasks, tool access limited to Read, Grep, and Bash for script execution, and self-documenting prompts to capture bug fix patterns for future sessions.
A marketing agency aims to create an agent for generating and optimizing web content with SEO best practices. This involves using the Opus model for creative quality, declarative instructions for content guidelines, and memory limit fixes to handle large document processing without crashes during batch operations.
A financial institution seeks an agent to analyze transaction data and generate compliance reports. The agent must have a strong trigger pattern for data auditing tasks, tool access restricted to Read, Glob, and Grep for security, and integration with Task sub-agents over remote APIs to ensure high-quality, cross-referenced outputs.
An online education platform wants an agent to curate and update learning materials. This requires designing the agent with proactive use keywords for content management, using the Sonnet model for balanced quality, and encoding file path conventions in the prompt to maintain consistency across multiple content creators.
Offer pre-built agent templates with strong description patterns and tool configurations as a subscription service. Customers can customize these templates for their specific use cases, reducing development time and ensuring best practices in delegation and memory management.
Provide expert services to design and implement custom AI agents for enterprises. This includes creating declarative prompts, optimizing tool access, and fixing memory issues, with revenue generated through project-based fees or retainer agreements for ongoing support and improvements.
Develop and sell training courses and certifications on agent development best practices. Cover topics like delegation mechanisms, model selection, and self-documentation principles, targeting developers and businesses looking to upskill their teams in AI agent deployment.
💬 Integration Tip
Integrate this skill by starting with declarative prompts for clear task goals, using Task sub-agents over remote APIs to leverage full tool access, and applying memory limit fixes to prevent crashes during parallel agent runs.