reprompter

Transform messy prompts into well-structured, effective prompts, single or multi-agent. Use when: "reprompt", "reprompt this", "clean up this prompt", "stru...

Install via ClawdBot CLI:

```
clawdbot install AytuncYildizli/reprompter
```

Your prompt sucks. Let's fix that. Single prompts or full agent teams: one skill, two modes.
| Mode | Trigger | What happens |
|------|---------|-------------|
| Single | "reprompt this", "clean up this prompt" | Interview → structured prompt → score |
| Repromptception | "reprompter teams", "repromptception", "run with quality", "smart run", "smart agents" | Plan team → reprompt each agent → tmux Agent Teams → evaluate → retry |
Auto-detection: if the task mentions 2+ systems, "audit", or "parallel", ask: "This looks like a multi-agent task. Want to use Repromptception mode?"
Definition: "2+ systems" means at least two distinct technical domains that can be worked on independently. Examples: frontend + backend, API + database, mobile app + backend, infrastructure + application code, security audit + cost audit.
Clarification: RePrompter does support code-related tasks (feature, bugfix, API, refactor) by generating better prompts. It does not directly apply code changes in Single mode. Direct code execution belongs to coding-agent unless Repromptception execution mode is explicitly requested.
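The auto-detection heuristic above can be sketched as follows. The `DOMAINS` keyword list is an illustrative assumption, not the skill's actual detection rules:

```python
# Hedged sketch: flag a task as multi-agent when it mentions 2+ distinct
# systems, "audit", or "parallel". The domain keywords are assumptions.
DOMAINS = ["frontend", "backend", "api", "database", "mobile",
           "infrastructure", "security", "cost"]

def looks_multi_agent(task: str) -> bool:
    text = task.lower()
    mentioned = [d for d in DOMAINS if d in text]
    return len(mentioned) >= 2 or "audit" in text or "parallel" in text
```

When this returns True, ask the user whether to switch to Repromptception mode rather than switching silently.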
Use AskUserQuestion with clickable options (2-5 questions max). After the interview completes, IMMEDIATELY:
❌ WRONG: Ask interview questions → stop
✅ RIGHT: Ask interview questions → generate prompt → show score → offer to execute
Ask via AskUserQuestion. Max 5 questions total.
Standard questions (priority order; drop lower ones if task-specific questions are needed):
Task-specific questions (MANDATORY for compound prompts; replace lower-priority standard questions):
| Signal | Suggested mode |
|--------|---------------|
| 2+ distinct systems (e.g., frontend + backend, API + DB, mobile + backend) | Team (Parallel) |
| Pipeline (fetch → transform → deploy) | Team (Sequential) |
| Single file/component | Single Agent |
| "audit", "review", "analyze" across areas | Team (Parallel) |
Enable when ALL true:
Force interview if ANY present: compound tasks ("and", "plus"), state management ("track", "sync"), vague modifiers ("better", "improved"), integration work ("connect", "combine", "sync"), broad scope nouns after any action verb, ambiguous pronouns ("it", "this", "that" without clear referent).
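A minimal sketch of the force-interview check, where any one signal is enough. The regex patterns below are illustrative approximations of the signals listed above, not the skill's exact rules:

```python
import re

# Hypothetical signal patterns; one match forces the interview.
SIGNALS = [
    r"\b(and|plus)\b",          # compound tasks
    r"\b(track|sync)\b",        # state management / integration
    r"\b(better|improved)\b",   # vague modifiers
    r"\b(connect|combine)\b",   # integration work
    r"\b(it|this|that)\s*$",    # trailing ambiguous pronoun
]

def needs_interview(task: str) -> bool:
    text = task.lower()
    return any(re.search(pattern, text) for pattern in SIGNALS)
```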
Detect task type from input. Each type has a dedicated template in docs/references/:
| Type | Template | Use when |
|------|----------|----------|
| Feature | feature-template.md | New functionality (default fallback) |
| Bugfix | bugfix-template.md | Debug + fix |
| Refactor | refactor-template.md | Structural cleanup |
| Testing | testing-template.md | Test writing |
| API | api-template.md | Endpoint/API work |
| UI | ui-template.md | UI components |
| Security | security-template.md | Security audit/hardening |
| Docs | docs-template.md | Documentation |
| Content | content-template.md | Blog posts, articles, marketing copy |
| Research | research-template.md | Analysis/exploration |
| Multi-Agent | swarm-template.md | Multi-agent coordination |
| Team Brief | team-brief-template.md | Team orchestration brief |
Priority (most specific wins): api > security > ui > testing > bugfix > refactor > content > docs > research > feature. For multi-agent tasks, use swarm-template for the team brief and the type-specific template for each agent's sub-prompt.
How it works: Read the matching template from docs/references/{type}-template.md, then fill it with task-specific context. Templates are NOT loaded into context by default; they are only read on demand when generating a prompt. If the template file is not found, fall back to the Base XML Structure below.
To add a new task type: create docs/references/{type}-template.md following the XML structure below, then add it to the table above.
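The type detection and "most specific wins" priority above can be sketched like this. The keyword lists are illustrative assumptions; only the priority order comes from the skill:

```python
# Priority order from the skill; keywords per type are assumptions.
PRIORITY = ["api", "security", "ui", "testing", "bugfix", "refactor",
            "content", "docs", "research", "feature"]

KEYWORDS = {
    "api": ["endpoint", "api", "rest", "graphql"],
    "security": ["security", "vulnerability", "audit", "harden"],
    "ui": ["component", "layout", "css"],
    "testing": ["test", "coverage", "spec"],
    "bugfix": ["bug", "fix", "error", "crash"],
    "refactor": ["refactor", "cleanup", "restructure"],
    "content": ["blog", "article", "marketing"],
    "docs": ["documentation", "docs", "readme"],
    "research": ["research", "analyze", "explore"],
}

def detect_task_type(prompt: str) -> str:
    text = prompt.lower()
    for task_type in PRIORITY:          # most specific wins
        if any(kw in text for kw in KEYWORDS.get(task_type, [])):
            return task_type
    return "feature"                    # default fallback
```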
All templates follow this core structure (8 required tags). Use as fallback if no specific template matches:
Exception: team-brief-template.md uses Markdown format for orchestration briefs. This is intentional; see the template header for the rationale.
```xml
<role>{Expert role matching task type and domain}</role>
<context>
- Working environment, frameworks, tools
- Available resources, current state
</context>
<task>{Clear, unambiguous single-sentence task}</task>
<motivation>{Why this matters: priority, impact}</motivation>
<requirements>
- {Specific, measurable requirement 1}
- {At least 3-5 requirements}
</requirements>
<constraints>
- {What NOT to do}
- {Boundaries and limits}
</constraints>
<output_format>{Expected format, structure, length}</output_format>
<success_criteria>
- {Testable condition 1}
- {Measurable outcome 2}
</success_criteria>
```
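A minimal filled-in instance of the base structure. Every specific below (file names, framework, criteria) is an illustrative assumption, not from a real task:

```xml
<role>Senior backend engineer experienced with Node.js and Express</role>
<context>
- Express 4 API, TypeScript, Prisma ORM
- Auth checks currently live inline in src/routes/auth.ts
</context>
<task>Extract the authentication logic from src/routes/auth.ts into reusable middleware.</task>
<motivation>Three routes duplicate the same checks; a bug fixed in one was missed in the others.</motivation>
<requirements>
- New middleware in src/middleware/auth.ts
- All three existing routes switched to the middleware
- No change to response shapes or status codes
</requirements>
<constraints>
- Do not modify the token format
- Do not touch unrelated routes
</constraints>
<output_format>Unified diff plus a one-paragraph summary</output_format>
<success_criteria>
- Existing auth tests pass unchanged
- No inline token checks remain in route files
</success_criteria>
```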
Auto-detect tech stack from current working directory ONLY:
package.json, tsconfig.json, prisma/schema.prisma, etc.

Raw task in → quality output out. Every agent gets a reprompted prompt.
Phase 1: Score raw prompt, plan team, define roles (YOU do this, ~30s)
Phase 2: Write XML-structured prompt per agent (YOU do this, ~2min)
Phase 3: Launch tmux Agent Teams (AUTOMATED)
Phase 4: Read results, score, retry if needed (YOU do this)
Key insight: The reprompt phase costs ZERO extra tokens; YOU write the prompts, not another AI.
/tmp/rpt-brief-{taskname}.md (use unique tasknames to avoid collisions between concurrent runs)

For EACH agent:
- Template: the matching one from docs/references/ (or use the base XML structure)
- Role: a specific expert title for THIS agent's domain
- Context: exact file paths (verified with ls), plus what OTHER agents handle (boundary awareness)
- Requirements: at least 5 specific, independently verifiable requirements
- Constraints: scope boundary with other agents, read-only vs write, file/directory boundaries
- Output format: exact path /tmp/rpt-{taskname}-{agent-domain}.md, required sections
- Success criteria: minimum N findings, file:line references, no hallucinated paths

Score each prompt; target 8+/10. If under 8, add more context/constraints.
Write all prompts to /tmp/rpt-agent-prompts-{taskname}.md
```bash
# 1. Start Claude Code with Agent Teams
tmux new-session -d -s {session} "cd /path/to/workdir && CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 claude --model opus"
# placeholders:
# - {session}: unique tmux session name (example: rpt-auth-audit)
# - /path/to/workdir: absolute repository path for the target project (example: /tmp/reprompter-check)

# 2. Wait for startup
sleep 12

# 3. Send prompt: MUST use -l (literal), Enter sent SEPARATELY
# IMPORTANT: include POLLING RULES to prevent the lead TaskList loop bug
tmux send-keys -t {session} -l 'Create an agent team with N teammates. CRITICAL: Use model opus for ALL tasks.
POLLING RULES - YOU MUST FOLLOW THESE:
- After sending tasks, poll TaskList at most 10 times
- If ALL tasks show "done" status, IMMEDIATELY stop polling
- After 3 consecutive TaskList calls showing the same status, STOP polling regardless
- Once you stop polling: read the output files, then write synthesis
- DO NOT call TaskList more than 20 times total under any circumstances
Teammate 1 (ROLE): TASK. Write output to /tmp/rpt-{taskname}-{domain}.md. ... After all complete, synthesize into /tmp/rpt-{taskname}-final.md'
sleep 0.5
tmux send-keys -t {session} Enter

# 4. Monitor (poll every 15-30s)
tmux capture-pane -t {session} -p -S -100

# 5. Verify outputs
ls -la /tmp/rpt-{taskname}-*.md

# 6. Cleanup
tmux kill-session -t {session}
```
⚠️ WARNING: The default teammate model is HAIKU unless explicitly overridden. Always set --model opus in both the CLI launch command and the team prompt.
| Rule | Why |
|------|-----|
| Always send-keys -l (literal flag) | Without it, special chars break |
| Enter sent SEPARATELY | Combined fails for multiline |
| sleep 0.5 between text and Enter | Buffer processing time |
| sleep 12 after session start | Claude Code init time |
| --model opus in CLI AND prompt | Default teammate = HAIKU |
| Each agent writes own file | Prevents file conflicts |
| Unique taskname per run | Prevents collisions between concurrent sessions |
Accept checklist (use alongside the score; all must pass):
Delta prompt pattern:
Previous attempt scored 5/10.
✅ Good: Sections 1-3 complete
❌ Missing: Section 4 empty, line references wrong
This retry: Focus on gaps. Verify all line numbers.
| Team size | Time | Cost |
|-----------|------|------|
| 2 agents | ~5-8 min | ~$1-2 |
| 3 agents | ~8-12 min | ~$2-3 |
| 4 agents | ~10-15 min | ~$2-4 |
Estimates cover Phase 3 (execution) only. Add ~3 minutes for Phases 1-2 and ~5-8 minutes per retry. Each agent uses ~25-70% of its 200K-token context window.
When tmux/Claude Code is unavailable but running inside OpenClaw:
sessions_spawn(task: "<per-agent prompt>", model: "opus", label: "rpt-{role}")
Note: sessions_spawn is an OpenClaw-specific tool. Not available in standalone Claude Code.
No tmux or OpenClaw? Run agents sequentially: execute each agent's prompt one at a time in the same Claude Code session. Slower but works everywhere.
Always show before/after metrics:
| Dimension | Weight | Criteria |
|-----------|--------|----------|
| Clarity | 20% | Task unambiguous? |
| Specificity | 20% | Requirements concrete? |
| Structure | 15% | Proper sections, logical flow? |
| Constraints | 15% | Boundaries defined? |
| Verifiability | 15% | Success measurable? |
| Decomposition | 15% | Work split cleanly? (Score 10 if task is correctly atomic) |
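The weighted rubric above can be computed directly; the weights are taken from the table, and each dimension is scored 0-10:

```python
# Weights from the scoring table (sum to 1.0).
WEIGHTS = {
    "clarity": 0.20, "specificity": 0.20, "structure": 0.15,
    "constraints": 0.15, "verifiability": 0.15, "decomposition": 0.15,
}

def overall(scores: dict[str, float]) -> float:
    # Weighted sum of per-dimension scores, rounded to 2 decimals.
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 2)
```

Feeding in the "Before" column of the example below yields 1.45; the "After" column yields 8.35, matching the Overall row.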
| Dimension | Before | After | Change |
|-----------|--------|-------|--------|
| Clarity | 3/10 | 9/10 | +200% |
| Specificity | 2/10 | 8/10 | +300% |
| Structure | 1/10 | 10/10 | +900% |
| Constraints | 0/10 | 7/10 | new |
| Verifiability | 2/10 | 8/10 | +300% |
| Decomposition | 0/10 | 8/10 | new |
| **Overall** | **1.45/10** | **8.35/10** | **+476%** |
Bias note: Scores are self-assessed. Treat as directional indicators, not absolutes.
For both modes, RePrompter supports post-execution evaluation:
Prompts should be less prescriptive about HOW. Focus on WHAT: clear task, requirements, constraints, success criteria. Let the model's own reasoning handle execution strategy.
Example: Instead of "Step 1: read the file, Step 2: extract the function", write "Extract the authentication logic from auth.ts into a reusable middleware. Requirements: ..."
Prefill assistant response start to enforce format:
- `{` forces JSON output
- `## Analysis` skips preamble, starts with content
- `| Column |` forces table format

Generated prompts should COMPLEMENT runtime context (CLAUDE.md, skills, MCP tools), not duplicate it. Before generating:
Keep generated prompts under ~2K tokens for single mode, ~1K per agent for Repromptception. Longer prompts waste context window without improving quality. If a prompt exceeds budget, split into phases or move detail into constraints.
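A rough budget check, assuming the common ~4 characters-per-token heuristic (an approximation, not a real tokenizer):

```python
# Hedged sketch: estimate tokens as chars/4 and compare against the budget.
def within_budget(prompt: str, max_tokens: int = 2000) -> bool:
    est_tokens = len(prompt) / 4   # crude approximation, not a real tokenizer
    return est_tokens <= max_tokens
```

Use `max_tokens=2000` for single mode and `max_tokens=1000` per agent in Repromptception.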
Always include explicit permission for the model to express uncertainty rather than fabricate:
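One illustrative phrasing for such a clause (the exact wording is an example to adapt, not a fixed snippet from the skill):

```
If you are not certain about a file path, API signature, or behavior,
say so explicitly and state what you would check, instead of guessing.
```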
Note: CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS is an experimental flag that may change in future Claude Code versions. Check Claude Code docs for current status.
In ~/.claude/settings.json:

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  },
  "preferences": {
    "teammateMode": "tmux",
    "model": "opus"
  }
}
```
| Setting | Values | Effect |
|---------|--------|--------|
| CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS | "1" | Enables agent team spawning |
| teammateMode | "tmux" / "default" | tmux: each teammate gets a visible split pane. default: teammates run in background |
| model | "opus" / "sonnet" | Teammates default to Haiku. Always set model: opus explicitly in your prompt; do not rely on runtime defaults. |
Rough crypto dashboard prompt: 1.6/10 → 9.0/10 (+462%)
3 Opus agents, sequential pipeline (PromptAnalyzer → PromptEngineer → QualityAuditor):
| Metric | Value |
|--------|-------|
| Original score | 2.15/10 |
| After Repromptception | 9.15/10 (+326%) |
| Quality audit | PASS (99.1%) |
| Weaknesses found → fixed | 24/24 (100%) |
| Cost | $1.39 |
| Time | ~8 minutes |
Same audit task, 4 Opus agents:
| Metric | Raw | Repromptception | Delta |
|--------|-----|----------------|-------|
| CRITICAL findings | 7 | 14 | +100% |
| Total findings | ~40 | 104 | +160% |
| Cost savings identified | $377/mo | $490/mo | +30% |
| Token bloat found | 45K | 113K | +151% |
| Cross-validated findings | 0 | 5 | new |
See TESTING.md for 13 verification scenarios + anti-pattern examples.
Templates may add domain-specific tags beyond the 8 required base tags. Always include all base tags first.
| Extended Tag | Used In | Purpose |
|-------------|---------|---------|
| | bugfix | What the user sees, error messages |
| | bugfix | Systematic debugging steps |
| | api | Endpoint specifications |
| | ui | Component props, states, layout |
| | swarm | Agent role definitions |
| | swarm | Work split per agent |
| | swarm | Inter-agent handoff rules |
| | research | Specific questions to answer |
| | research | Research approach and methods |
| | research | Reasoning notes space (non-sensitive, concise) |
| | refactor | Before state of the code |
| | refactor | Desired after state |
| | testing | What needs test coverage |
| | security | Threat landscape and vectors |
| | docs | Document organization |
| | docs | Source material to reference |
Generated Mar 1, 2026
A development team needs to create detailed implementation prompts for a new authentication feature involving frontend UI components, backend API endpoints, and database schema changes. RePrompter's multi-agent mode can generate parallel prompts for each technical domain, ensuring all team members receive clear, structured instructions with quality scoring.
A marketing agency needs to generate coordinated content prompts for blog posts, social media graphics, and email newsletters for a product launch campaign. RePrompter can structure multi-agent prompts that ensure consistent messaging across different content formats while maintaining brand voice and campaign objectives.
A financial institution needs to audit transaction records, security protocols, and regulatory documentation simultaneously. RePrompter's parallel team mode can generate specialized prompts for each audit area, ensuring comprehensive coverage while maintaining audit trail documentation and quality standards.
An online retailer needs to integrate payment processing, inventory management, and customer relationship management systems. RePrompter can generate sequential prompts for each integration phase, ensuring dependencies are properly managed and each system receives clear implementation instructions.
An educational institution needs to create structured prompts for developing video lectures, assessment questions, and interactive exercises for a new online course. RePrompter can generate coordinated prompts that ensure learning objectives are consistently addressed across different content types.
Offer RePrompter as a cloud-based service with tiered subscription plans based on usage volume, team size, and advanced features like multi-agent orchestration. Include enterprise plans with API access, custom templates, and dedicated support for large organizations.
Provide professional services to help organizations implement RePrompter within their workflows, including custom template development, team training, and integration with existing AI tools. Offer ongoing optimization and prompt engineering support contracts.
Create a marketplace where users can buy, sell, and share specialized prompt templates for different industries and use cases. Generate revenue through template sales commissions, premium template licensing fees, and certification programs for template creators.
💬 Integration Tip
Integrate RePrompter into existing development workflows by using its API to automatically generate structured prompts from JIRA tickets or GitHub issues, ensuring consistency across team members and reducing prompt engineering overhead.