# agent-self-governance

Self-governance protocol for autonomous agents: WAL (Write-Ahead Log), VBR (Verify Before Reporting), ADL (Anti-Divergence Limit), and VFM (Value-For-Money)…
Install via ClawdBot CLI:
clawdbot install bowen31337/agent-self-governance

Five protocols that prevent agent failure modes: losing context, false completion claims, persona drift, wasteful spending, and infrastructure amnesia.
## WAL (Write-Ahead Log)

Rule: Write before you respond. If something is worth remembering, WAL it first.
| Trigger | Action Type | Example |
|---------|------------|---------|
| User corrects you | correction | "No, use Podman not Docker" |
| Key decision | decision | "Using CogVideoX-2B for text-to-video" |
| Important analysis | analysis | "WAL patterns should be core infra not skills" |
| State change | state_change | "GPU server SSH key auth configured" |
# Write before responding
python3 scripts/wal.py append <agent_id> correction "Use Podman not Docker"
# Working buffer (batch, flush before compaction)
python3 scripts/wal.py buffer-add <agent_id> decision "Some decision"
python3 scripts/wal.py flush-buffer <agent_id>
# Session start: replay lost context
python3 scripts/wal.py replay <agent_id>
# After incorporating a replayed entry
python3 scripts/wal.py mark-applied <agent_id> <entry_id>
# Maintenance
python3 scripts/wal.py status <agent_id>
python3 scripts/wal.py prune <agent_id> --keep 50
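The internals of `wal.py` aren't shown here; a minimal sketch of how an append-only JSONL log with replay could work (the file layout, entry fields, and `applied` flag are all assumptions, not the shipped implementation):

```python
import json
import time
from pathlib import Path

def wal_append(wal_dir: Path, agent_id: str, entry_type: str, content: str) -> dict:
    """Append one entry to the agent's write-ahead log before responding."""
    entry = {
        "id": str(int(time.time() * 1000)),  # timestamp-based id (assumption)
        "type": entry_type,                  # correction | decision | analysis | state_change
        "content": content,
        "ts": time.time(),
        "applied": False,                    # flipped by mark-applied after replay
    }
    wal_path = wal_dir / f"{agent_id}.jsonl"
    wal_path.parent.mkdir(parents=True, exist_ok=True)
    with wal_path.open("a") as f:
        f.write(json.dumps(entry) + "\n")    # one JSON object per line, append-only
    return entry

def wal_replay(wal_dir: Path, agent_id: str) -> list[dict]:
    """Return unapplied entries so a fresh session can recover lost context."""
    wal_path = wal_dir / f"{agent_id}.jsonl"
    if not wal_path.exists():
        return []
    entries = [json.loads(line) for line in wal_path.read_text().splitlines() if line]
    return [e for e in entries if not e.get("applied")]
```

Append-only writes make the log crash-safe: a half-finished response never loses an already-logged correction.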
Quick reference:
- replay to recover lost context
- append BEFORE responding
- flush-buffer then write daily memory
- buffer-add for less critical items

## VBR (Verify Before Reporting)

Rule: Don't say "done" until verified. Run a check before claiming completion.
# Verify a file exists
python3 scripts/vbr.py check task123 file_exists /path/to/output.py
# Verify a file was recently modified
python3 scripts/vbr.py check task123 file_changed /path/to/file.go
# Verify a command succeeds
python3 scripts/vbr.py check task123 command "cd /tmp/repo && go test ./..."
# Verify git is pushed
python3 scripts/vbr.py check task123 git_pushed /tmp/repo
# Log verification result
python3 scripts/vbr.py log <agent_id> task123 true "All tests pass"
# View pass/fail stats
python3 scripts/vbr.py stats <agent_id>
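The four check types above map naturally onto filesystem and subprocess calls. A sketch of plausible implementations (the 10-minute `file_changed` window and the `git rev-list` upstream comparison are assumptions, not the shipped logic):

```python
import subprocess
import time
from pathlib import Path

def vbr_check(check_type: str, target: str) -> bool:
    """Run one verification before claiming a task is done."""
    if check_type == "file_exists":
        return Path(target).exists()
    if check_type == "file_changed":
        # "Recently modified" here means within the last 10 minutes (assumed window).
        p = Path(target)
        return p.exists() and (time.time() - p.stat().st_mtime) < 600
    if check_type == "command":
        # Pass iff the command exits 0, e.g. "cd /tmp/repo && go test ./..."
        return subprocess.run(target, shell=True, capture_output=True).returncode == 0
    if check_type == "git_pushed":
        # Pass iff the local branch has zero commits ahead of its upstream.
        out = subprocess.run(
            ["git", "-C", target, "rev-list", "--count", "@{u}..HEAD"],
            capture_output=True, text=True,
        )
        return out.returncode == 0 and out.stdout.strip() == "0"
    raise ValueError(f"unknown check type: {check_type}")
```

Each check reduces a "done" claim to an observable fact, which is the whole point of VBR.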
Quick reference:
- check command "go test ./..."
- check file_exists /path
- check git_pushed /repo

## ADL (Anti-Divergence Limit)

Rule: Stay true to your persona. Track behavioral drift from SOUL.md.
# Analyze a response for anti-patterns
python3 scripts/adl.py analyze "Great question! I'd be happy to help you with that!"
# Log a behavioral observation
python3 scripts/adl.py log <agent_id> anti_sycophancy "Used 'Great question!' in response"
python3 scripts/adl.py log <agent_id> persona_direct "Shipped fix without asking permission"
# Calculate divergence score (0=aligned, 1=fully drifted)
python3 scripts/adl.py score <agent_id>
# Check against threshold
python3 scripts/adl.py check <agent_id> --threshold 0.7
# Reset after recalibration
python3 scripts/adl.py reset <agent_id>
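How `adl.py` turns logged observations into a 0-1 score isn't specified; one plausible scheme is a recency-weighted fraction of anti-pattern observations (the observation shape and linear weighting below are assumptions):

```python
def adl_score(observations: list[dict]) -> float:
    """Divergence score in [0, 1]: weighted share of observations that are anti-patterns.

    Assumed observation shape: {"pattern": str, "anti": bool}, where anti=True
    marks drift (e.g. anti_sycophancy hits) and anti=False marks persona-aligned
    behavior (e.g. persona_direct). Newer entries get linearly higher weight, so
    recent drift matters more than old drift.
    """
    if not observations:
        return 0.0
    weights = range(1, len(observations) + 1)  # oldest -> newest
    drift = sum(w for w, o in zip(weights, observations) if o["anti"])
    return drift / sum(weights)

obs = [
    {"pattern": "persona_direct", "anti": False},
    {"pattern": "anti_sycophancy", "anti": True},
    {"pattern": "anti_sycophancy", "anti": True},
]
score = adl_score(obs)  # drift weights (2 + 3) over total (1 + 2 + 3) = 5/6
drifted = score > 0.7   # compare against the --threshold value
```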
## VFM (Value-For-Money)

Rule: Track cost vs. value. Don't burn premium tokens on budget tasks.
# Log a completed task with cost
python3 scripts/vfm.py log <agent_id> monitoring glm-4.7 37000 0.03 0.8
# Calculate VFM scores
python3 scripts/vfm.py score <agent_id>
# Cost breakdown by model and task
python3 scripts/vfm.py report <agent_id>
# Get optimization suggestions
python3 scripts/vfm.py suggest <agent_id>
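A natural reading of the `log` arguments (task type, model, token count, cost in dollars, 0-1 value rating) suggests scoring value delivered per dollar, aggregated per model. A sketch under those assumptions (not the shipped scoring formula):

```python
def vfm_score(tasks: list[dict]) -> dict:
    """Value-for-money per model: total value delivered / total dollars spent.

    Assumed task shape mirrors the vfm.py log arguments:
    {"task": str, "model": str, "tokens": int, "cost_usd": float, "value": float}
    where value is a 0-1 usefulness rating.
    """
    by_model: dict = {}
    for t in tasks:
        m = by_model.setdefault(t["model"], {"cost": 0.0, "value": 0.0})
        m["cost"] += t["cost_usd"]
        m["value"] += t["value"]
    return {model: (m["value"] / m["cost"] if m["cost"] else 0.0)
            for model, m in by_model.items()}

tasks = [
    {"task": "monitoring", "model": "glm", "tokens": 37000, "cost_usd": 0.03, "value": 0.8},
    {"task": "architecture", "model": "opus", "tokens": 12000, "cost_usd": 0.60, "value": 0.9},
]
scores = vfm_score(tasks)
```

Under this metric a budget model doing routine monitoring scores far higher per dollar than a premium model, which is exactly the signal `suggest` would act on.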
| Task Type | Recommended Tier | Models |
|-----------|-----------------|--------|
| Monitoring, formatting, summarization | Budget | GLM, DeepSeek, Haiku |
| Code generation, debugging, creative | Standard | Sonnet, Gemini Pro |
| Architecture, complex analysis | Premium | Opus, Sonnet+thinking |
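The tier table above can be sketched as a routing function (the task-type keys and the default-to-standard fallback are illustrative, not part of the shipped scripts):

```python
# Routing table derived from the tier recommendations above (illustrative only).
TIER_BY_TASK = {
    "monitoring": "budget",
    "formatting": "budget",
    "summarization": "budget",
    "code_generation": "standard",
    "debugging": "standard",
    "creative": "standard",
    "architecture": "premium",
    "complex_analysis": "premium",
}

def pick_tier(task_type: str) -> str:
    """Route a task to a model tier; unknown task types default to standard."""
    return TIER_BY_TASK.get(task_type, "standard")
```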
Quick reference:
- suggest for optimization tips
- report for cost breakdown

## IKL

Rule: Log infrastructure facts immediately. When you discover hardware specs, service configs, or network topology, write them down BEFORE continuing.
| Discovery Type | Log To | Example |
|----------------|--------|---------|
| Hardware specs | TOOLS.md | "GPU server has 3 GPUs: RTX 3090 + 3080 + 2070 SUPER" |
| Service configs | TOOLS.md | "ComfyUI runs on port 8188, uses /data/ai-stack" |
| Network topology | TOOLS.md | "Pi at 192.168.99.25, GPU server at 10.0.0.44" |
| Credentials/auth | memory/encrypted/ | "SSH key: ~/.ssh/id_ed25519_alexchen" |
| API endpoints | TOOLS.md or skill | "Moltbook API: POST /api/v1/posts" |
# Hardware discovery
nvidia-smi --query-gpu=index,name,memory.total --format=csv
lscpu | grep -E "Model name|CPU\(s\)|Thread"
free -h
df -h
# Service discovery
systemctl list-units --type=service --state=running
docker ps # or podman ps
ss -tlnp | grep LISTEN
# Network discovery
ip addr show
cat /etc/hosts
β "The GPU server has 3 GPUs" (only in conversation)
β "The GPU server has 3 GPUs" β Update TOOLS.md β then continue
Memory is limited. Files are permanent. IKL before you forget.
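The logging step itself can be sketched: append a dated fact to TOOLS.md the moment it is discovered. The bullet format and helper name below are assumptions, not part of the shipped scripts:

```python
from datetime import date
from pathlib import Path

def ikl_log(tools_md: Path, category: str, fact: str) -> None:
    """Append one discovered infrastructure fact as a dated bullet (assumed format)."""
    line = f"- [{date.today().isoformat()}] {category}: {fact}\n"
    with tools_md.open("a") as f:  # append mode: creates the file if missing
        f.write(line)

# e.g. right after running nvidia-smi:
# ikl_log(Path("TOOLS.md"), "hardware",
#         "GPU server has 3 GPUs: RTX 3090 + 3080 + 2070 SUPER")
```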