ClawHub Skills Lib
skill-spotlight · agent-frameworks · clawhub · openclaw · self-improving-agent

self-improving-agent: The OpenClaw Skill That Teaches Your AI to Learn From Its Own Mistakes

March 5, 2026 · 7 min read

In just two months since its release, self-improving-agent by pskoett has become the most-starred skill on ClawHub — 1,100+ stars, 90,000+ downloads, and over 1,000 installs as of March 2026. For a community-built AI agent skill, these are extraordinary numbers.

So what does it actually do, and why has it captured the community's attention so quickly?


The Problem It Solves

Every developer who uses Claude Code or OpenClaw for daily work knows the frustration: you correct the agent on something — maybe it keeps using the wrong path format, or it doesn't know about a project-specific convention — and the next session, it makes the same mistake again.

AI agents have no persistent memory across sessions by default. Every conversation starts fresh. You end up repeating the same corrections, writing the same clarifications in your prompts, and watching the agent rediscover the same dead ends.

self-improving-agent attacks this problem directly. Instead of relying on external memory plugins or manually curated CLAUDE.md files, it turns the agent itself into the curator — automatically documenting failures, corrections, and discoveries as they happen.


Core Concept: The Learning Loop

The skill introduces a structured feedback loop built entirely on markdown files stored in your project's .learnings/ directory:

.learnings/
├── LEARNINGS.md       # Corrections, insights, knowledge gaps, best practices
├── ERRORS.md          # Command failures and exceptions
└── FEATURE_REQUESTS.md # Capabilities users wished existed

Every time something noteworthy happens during a session — a command fails, the user corrects the agent, a better approach is discovered — the agent logs a structured entry into the appropriate file. These files persist across sessions, giving the agent a growing knowledge base specific to your project and workflow.

The design is strikingly simple: markdown files, not vector databases. No external APIs. No embedding pipelines. The agent reads its own history before starting work and writes to it when it learns something new.
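The write side of that loop can be sketched in a few lines of shell. This is a hypothetical helper for illustration, not the skill's actual code; the field layout follows the entry format the skill uses:

```shell
# Hypothetical helper (not the skill's actual code): append a structured
# learning entry to the project's persistent knowledge base.
log_learning() {
  # $1 = priority, $2 = area, $3 = one-line summary
  mkdir -p .learnings
  n=$(grep -c '^## LRN-' .learnings/LEARNINGS.md 2>/dev/null)
  [ -n "$n" ] || n=0   # file may not exist yet
  {
    printf '## LRN-%s-%03d\n' "$(date +%Y%m%d)" "$((n + 1))"
    printf '**Priority:** %s\n' "$1"
    printf '**Status:** pending\n'
    printf '**Area:** %s\n' "$2"
    printf '**Summary:** %s\n\n' "$3"
  } >> .learnings/LEARNINGS.md
}

log_learning high backend "Empty tsquery string rejected by PostgreSQL"
```

Because entries are append-only markdown, the same file doubles as the human review surface: open it in any editor to prune or promote.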


Inside the Entry Format

Each logged item follows a strict schema that makes the knowledge machine-readable and human-reviewable:

## LRN-20260215-001
**Priority:** high
**Status:** resolved
**Area:** backend
**Summary:** Prisma `queryRawUnsafe` throws syntax error on empty tsquery string

When sanitizing Chinese input, stripping non-ASCII characters produces an empty
string. Passing this to `to_tsquery()` generates `':*'` which PostgreSQL rejects
with error code 42601.

**Fix:** Check `qTerms.length > 0` before building the tsquery branch.
Fall through to ILIKE-only path for non-ASCII queries.

**See Also:** ERR-20260215-002

The structured fields serve specific purposes:

  • Priority (low / medium / high / critical) — determines how urgently a learning needs to be promoted to permanent project knowledge
  • Status (pending / in_progress / resolved / won't_fix / promoted / promoted_to_skill) — tracks the lifecycle of each entry
  • Area tags (frontend / backend / infra / tests / docs / config) — allows filtering by domain
  • See Also links — connects related entries, enabling the agent to detect recurring patterns
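Because every field sits on its own line, filtering by these tags needs nothing heavier than awk. A sketch, with an inline sample standing in for a real `.learnings/` directory:

```shell
# Demo data in the entry format shown above (a real project's skill
# would populate this file itself).
mkdir -p .learnings
cat >> .learnings/LEARNINGS.md <<'EOF'
## LRN-20260215-001
**Priority:** high
**Status:** resolved
**Area:** backend
**Summary:** Empty tsquery string rejected by PostgreSQL
EOF

# Print "ID: summary" for every entry tagged with the given area.
awk -v area="backend" '
  /^## LRN-/          { id = $2 }
  /^\*\*Area:\*\*/    { a = $2 }
  /^\*\*Summary:\*\*/ { if (a == area) { sub(/^\*\*Summary:\*\* /, ""); print id ": " $0 } }
' .learnings/LEARNINGS.md
```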

The Promotion System: From Temporary to Permanent

The most powerful feature isn't the logging itself — it's what happens when a learning becomes important enough to promote.

When an entry accumulates multiple "See Also" links (meaning the same issue keeps recurring), or when the user explicitly flags it, the agent can promote that learning to a permanent location:

  • CLAUDE.md — Project-level instructions that Claude Code reads at session start
  • AGENTS.md — OpenClaw workspace instructions
  • .github/copilot-instructions.md — For GitHub Copilot users

This creates a ratchet effect: every corrected mistake is a one-time cost. The learning becomes part of the project's permanent DNA.
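One way to surface promotion candidates mechanically is to count how often each entry ID shows up in "See Also" links across the three files. The threshold below is an arbitrary illustration, not the skill's documented rule:

```shell
# Demo data: the same error referenced from two files.
mkdir -p .learnings
printf '**See Also:** ERR-20260301-004\n' >> .learnings/LEARNINGS.md
printf '**See Also:** ERR-20260301-004\n' >> .learnings/ERRORS.md

# Count references per entry ID; two or more suggests a recurring issue.
grep -h '^\*\*See Also:\*\*' .learnings/*.md \
  | tr ', ' '\n\n' | grep -E '^(LRN|ERR)-' \
  | sort | uniq -c | sort -rn \
  | awk '$1 >= 2 { print "promote candidate: " $2 " (" $1 " refs)" }'
```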

There's also skill extraction — if a learning represents a genuinely reusable solution, the agent can scaffold it into a new ClawHub skill via the extract-skill.sh hook script, contributing back to the community.


Hook Integration: Making It Automatic

The skill installs three shell hooks that make logging feel seamless rather than like extra work:

activator.sh (UserPromptSubmit)

Fires after every user prompt. Reminds the agent to evaluate whether the current interaction warrants a learning entry. The overhead is roughly 50–100 tokens per prompt — negligible compared to typical session lengths.
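A minimal stand-in for such an activator (the real script is more elaborate): a UserPromptSubmit hook's stdout is injected into the model's context, so a short nudge is all it takes, which is where the 50–100 token figure comes from.

```shell
#!/bin/sh
# Minimal stand-in for activator.sh (hypothetical, not the shipped script).
# Whatever this prints gets added to the model's context for the prompt.
reminder='Before responding: if this interaction corrects a mistake,
logs a failure, or reveals a better approach, record it in .learnings/.'
printf '%s\n' "$reminder"
```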

error-detector.sh (PostToolUse)

Triggers automatically when a Bash command exits with a non-zero code. Captures the failed command, error output, and context without requiring the user to do anything.
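Claude Code pipes a JSON payload describing the tool call to PostToolUse hooks on stdin; the sketch below skips that parsing and takes the command and exit code as plain arguments, to show just the logging step (again hypothetical, not the shipped script):

```shell
# Simplified stand-in for error-detector.sh: log a failed command as a
# structured ERR entry. The real hook extracts these values from the
# JSON payload on stdin instead of taking arguments.
log_error() {
  # $1 = failed command, $2 = exit code
  mkdir -p .learnings
  n=$(grep -c '^## ERR-' .learnings/ERRORS.md 2>/dev/null)
  [ -n "$n" ] || n=0
  {
    printf '## ERR-%s-%03d\n' "$(date +%Y%m%d)" "$((n + 1))"
    printf '**Status:** pending\n'
    printf '**Command:** `%s` (exit %s)\n\n' "$1" "$2"
  } >> .learnings/ERRORS.md
}

log_error "npm test" 1
```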

extract-skill.sh

Called manually or triggered when a learning is marked for extraction. Creates a skill scaffold from the resolved entry, including the metadata format required for ClawHub publishing.
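The post doesn't document the scaffold's exact shape, so the following is purely illustrative; the directory layout and metadata fields are guesses, not extract-skill.sh's real output:

```shell
# Hypothetical scaffold step. Layout and frontmatter fields are
# illustrative guesses, not what extract-skill.sh actually emits.
skill_name="empty-tsquery-guard"
mkdir -p "skills/$skill_name"
cat > "skills/$skill_name/SKILL.md" <<EOF
---
name: $skill_name
description: Guard against empty tsquery strings (extracted from LRN-20260215-001)
---

## When to use
(Fill in from the resolved learning entry.)
EOF
```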

In Claude Code, these hooks live in .claude/settings.json:

{
  "hooks": {
    "UserPromptSubmit": [
      { "command": ".learnings/hooks/activator.sh" }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash",
        "command": ".learnings/hooks/error-detector.sh"
      }
    ]
  }
}

Multi-Platform Support

One of the reasons for self-improving-agent's broad adoption is that it works across the AI coding assistant landscape:

OpenClaw (Primary)

Workspace-based injection with automatic skill loading. Supports inter-agent messaging, meaning learnings can be shared across multiple agent sessions running in parallel — a feature unique to OpenClaw's session model.

Claude Code

Configuration via .claude/settings.json with the hook scripts above. The .learnings/ files are read at session start via a CLAUDE.md import directive.
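Claude Code supports `@path` import directives in CLAUDE.md; the import mentioned above might look roughly like this (the file list follows the skill's layout, the exact wording is an assumption):

```markdown
Read the accumulated project learnings before starting work:

@.learnings/LEARNINGS.md
@.learnings/ERRORS.md
```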

GitHub Copilot

Manual activation by including .learnings/ summaries in .github/copilot-instructions.md. Less automated, but still useful for capturing project-specific knowledge.


Why It Went Viral

Several factors converged to make this skill take off:

1. It solves a universal pain point. Every AI coding workflow has the "same mistake, next session" problem. The skill offers a concrete solution with zero external dependencies.

2. The approach is transparent. All state lives in plain markdown files that you can read, edit, or delete. There's no black box. Developers trust what they can see.

3. It compounds over time. A project with six months of self-improving-agent running is meaningfully smarter than one with six days. The value proposition gets stronger the longer you use it.

4. The community loop. The skill-extraction feature means that a particularly good learning can become a published ClawHub skill. Early adopters are contributing back, which generates discussion and accelerates adoption.


How to Install

# Via Clawhub CLI
clawhub install pskoett/self-improving-agent
 
# Or find it in the directory
# https://clawhub-skills.com/skills/self-improving-agent

After installation, the skill creates the .learnings/ directory structure and installs the hook scripts. The CLAUDE.md or AGENTS.md in your project gets an import directive pointing to the learnings files.


Practical Tips

Starting a new project? Install it immediately. The longer it runs, the more valuable it becomes; install it after six months of accumulated mistakes and all of those lessons are already lost.

Review .learnings/LEARNINGS.md weekly. The agent logs entries automatically, but periodically promoting high-value learnings to CLAUDE.md manually ensures the most important knowledge becomes permanent.

Use area tags consistently. When the agent asks for an area tag, resist the urge to skip it. Proper tagging makes it possible to filter learnings when debugging domain-specific issues.

Watch the priority field. If multiple entries share "See Also" links and all have high priority, that's a signal to investigate the root cause rather than just working around symptoms.
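A quick way to pull up that signal from the command line (paths per the skill's layout; the entry header sits directly above the Priority field, so one line of leading context captures the ID):

```shell
# Demo data so the pipeline below has something to match (in a real
# project the skill populates these files itself).
mkdir -p .learnings
cat >> .learnings/LEARNINGS.md <<'EOF'
## LRN-20260215-001
**Priority:** high
**Status:** resolved
EOF

# List the ID of every high-priority entry: -B1 pulls in the header
# line immediately above each matching Priority field.
grep -h -B1 '^\*\*Priority:\*\* high' .learnings/*.md | grep '^## '
```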


Considerations

No skill is perfect. A few things to keep in mind:

  • Token overhead. The activator.sh hook adds ~50–100 tokens per prompt. In long sessions with tight context budgets, this adds up; consider disabling it in sessions where context efficiency is critical.
  • Noisy logging. Early in a project, the agent may log entries for things that aren't genuinely noteworthy. The .learnings/ files benefit from occasional pruning.
  • Rapid iteration. pskoett has shipped 11 patch versions since January (1.0.11 at the time of writing). Active maintenance is a good sign, but it also means the schema and hook scripts can change; pin your version if stability matters.

The Bigger Picture

self-improving-agent represents a shift in how we think about AI coding assistants. Instead of treating the agent as a stateless tool that you configure once and hope for the best, it positions the agent as a colleague that accumulates project-specific expertise over time.

The insight is simple: the agent is already making mistakes and learning from corrections in every session. All the skill does is make that process explicit, persistent, and structured.

That's why 1,100 developers starred it in two months.


View the skill on ClawHub: self-improving-agent

← Back to Blog