# persistent-memory

Three-layer persistent memory system (Markdown + ChromaDB vectors + NetworkX knowledge graph) for long-term agent recall across sessions, with one-command setup.
Install via the ClawdBot CLI:

```shell
clawdbot install Jakebot-ops/persistent-memory
```

This adds persistent three-layer memory to any OpenClaw workspace. The agent gains semantic recall across sessions: decisions, facts, lessons, and institutional knowledge survive restarts.
| Layer | Technology | Purpose |
|-------|-----------|---------|
| L1: Markdown | MEMORY.md + daily logs + reference/ | Human-readable curated knowledge |
| L2: Vector | ChromaDB + all-MiniLM-L6-v2 | Semantic search across all memories |
| L3: Graph | NetworkX | Relationship traversal between concepts |
All three layers sync together. The indexer updates L2 and L3 from L1 automatically.
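As a rough illustration of how L1 feeds the other two layers, the sketch below uses stdlib stand-ins: token sets in place of ChromaDB embeddings and a shared-term adjacency map in place of the NetworkX graph. It is illustrative only, not the actual indexer.

```python
import re
from collections import defaultdict

def parse_sections(markdown: str) -> dict[str, str]:
    """L1 -> chunks: split a markdown file into {heading: body} sections."""
    sections, current, buf = {}, "preamble", []
    for line in markdown.splitlines():
        m = re.match(r"#+\s+(.*)", line)
        if m:
            sections[current] = "\n".join(buf).strip()
            current, buf = m.group(1), []
        else:
            buf.append(line)
    sections[current] = "\n".join(buf).strip()
    return sections

def index(sections: dict[str, str]):
    """L2/L3 stand-ins: token sets instead of embeddings, and a
    shared-term adjacency map instead of a NetworkX graph."""
    vectors = {h: set(re.findall(r"\w+", body.lower()))
               for h, body in sections.items()}
    graph = defaultdict(set)
    headings = list(vectors)
    for i, a in enumerate(headings):
        for b in headings[i + 1:]:
            if vectors[a] & vectors[b]:  # sections sharing any term get linked
                graph[a].add(b)
                graph[b].add(a)
    return vectors, graph

doc = "# Decisions\nUse ChromaDB for search.\n# Lessons\nChromaDB needs re-indexing."
vectors, graph = index(parse_sections(doc))
print(sorted(graph["Decisions"]))
```

The two sections share the term "chromadb", so the graph links them; that is the kind of relationship L3 lets the agent traverse.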
**Problem:** OpenClaw ships a built-in memory search system, but by default it indexes only MEMORY.md and memory/*.md. Critical workspace files such as SOUL.md (agent directives), AGENTS.md (behavior rules), and PROJECTS.md (active work) are ignored.

**Impact:** Agents can violate explicit directives because those directives never surface in memory searches, causing operational failures where agents ignore their own rules.

**Solution:** The configure_openclaw.py script adds a memorySearch configuration block to OpenClaw that indexes all critical workspace files, making directive compliance automatic rather than optional.
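Conceptually, the script performs a merge like the sketch below. The memorySearch key comes from the description above, but the sources list shape and the exact globs are assumptions, not the script's real schema.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical extra sources; the real configure_openclaw.py may differ.
EXTRA_SOURCES = ["SOUL.md", "AGENTS.md", "PROJECTS.md", "reference/*.md"]

def add_memory_search(config_path: Path) -> None:
    """Merge a memorySearch block into an assumed JSON-shaped config."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    block = config.setdefault("memorySearch", {})
    # Union with any existing sources so re-running the script is idempotent.
    block["sources"] = sorted(set(block.get("sources", [])) | set(EXTRA_SOURCES))
    config_path.write_text(json.dumps(config, indent=2))

with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "openclaw.json"
    cfg.write_text('{"memorySearch": {"sources": ["MEMORY.md", "memory/*.md"]}}')
    add_memory_search(cfg)
    sources = json.loads(cfg.read_text())["memorySearch"]["sources"]

print(sources)
```

Because the merge is a set union, running the script twice leaves the config unchanged.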
One command from the workspace root:

```shell
bash skills/persistent-memory/scripts/unified_setup.sh
```
The script runs unattended and handles everything, including the OpenClaw integration that prevents agents from ignoring workspace directives (SOUL.md, AGENTS.md, etc.). No manual configuration is needed.
```shell
vector_memory/venv/bin/python vector_memory/indexer.py
```

The indexer parses MEMORY.md, reference/*.md, and memory/*.md into vector embeddings and rebuilds the knowledge graph. Run it after every edit to keep the layers in sync.
```shell
vector_memory/venv/bin/python vector_memory/search.py "your query"
```

Returns the top three semantically similar chunks, each with its source file and section.
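Under the hood, semantic search ranks chunks by cosine similarity between the query embedding and each chunk embedding. The sketch below substitutes bag-of-words count vectors for the all-MiniLM-L6-v2 embeddings so it runs with the stdlib alone; the ranking logic has the same shape.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in for all-MiniLM-L6-v2: a bag-of-words count vector."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, chunks: dict[str, str], k: int = 3) -> list[str]:
    """Return the k chunk keys most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(chunks[c])), reverse=True)
    return ranked[:k]

chunks = {
    "MEMORY.md#deploys": "deploys run from the main branch every friday",
    "reference/people.md": "alice handles billing questions",
    "memory/2026-02-17.md": "friday deploy failed, rolled back main",
}
results = search("when do deploys run", chunks, k=2)
print(results)
```

Real embeddings would also match "deploy" against "deploys"; that paraphrase tolerance is exactly what the MiniLM model adds over this toy version.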
```shell
vector_memory/venv/bin/python vector_memory/auto_retrieve.py --status
```

Reports sync health: the MEMORY.md hash versus the indexed state, chunk count, and graph size. Use it in heartbeats to detect drift.
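The drift check boils down to comparing the current content hash of MEMORY.md with the hash recorded at index time. A sketch, assuming heartbeat-state.json stores that hash under a memory_md_hash key (the real key name may differ):

```python
import hashlib
import json
import tempfile
from pathlib import Path

def status(memory_file: Path, state_file: Path) -> str:
    """Compare MEMORY.md's current hash with the hash recorded at index time."""
    current = hashlib.sha256(memory_file.read_bytes()).hexdigest()
    try:
        indexed = json.loads(state_file.read_text()).get("memory_md_hash")
    except FileNotFoundError:
        indexed = None
    return "IN_SYNC" if current == indexed else "OUT_OF_SYNC"

with tempfile.TemporaryDirectory() as d:
    mem = Path(d) / "MEMORY.md"
    state = Path(d) / "heartbeat-state.json"
    mem.write_text("# Decisions\n")
    # Record the hash as the indexer would at index time.
    state.write_text(json.dumps(
        {"memory_md_hash": hashlib.sha256(mem.read_bytes()).hexdigest()}))
    s1 = status(mem, state)   # freshly indexed
    mem.write_text("# Decisions\n- new fact\n")
    s2 = status(mem, state)   # edited but not re-indexed

print(s1, s2)
```

Any edit to MEMORY.md after indexing flips the status, which is what makes this a cheap heartbeat check.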
Add these to AGENTS.md or SOUL.md:
Before answering questions about prior work, decisions, dates, people, or preferences, search memory first. Use memory_search or run auto_retrieve.py with the query. Never say "I don't remember" without checking.
CRITICAL: OpenClaw's built-in memory search should now find directive files (SOUL.md, AGENTS.md) automatically if configure_openclaw.py was run. If memory searches do not surface agent rules or workspace directives, the OpenClaw integration is missing or broken.
Before executing any action that references an external identifier (URL, handle, email, repo name, address), query the reference/ files for the exact value. If not found, query vector memory. If still not found, ask the user. Never fabricate identifiers.
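The first step of that directive, an exact lookup in reference/, can be sketched as follows. The "name: value" line format and the repo URL are assumptions for illustration; the real reference files are free-form markdown.

```python
import re
import tempfile
from pathlib import Path

def lookup_identifier(name: str, reference_dir: Path):
    """Exact lookup in reference/*.md, assuming 'name: value' lines."""
    for f in sorted(reference_dir.glob("*.md")):
        for line in f.read_text().splitlines():
            m = re.match(rf"[-*]?\s*{re.escape(name)}\s*:\s*(\S+)", line.strip())
            if m:
                return m.group(1)
    return None  # caller falls through to vector search, then asks the user

with tempfile.TemporaryDirectory() as d:
    ref = Path(d)
    (ref / "repos.md").write_text("- main-repo: https://github.com/example/app\n")
    result = lookup_identifier("main-repo", ref)

print(result)
```

Returning None rather than a guess is the point: the fallback chain ends with asking the user, never with a fabricated identifier.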
After editing MEMORY.md or any file in reference/ or memory/, re-index:

```shell
vector_memory/venv/bin/python vector_memory/indexer.py
```
Add to HEARTBEAT.md:

```markdown
## Memory Sync Check

Run `vector_memory/venv/bin/python vector_memory/auto_retrieve.py --status`; if the status is OUT_OF_SYNC, re-index with `vector_memory/venv/bin/python vector_memory/indexer.py`.
```
Create reference/ in the workspace root as the agent's institutional knowledge base:

```
reference/
├── people.md          – Contacts, roles, communication details
├── repos.md           – GitHub repositories, URLs, status
├── infrastructure.md  – Hosts, IPs, ports, services
├── business.md        – Company info, strategies, rules
└── properties.md      – Domain-specific entities (deals, products, etc.)
```

These files are vector-indexed alongside MEMORY.md. The agent queries them before any action involving external identifiers. Facts accumulate over time: an agent that never forgets.
```
workspace/
├── MEMORY.md                 – Curated long-term memory (L1)
├── memory/
│   ├── 2026-02-17.md         – Daily log
│   └── heartbeat-state.json  – Sync tracking
├── reference/                – Institutional knowledge (optional)
│   ├── people.md
│   └── ...
└── vector_memory/
    ├── indexer.py            – Index all markdown into vectors + graph
    ├── search.py             – Semantic search CLI
    ├── graph.py              – NetworkX knowledge graph
    ├── auto_retrieve.py      – Status checker + auto-retrieval
    ├── chroma_db/            – Vector database (gitignored)
    ├── memory_graph.json     – Knowledge graph (auto-generated)
    └── venv/                 – Python venv (gitignored)
```
Useful commands:

```shell
# Activate the skill's virtual environment
source vector_memory/venv/bin/activate

# Rebuild the vector index and knowledge graph
vector_memory/venv/bin/python vector_memory/indexer.py

# Re-run the OpenClaw integration to fix missing directive indexing
python skills/persistent-memory/scripts/configure_openclaw.py

# Verify the memorySearch block is present in the OpenClaw config
openclaw config get | grep memorySearch

# Restart the gateway so config changes take effect
openclaw gateway restart
```

Generated Feb 23, 2026
A customer support agent uses persistent memory to recall past interactions, solutions, and customer preferences across sessions, enabling personalized and consistent support without manual lookup. This reduces resolution time and improves customer satisfaction by avoiding repetitive questions.
A legal research assistant leverages the three-layer memory to store case laws, precedents, and client details, allowing semantic search and relationship mapping for quick retrieval during case preparation. This ensures compliance with legal directives and reduces errors in referencing critical information.
A project management coordinator uses persistent memory to track project decisions, timelines, and team communications across sessions, maintaining institutional knowledge for ongoing and future projects. This prevents loss of context during team handovers or system restarts.
A healthcare compliance officer employs persistent memory to store regulatory guidelines, patient protocols, and audit trails, enabling automatic recall of rules and past decisions to ensure adherence to health standards. This minimizes risks of non-compliance and operational failures.
Offer persistent memory as a cloud-based service with tiered pricing based on storage capacity and search frequency, targeting businesses needing long-term AI agent memory. Revenue is generated through monthly or annual subscriptions, with upsells for advanced features like analytics.
Sell on-premise licenses for large organizations requiring secure, customized memory systems integrated with existing workflows, such as legal firms or healthcare providers. Revenue comes from one-time license fees plus annual support and maintenance contracts.
Provide consulting services to set up and customize persistent memory for specific industries, including training and ongoing management, leveraging the one-command setup for rapid deployment. Revenue is generated through project-based fees and retainer agreements.
Integration Tip

Ensure the configure_openclaw.py script runs successfully so that OpenClaw's memory search covers critical directive files like SOUL.md and AGENTS.md; otherwise agents may miss the rules they are meant to comply with.