# elite-longterm-memory

Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vibe-coding ready.

Install via ClawdBot CLI:

clawdbot install NextFrontierBuilds/elite-longterm-memory

The ultimate memory system for AI agents. Combines 6 proven approaches into one bulletproof architecture.
Never lose context. Never forget decisions. Never repeat mistakes.
┌──────────────────────────────────────────────────────────────┐
│                    ELITE LONGTERM MEMORY                     │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌────────────┐    ┌────────────┐    ┌────────────┐          │
│  │  HOT RAM   │    │ WARM STORE │    │ COLD STORE │          │
│  │            │    │            │    │            │          │
│  │  SESSION-  │    │  LanceDB   │    │ Git-Notes  │          │
│  │  STATE.md  │    │  Vectors   │    │ Knowledge  │          │
│  │            │    │            │    │   Graph    │          │
│  │ (survives  │    │ (semantic  │    │ (permanent │          │
│  │ compaction)│    │  search)   │    │ decisions) │          │
│  └─────┬──────┘    └─────┬──────┘    └─────┬──────┘          │
│        │                 │                 │                 │
│        └─────────────────┼─────────────────┘                 │
│                          ▼                                   │
│                    ┌────────────┐                            │
│                    │ MEMORY.md  │  ← Curated long-term       │
│                    │ + daily/   │    (human-readable)        │
│                    └─────┬──────┘                            │
│                          ▼                                   │
│                    ┌────────────┐                            │
│                    │ SuperMemory│  ← Cloud backup (optional) │
│                    │    API     │                            │
│                    └────────────┘                            │
│                                                              │
└──────────────────────────────────────────────────────────────┘
From: bulletproof-memory
Active working memory that survives compaction. Write-Ahead Log protocol.
# SESSION-STATE.md – Active Working Memory
## Current Task
[What we're working on RIGHT NOW]
## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...
## Pending Actions
- [ ] ...
Rule: Write BEFORE responding. Triggered by user input, not agent memory.
From: lancedb-memory
Semantic search across all memories. Auto-recall injects relevant context.
# Auto-recall (happens automatically)
memory_recall query="project status" limit=5
# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9
From: git-notes-memory
Structured decisions, learnings, and context. Branch-aware.
# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h
# Retrieve context
python3 memory.py -p $DIR get "frontend"
From: OpenClaw native
Human-readable long-term memory. Daily logs + distilled wisdom.
workspace/
├── MEMORY.md          # Curated long-term (the good stuff)
└── memory/
    ├── 2026-01-30.md  # Daily log
    ├── 2026-01-29.md
    └── topics/        # Topic-specific files
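The daily-log convention above can be kept up mechanically instead of by hand. A minimal sketch, assuming the `memory/YYYY-MM-DD.md` layout shown above (the helper name is illustrative):

```python
from datetime import date
from pathlib import Path

def append_daily_log(workspace: Path, entry: str) -> Path:
    """Append a bullet entry to today's daily log, creating the file if needed."""
    log_dir = workspace / "memory"
    log_dir.mkdir(parents=True, exist_ok=True)
    log_file = log_dir / f"{date.today().isoformat()}.md"
    if not log_file.exists():
        # New day: start the file with a heading so it reads well standalone.
        log_file.write_text(f"# Daily log {date.today().isoformat()}\n\n")
    with log_file.open("a") as f:
        f.write(f"- {entry}\n")
    return log_file
```

Calling this on every notable event gives you the "strict daily file discipline" the troubleshooting table below asks for, without relying on the agent remembering to do it.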
From: supermemory
Cross-device sync. Chat with your knowledge base.
export SUPERMEMORY_API_KEY="your-key"
supermemory add "Important context"
supermemory search "what did we decide about..."
NEW: Automatic fact extraction
Mem0 automatically extracts facts from conversations. 80% token reduction.
npm install mem0ai
export MEM0_API_KEY="your-key"
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Conversations auto-extract facts
await client.add(messages, { user_id: "user123" });
// Retrieve relevant memories
const memories = await client.search(query, { user_id: "user123" });
Benefits: automatic fact extraction from raw conversations, roughly 80% fewer tokens on recall, and per-user memories (keyed by `user_id`) that persist across sessions.
cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md – Active Working Memory
This file is the agent's "RAM" – survives compaction, restarts, distractions.
## Current Task
[None]
## Key Context
[None yet]
## Pending Actions
- [ ] None
## Recent Decisions
[None yet]
---
*Last updated: [timestamp]*
EOF
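Keeping this template current can also be scripted rather than hand-edited. A minimal sketch that rewrites the `## Current Task` section and refreshes the footer, assuming the headings match the template above exactly (the function name is illustrative):

```python
import re
from datetime import datetime, timezone
from pathlib import Path

def set_current_task(state_file: Path, task: str) -> None:
    """Replace the '## Current Task' body and refresh the 'Last updated' footer."""
    text = state_file.read_text()
    # Swap everything between '## Current Task' and the next '## ' heading.
    text = re.sub(
        r"(## Current Task\n).*?(?=\n## )",
        lambda m: m.group(1) + task + "\n",
        text,
        flags=re.DOTALL,
    )
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    text = re.sub(r"\*Last updated: .*\*", f"*Last updated: {stamp}*", text)
    state_file.write_text(text)
```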
In ~/.openclaw/openclaw.json:
{
"memorySearch": {
"enabled": true,
"provider": "openai",
"sources": ["memory"],
"minScore": 0.3,
"maxResults": 10
},
"plugins": {
"entries": {
"memory-lancedb": {
"enabled": true,
"config": {
"autoCapture": false,
"autoRecall": true,
"captureCategories": ["preference", "decision", "fact"],
"minImportance": 0.7
}
}
}
}
}
cd ~/clawd
git init # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start
# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory
export SUPERMEMORY_API_KEY="your-key"
# Add to ~/.zshrc for persistence
- memory_search for relevant prior context
- memory_store with importance=0.9
- memory_recall query="*" limit=50
- memory_forget id=

Write-Ahead Log: Write state BEFORE responding, not after.
| Trigger | Action |
|---------|--------|
| User states preference | Write to SESSION-STATE.md → then respond |
| User makes decision | Write to SESSION-STATE.md → then respond |
| User gives deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |
Why? If you respond first and crash/compact before saving, context is lost. WAL ensures durability.
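The durability argument can be made concrete: persist state to a temp file, fsync, and atomically rename it into place before emitting any response, so a crash leaves either the old state or the new one, never a half-written file. A minimal sketch of this write-before-respond loop (helper names are illustrative, not part of the skill):

```python
import os
import tempfile
from pathlib import Path

def wal_write(state_file: Path, content: str) -> None:
    """Durably persist state: write a temp file, fsync, then atomic rename."""
    fd, tmp = tempfile.mkstemp(dir=state_file.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())  # force bytes to disk before the rename
        # Atomic on POSIX: readers see the old file or the new one, never a mix.
        os.replace(tmp, state_file)
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)

def handle_user_input(state_file: Path, note: str) -> str:
    wal_write(state_file, f"Decision: {note}\n")  # 1. persist FIRST
    return f"Got it: {note}"                      # 2. THEN respond
```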
User: "Let's use Tailwind for this project, not vanilla CSS"
Agent (internal):
1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
2. Store in Git-Notes: decision about CSS framework
3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
4. THEN respond: "Got it – Tailwind it is..."
# Audit vector memory
memory_recall query="*" limit=50
# Clear all vectors (nuclear option)
rm -rf ~/.openclaw/memory/lancedb/
openclaw gateway restart
# Export Git-Notes
python3 memory.py -p . export --format json > memories.json
# Check memory health
du -sh ~/.openclaw/memory/
wc -l MEMORY.md
ls -la memory/
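Hygiene can be automated. A minimal sketch that moves dated daily logs older than a cutoff into an archive folder (the `memory/archive/` destination is an assumption, not part of the skill):

```python
from datetime import date, timedelta
from pathlib import Path

def archive_old_logs(memory_dir: Path, keep_days: int = 30) -> list[Path]:
    """Move YYYY-MM-DD.md daily logs older than keep_days into memory/archive/."""
    cutoff = date.today() - timedelta(days=keep_days)
    archive = memory_dir / "archive"
    archive.mkdir(exist_ok=True)
    moved = []
    for f in memory_dir.glob("*.md"):
        try:
            day = date.fromisoformat(f.stem)  # skips non-dated files like preferences.md
        except ValueError:
            continue
        if day < cutoff:
            target = archive / f.name
            f.rename(target)
            moved.append(target)
    return moved
```

Run it weekly (cron or a hook) so recall stays fast while old context remains greppable in the archive.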
Understanding the root causes helps you fix them:
| Failure Mode | Cause | Fix |
|--------------|-------|-----|
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Use Mem0 or manual logging |
| Sub-agents isolated | Don't inherit context | Pass context in task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
If you have an OpenAI key, enable semantic search:
openclaw configure --section web
This enables vector search over MEMORY.md + memory/*.md files.
Auto-extract facts from conversations. 80% token reduction.
npm install mem0ai
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Auto-extract and store
await client.add([
{ role: "user", content: "I prefer Tailwind over vanilla CSS" }
], { user_id: "ty" });
// Retrieve relevant memories
const memories = await client.search("CSS preferences", { user_id: "ty" });
memory/
├── projects/
│   ├── strykr.md
│   └── taska.md
├── people/
│   └── contacts.md
├── decisions/
│   └── 2026-01.md
├── lessons/
│   └── mistakes.md
└── preferences.md
Keep MEMORY.md as a summary (<5KB), link to detailed files.
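The <5KB budget can be enforced from a pre-commit hook or cron job. A minimal sketch (the limit and warning text are illustrative):

```python
from pathlib import Path

def check_memory_size(memory_md: Path, limit_bytes: int = 5 * 1024) -> bool:
    """Return True if MEMORY.md fits the summary budget; warn otherwise."""
    size = memory_md.stat().st_size
    if size > limit_bytes:
        print(f"MEMORY.md is {size} bytes (> {limit_bytes}): "
              f"distill details into memory/ topic files")
        return False
    return True
```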
| Problem | Fix |
|---------|-----|
| Forgets preferences | Add ## Preferences section to MEMORY.md |
| Repeats mistakes | Log every mistake to memory/lessons.md |
| Sub-agents lack context | Include key context in spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check OPENAI_API_KEY is set |
Agent keeps forgetting mid-conversation:
→ SESSION-STATE.md not being updated. Check WAL protocol.
Irrelevant memories injected:
→ Disable autoCapture, increase minImportance threshold.
Memory too large, slow recall:
→ Run hygiene: clear old vectors, archive daily logs.
Git-Notes not persisting:
→ Run git notes push to sync with remote.
memory_search returns nothing:
→ Check OpenAI API key: echo $OPENAI_API_KEY
→ Verify memorySearch enabled in openclaw.json
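These checks can be bundled into a single diagnostic. A minimal sketch that tests the common causes above; the paths and config keys follow the examples in this document and should be treated as assumptions for your install:

```python
import json
import os
from pathlib import Path

def diagnose(workspace: Path, config_path: Path) -> list[str]:
    """Return likely causes for memory failures; empty list means all checks pass."""
    problems = []
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY not set: memory_search will return nothing")
    if not (workspace / "SESSION-STATE.md").exists():
        problems.append("SESSION-STATE.md missing: WAL protocol has nowhere to write")
    if not (workspace / "MEMORY.md").exists():
        problems.append("MEMORY.md missing: no curated long-term memory")
    try:
        cfg = json.loads(config_path.read_text())
        if not cfg.get("memorySearch", {}).get("enabled"):
            problems.append("memorySearch disabled in openclaw.json")
    except (FileNotFoundError, json.JSONDecodeError):
        problems.append(f"could not parse {config_path}")
    return problems
```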
Built by @NextXFrontier – Part of the Next Frontier AI toolkit
Generated Feb 28, 2026
AI agents use the memory system to track project decisions, code architecture choices, and user preferences across multiple coding sessions. This ensures consistency in tech stack decisions and avoids repeating past mistakes, such as re-evaluating frameworks or design patterns.
AI-powered support agents leverage long-term memory to recall previous customer interactions, preferences, and resolved issues. This enables personalized responses and reduces redundancy, improving customer satisfaction and efficiency in handling support tickets.
Researchers use the memory layers to store insights, data interpretations, and hypotheses over long-term projects. The vector search helps retrieve relevant past findings quickly, aiding in literature reviews and hypothesis testing without losing context.
Content creators employ the memory system to track audience preferences, past content performance, and editorial decisions. This allows for tailored content strategies and consistent messaging across campaigns, enhancing engagement and brand coherence.
Individuals integrate the memory with AI assistants to manage daily tasks, goals, and personal preferences. The system remembers priorities and past decisions, helping automate reminders and optimize workflow over time.
Offer the memory system as a cloud-hosted service with tiered pricing based on storage capacity, API calls, and advanced features like cloud backup. This model ensures recurring revenue and scalability for enterprise and individual users.
Provide a free version with basic memory layers and limited storage, while charging for advanced features such as enhanced vector search, cloud sync, and priority support. This attracts a broad user base and converts power users to paid plans.
Sell customized versions of the memory system to large organizations, including on-premise deployment, dedicated support, and integration with existing tools like GitHub or CRM systems. This targets businesses needing high security and tailored solutions.
💬 Integration Tip
Start by setting up the HOT RAM layer with SESSION-STATE.md to ensure immediate context persistence, then gradually enable vector search and cloud backup as needed for scalability.