elite-longterm-memory-1-2-3
Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vibe-coding ready.
Install via ClawdBot CLI:
clawdbot install itsjustFred/elite-longterm-memory-1-2-3

The ultimate memory system for AI agents. Combines 6 proven approaches into one bulletproof architecture.
Never lose context. Never forget decisions. Never repeat mistakes.
┌─────────────────────────────────────────────────────────────────┐
│                      ELITE LONGTERM MEMORY                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐            │
│  │   HOT RAM   │   │ WARM STORE  │   │ COLD STORE  │            │
│  │             │   │             │   │             │            │
│  │  SESSION-   │   │   LanceDB   │   │  Git-Notes  │            │
│  │  STATE.md   │   │   Vectors   │   │  Knowledge  │            │
│  │             │   │             │   │    Graph    │            │
│  │  (survives  │   │  (semantic  │   │ (permanent  │            │
│  │ compaction) │   │   search)   │   │  decisions) │            │
│  └──────┬──────┘   └──────┬──────┘   └──────┬──────┘            │
│         │                 │                 │                   │
│         └─────────────────┼─────────────────┘                   │
│                           ▼                                     │
│                    ┌─────────────┐                              │
│                    │  MEMORY.md  │  ← Curated long-term         │
│                    │  + daily/   │    (human-readable)          │
│                    └──────┬──────┘                              │
│                           ▼                                     │
│                    ┌─────────────┐                              │
│                    │ SuperMemory │  ← Cloud backup (optional)   │
│                    │     API     │                              │
│                    └─────────────┘                              │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
From: bulletproof-memory
Active working memory that survives compaction. Write-Ahead Log protocol.
# SESSION-STATE.md - Active Working Memory
## Current Task
[What we're working on RIGHT NOW]
## Key Context
- User preference: ...
- Decision made: ...
- Blocker: ...
## Pending Actions
- [ ] ...
Rule: Write BEFORE responding. Triggered by user input, not agent memory.
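The write-ahead discipline is easy to mechanize. The sketch below is a hypothetical helper (not part of the skill) showing the key property: the entry hits disk before any reply is composed.

```python
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("SESSION-STATE.md")

def wal_update(section: str, entry: str, state_file: Path = STATE_FILE) -> None:
    """Durably record an entry under a section BEFORE the agent responds."""
    text = state_file.read_text() if state_file.exists() else "# SESSION-STATE.md\n"
    header = f"## {section}"
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    line = f"- {entry} ({stamp})"
    if header in text:
        # Insert the new entry directly under the existing section header.
        text = text.replace(header, f"{header}\n{line}", 1)
    else:
        text += f"\n{header}\n{line}\n"
    state_file.write_text(text)  # on disk before any reply is composed

# Write first, respond second.
wal_update("Key Context", "Decision made: use Tailwind, not vanilla CSS")
```

If the process compacts or crashes after this call, the decision survives in the file.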
From: lancedb-memory
Semantic search across all memories. Auto-recall injects relevant context.
# Auto-recall (happens automatically)
memory_recall query="project status" limit=5
# Manual store
memory_store text="User prefers dark mode" category="preference" importance=0.9
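Conceptually, memory_recall is a similarity search over stored embeddings. This toy sketch uses plain cosine similarity over an in-memory list standing in for LanceDB; the vectors and entries are made up, but it shows how limit and a minimum-score cutoff shape what gets injected:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy in-memory table standing in for LanceDB: (embedding, text, importance).
STORE = [
    ([0.9, 0.1, 0.0], "User prefers dark mode", 0.9),
    ([0.1, 0.9, 0.0], "Deploy target is staging first", 0.6),
    ([0.8, 0.2, 0.1], "User prefers Tailwind", 0.8),
]

def memory_recall(query_vec, limit=5, min_score=0.3):
    """Return the best-matching memories above min_score, best first."""
    scored = [(cosine(query_vec, vec), text) for vec, text, _ in STORE]
    hits = sorted((s, t) for s, t in scored if s >= min_score)[::-1]
    return [t for _, t in hits[:limit]]

print(memory_recall([1.0, 0.0, 0.0], limit=2))
# -> ['User prefers dark mode', 'User prefers Tailwind']
```

Real embeddings come from a model, but the ranking-and-threshold step is the same idea.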
From: git-notes-memory
Structured decisions, learnings, and context. Branch-aware.
# Store a decision (SILENT - never announce)
python3 memory.py -p $DIR remember '{"type":"decision","content":"Use React for frontend"}' -t tech -i h
# Retrieve context
python3 memory.py -p $DIR get "frontend"
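Each memory is a small JSON record attached to the repository via git notes. The field names below mirror the flags in the commands above (-t for tags, -i for importance) but are assumptions about the payload, not the skill's actual schema:

```python
import json
from datetime import datetime, timezone

def build_memory_note(mem_type, content, tags=None, importance="m"):
    """Assemble the JSON payload a `git notes add -m ...` call would attach."""
    return json.dumps({
        "type": mem_type,
        "content": content,
        "tags": tags or [],
        "importance": importance,  # e.g. l / m / h, mirroring -i h
        "ts": datetime.now(timezone.utc).isoformat(),
    })

note = build_memory_note("decision", "Use React for frontend", ["tech"], "h")
# A git-notes-backed store would then attach it with something like:
#   git notes --ref=memory add -m "$NOTE" HEAD
print(note)
```

Because notes ride along with commits, the record is branch-aware and survives as long as the repository does.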
From: OpenClaw native
Human-readable long-term memory. Daily logs + distilled wisdom.
workspace/
├── MEMORY.md            # Curated long-term (the good stuff)
└── memory/
    ├── 2026-01-30.md    # Daily log
    ├── 2026-01-29.md
    └── topics/          # Topic-specific files
From: supermemory
Cross-device sync. Chat with your knowledge base.
export SUPERMEMORY_API_KEY="your-key"
supermemory add "Important context"
supermemory search "what did we decide about..."
NEW: Automatic fact extraction
Mem0 automatically extracts facts from conversations. 80% token reduction.
npm install mem0ai
export MEM0_API_KEY="your-key"
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Conversations auto-extract facts
const messages = [{ role: "user", content: "I prefer dark mode" }];
await client.add(messages, { user_id: "user123" });
// Retrieve relevant memories
const memories = await client.search("preferences", { user_id: "user123" });
Benefits: hands-off fact capture and roughly 80% fewer tokens spent re-injecting raw conversation history.
cat > SESSION-STATE.md << 'EOF'
# SESSION-STATE.md - Active Working Memory
This file is the agent's "RAM" - it survives compaction, restarts, distractions.
## Current Task
[None]
## Key Context
[None yet]
## Pending Actions
- [ ] None
## Recent Decisions
[None yet]
---
*Last updated: [timestamp]*
EOF
In ~/.openclaw/openclaw.json:
{
"memorySearch": {
"enabled": true,
"provider": "openai",
"sources": ["memory"],
"minScore": 0.3,
"maxResults": 10
},
"plugins": {
"entries": {
"memory-lancedb": {
"enabled": true,
"config": {
"autoCapture": false,
"autoRecall": true,
"captureCategories": ["preference", "decision", "fact"],
"minImportance": 0.7
}
}
}
}
}
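The interplay of minScore, maxResults, minImportance, and captureCategories can be illustrated with a small sketch. The filtering logic below is an assumption about how the plugin behaves, not its actual source:

```python
# Values mirror the openclaw.json example above.
MIN_SCORE, MAX_RESULTS = 0.3, 10
MIN_IMPORTANCE = 0.7
CAPTURE_CATEGORIES = {"preference", "decision", "fact"}

def recall_filter(candidates):
    """memorySearch side: keep (score, text) hits >= minScore, best first, capped."""
    kept = [c for c in candidates if c[0] >= MIN_SCORE]
    kept.sort(key=lambda c: c[0], reverse=True)
    return kept[:MAX_RESULTS]

def capture_filter(category, importance):
    """Plugin side: only store vectors for allowed categories above minImportance."""
    return category in CAPTURE_CATEGORIES and importance >= MIN_IMPORTANCE

print(recall_filter([(0.9, "dark mode"), (0.2, "noise"), (0.5, "deadline")]))
# keeps the 0.9 and 0.5 hits, drops the 0.2 one
```

Raising minImportance is the usual lever when too much low-value chatter ends up in the vector store.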
cd ~/clawd
git init # if not already
python3 skills/git-notes-memory/memory.py -p . sync --start
# Ensure you have:
# - MEMORY.md in workspace root
# - memory/ folder for daily logs
mkdir -p memory
export SUPERMEMORY_API_KEY="your-key"
# Add to ~/.zshrc for persistence
- memory_search for relevant prior context
- memory_store with importance=0.9
- memory_recall query="*" limit=50
- memory_forget id=...

Write-Ahead Log: Write state BEFORE responding, not after.
| Trigger | Action |
|---------|--------|
| User states preference | Write to SESSION-STATE.md → then respond |
| User makes decision | Write to SESSION-STATE.md → then respond |
| User gives deadline | Write to SESSION-STATE.md → then respond |
| User corrects you | Write to SESSION-STATE.md → then respond |
Why? If you respond first and crash/compact before saving, context is lost. WAL ensures durability.
User: "Let's use Tailwind for this project, not vanilla CSS"
Agent (internal):
1. Write to SESSION-STATE.md: "Decision: Use Tailwind, not vanilla CSS"
2. Store in Git-Notes: decision about CSS framework
3. memory_store: "User prefers Tailwind over vanilla CSS" importance=0.9
4. THEN respond: "Got it, Tailwind it is..."
# Audit vector memory
memory_recall query="*" limit=50
# Clear all vectors (nuclear option)
rm -rf ~/.openclaw/memory/lancedb/
openclaw gateway restart
# Export Git-Notes
python3 memory.py -p . export --format json > memories.json
# Check memory health
du -sh ~/.openclaw/memory/
wc -l MEMORY.md
ls -la memory/
Understanding the root causes helps you fix them:
| Failure Mode | Cause | Fix |
|--------------|-------|-----|
| Forgets everything | memory_search disabled | Enable + add OpenAI key |
| Files not loaded | Agent skips reading memory | Add to AGENTS.md rules |
| Facts not captured | No auto-extraction | Use Mem0 or manual logging |
| Sub-agents isolated | Don't inherit context | Pass context in task prompt |
| Repeats mistakes | Lessons not logged | Write to memory/lessons.md |
If you have an OpenAI key, enable semantic search:
openclaw configure --section web
This enables vector search over MEMORY.md + memory/*.md files.
Auto-extract facts from conversations. 80% token reduction.
npm install mem0ai
const { MemoryClient } = require('mem0ai');
const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY });
// Auto-extract and store
await client.add([
{ role: "user", content: "I prefer Tailwind over vanilla CSS" }
], { user_id: "ty" });
// Retrieve relevant memories
const memories = await client.search("CSS preferences", { user_id: "ty" });
memory/
├── projects/
│   ├── strykr.md
│   └── taska.md
├── people/
│   └── contacts.md
├── decisions/
│   └── 2026-01.md
├── lessons/
│   └── mistakes.md
└── preferences.md
Keep MEMORY.md as a summary (<5KB), link to detailed files.
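A quick way to enforce the under-5KB rule is a size check. This helper is a hypothetical convenience, not something the skill ships:

```python
from pathlib import Path

LIMIT_BYTES = 5 * 1024  # keep the curated summary under ~5 KB

def memory_md_ok(path: str = "MEMORY.md") -> bool:
    """True if MEMORY.md is absent or still under the size budget."""
    p = Path(path)
    return (not p.exists()) or p.stat().st_size < LIMIT_BYTES

# A 2 KB summary passes; a bloated file signals it is time to split
# details out into memory/*.md and keep only links in MEMORY.md.
Path("MEMORY.md").write_text("x" * 2048)
print(memory_md_ok())  # True
```

Run it in a pre-commit hook or a periodic hygiene pass to catch summary bloat early.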
| Problem | Fix |
|---------|-----|
| Forgets preferences | Add ## Preferences section to MEMORY.md |
| Repeats mistakes | Log every mistake to memory/lessons.md |
| Sub-agents lack context | Include key context in spawn task prompt |
| Forgets recent work | Strict daily file discipline |
| Memory search not working | Check OPENAI_API_KEY is set |
Agent keeps forgetting mid-conversation:
→ SESSION-STATE.md not being updated. Check WAL protocol.
Irrelevant memories injected:
→ Disable autoCapture, increase minImportance threshold.
Memory too large, slow recall:
→ Run hygiene: clear old vectors, archive daily logs.
Git-Notes not persisting:
→ Run git notes push to sync with remote.
memory_search returns nothing:
→ Check OpenAI API key: echo $OPENAI_API_KEY
→ Verify memorySearch enabled in openclaw.json
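The "archive daily logs" hygiene step can be scripted. This sketch is a hypothetical helper (the skill ships no such command); it moves dated files older than a cutoff into memory/archive/:

```python
import shutil
from datetime import date, timedelta
from pathlib import Path

def archive_old_dailies(memory_dir="memory", keep_days=30):
    """Move daily logs named YYYY-MM-DD.md older than keep_days into memory/archive/."""
    root = Path(memory_dir)
    archive = root / "archive"
    archive.mkdir(parents=True, exist_ok=True)
    cutoff = date.today() - timedelta(days=keep_days)
    moved = []
    for f in root.glob("*.md"):
        try:
            logged = date.fromisoformat(f.stem)  # skip files that are not daily logs
        except ValueError:
            continue
        if logged < cutoff:
            shutil.move(str(f), archive / f.name)
            moved.append(f.name)
    return moved
```

Archived logs stay greppable on disk but stop inflating what the agent reads at session start.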
Built by @NextXFrontier, part of the Next Frontier AI toolkit
Generated Mar 1, 2026
AI agents track project decisions, coding preferences, and technical debt across development sprints. This ensures consistent context when switching between tasks or team members, reducing onboarding time and maintaining project continuity.
AI-powered support agents use long-term memory to recall past interactions, user issues, and resolutions. This enables personalized responses and faster problem-solving, improving customer satisfaction and reducing repeat inquiries.
Content teams leverage memory layers to store brand guidelines, audience preferences, and campaign performance data. AI agents generate consistent, on-brand content and optimize strategies based on historical insights.
Researchers use the system to log hypotheses, data sources, and findings over long-term projects. AI agents assist by recalling relevant studies and patterns, accelerating literature reviews and experimental design.
Individuals track personal goals, learning progress, and daily reflections. AI agents provide tailored recommendations and reminders based on stored preferences and past achievements, enhancing self-improvement efforts.
Offer tiered monthly subscriptions for individuals, teams, and enterprises with features like cloud backup and advanced analytics. Revenue scales with user count and storage needs, targeting developers and businesses.
Provide a free basic version with limited memory layers and integrations, then charge for premium features such as Mem0 auto-extraction and SuperMemory cloud sync. This attracts users and converts them through value-added services.
Sell customized packages to large organizations with needs like on-premise deployment, enhanced security, and integration with existing tools. Revenue comes from licensing fees and ongoing support contracts.
💬 Integration Tip
Start by setting up SESSION-STATE.md and LanceDB for basic memory, then gradually add layers like Git-Notes and Mem0 as needs grow to avoid overwhelming initial setup.