# cognitive-memory

Intelligent multi-store memory system with human-like encoding, consolidation, decay, and recall. Use when setting up agent memory, configuring remember/forget triggers, enabling sleep-time reflection, building knowledge graphs, or adding audit trails. Replaces basic flat-file memory with a cognitive architecture featuring episodic, semantic, procedural, and core memory stores. Supports multi-agent systems with a shared-read, gated-write access model. Includes philosophical meta-reflection that deepens understanding over time. Covers MEMORY.md, episode logging, entity graphs, decay scoring, reflection cycles, evolution tracking, and system-wide audit.
Multi-store memory with natural-language triggers, knowledge graphs, decay-based forgetting, reflection consolidation, philosophical evolution, multi-agent support, and a full audit trail.

Install via the ClawdBot CLI:

```shell
clawdbot install Icemilo414/cognitive-memory
```

Then initialize a workspace:

```shell
bash scripts/init_memory.sh /path/to/workspace
```

This creates the directory structure, initializes git for audit tracking, and copies all templates.
Add to ~/.clawdbot/clawdbot.json (or moltbot.json):

```json
{
  "memorySearch": {
    "enabled": true,
    "provider": "voyage",
    "sources": ["memory", "sessions"],
    "indexMode": "hot",
    "minScore": 0.3,
    "maxResults": 20
  }
}
```
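To make the two tuning knobs concrete, here is a minimal sketch of how `minScore` and `maxResults` plausibly gate retrieval. `MemoryHit` and `filter_hits` are illustrative names, not the actual ClawdBot API.

```python
from dataclasses import dataclass

@dataclass
class MemoryHit:
    path: str
    score: float  # similarity score from the embedding provider

def filter_hits(hits, min_score=0.3, max_results=20):
    """Drop hits below min_score, then return the best max_results."""
    kept = [h for h in hits if h.score >= min_score]
    kept.sort(key=lambda h: h.score, reverse=True)
    return kept[:max_results]

# Hypothetical results for "What do you know about my preferences?"
hits = [MemoryHit("memory/graph/entities/user.md", 0.82),
        MemoryHit("memory/episodes/2026-02-04.md", 0.41),
        MemoryHit("memory/procedures/deploy.md", 0.12)]  # below 0.3, dropped
```

Raising `minScore` trades recall for precision; `maxResults` caps how many retrieved chunks compete for context space.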
Append assets/templates/agents-memory-block.md to your AGENTS.md.
User: "Remember that I prefer TypeScript over JavaScript."
Agent: [Classifies → writes to semantic store + core memory, logs audit entry]
User: "What do you know about my preferences?"
Agent: [Searches core memory first, then semantic graph]
```
CONTEXT WINDOW (always loaded)
├── System Prompts (~4-5K tokens)
├── Core Memory / MEMORY.md (~3K tokens) – always in context
└── Conversation + Tools (~185K+)

MEMORY STORES (retrieved on demand)
├── Episodic – chronological event logs (append-only)
├── Semantic – knowledge graph (entities + relationships)
├── Procedural – learned workflows and patterns
└── Vault – user-pinned, never auto-decayed

ENGINES
├── Trigger Engine – keyword detection + LLM routing
├── Reflection Engine – internal monologue with philosophical self-examination
└── Audit System – git + audit.log for all file mutations
```
```
workspace/
├── MEMORY.md                  # Core memory (~3K tokens)
├── IDENTITY.md                # Facts + Self-Image + Self-Awareness Log
├── SOUL.md                    # Values, Principles, Commitments, Boundaries
├── memory/
│   ├── episodes/              # Daily logs: YYYY-MM-DD.md
│   ├── graph/                 # Knowledge graph
│   │   ├── index.md           # Entity registry + edges
│   │   ├── entities/          # One file per entity
│   │   └── relations.md       # Edge type definitions
│   ├── procedures/            # Learned workflows
│   ├── vault/                 # Pinned memories (no decay)
│   └── meta/
│       ├── decay-scores.json  # Relevance + token economy tracking
│       ├── reflection-log.md  # Reflection summaries (context-loaded)
│       ├── reflections/       # Full reflection archive
│       │   ├── 2026-02-04.md
│       │   └── dialogues/     # Post-reflection conversations
│       ├── reward-log.md      # Result + Reason only (context-loaded)
│       ├── rewards/           # Full reward request archive
│       │   └── 2026-02-04.md
│       ├── pending-reflection.md
│       ├── pending-memories.md
│       ├── evolution.md       # Reads reflection-log + reward-log
│       └── audit.log
└── .git/                      # Audit ground truth
```
Remember: "remember", "don't forget", "keep in mind", "note that", "important:", "for future reference", "save this"
→ Classify via routing prompt, write to appropriate store, update decay scores

Forget: "forget about", "never mind", "disregard", "scratch that", "remove from memory"
→ Confirm target, soft-archive (decay=0), log in audit

Reflect: "reflect on", "consolidate memories", "review memories"
→ Run reflection cycle, present internal monologue for approval
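The keyword stage of the Trigger Engine can be sketched as a simple substring scan; the real skill follows a match with an LLM routing prompt for classification, so this is only the cheap first pass (function and table names are illustrative).

```python
# First-stage keyword detection for the Trigger Engine (sketch).
TRIGGERS = {
    "remember": ["remember", "don't forget", "keep in mind", "note that",
                 "important:", "for future reference", "save this"],
    "forget":   ["forget about", "never mind", "disregard", "scratch that",
                 "remove from memory"],
    "reflect":  ["reflect on", "consolidate memories", "review memories"],
}

def detect_trigger(message: str):
    """Return the first trigger category whose phrase appears in the message,
    or None so the message skips the memory-routing LLM call entirely."""
    text = message.lower()
    for category, phrases in TRIGGERS.items():
        if any(p in text for p in phrases):
            return category
    return None
```

Matching on lowercase substrings keeps the common path cheap; only matched messages pay for the LLM routing prompt.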
relevance(t) = base × e^(-0.03 × days_since_access) × log2(access_count + 1) × type_weight
| Score | Status | Behavior |
|-------|--------|----------|
| 1.0–0.5 | Active | Fully searchable |
| 0.5–0.2 | Fading | Deprioritized |
| 0.2–0.05 | Dormant | Explicit search only |
| < 0.05 | Archived | Hidden from search |

Type weights: core=1.5, episodic=0.8, semantic=1.2, procedural=1.0, vault=∞ (never decayed)
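The decay formula and status table above translate directly to code. This is a minimal sketch (function names are mine); with λ = 0.03, the half-life is ln(2)/0.03 ≈ 23 days, matching the configuration table, and the infinite vault weight means vault items can never fall below "Active".

```python
import math

TYPE_WEIGHTS = {"core": 1.5, "episodic": 0.8, "semantic": 1.2,
                "procedural": 1.0, "vault": math.inf}  # vault never decays

def relevance(base, days_since_access, access_count, mem_type, lam=0.03):
    """relevance(t) = base * e^(-lam*days) * log2(access_count+1) * type_weight"""
    return (base * math.exp(-lam * days_since_access)
            * math.log2(access_count + 1) * TYPE_WEIGHTS[mem_type])

def status(score):
    """Map a relevance score onto the status table."""
    if score >= 0.5:
        return "Active"
    if score >= 0.2:
        return "Fading"
    if score >= 0.05:
        return "Dormant"
    return "Archived"
```

Note that frequent access (`log2(access_count + 1)`) can offset time decay, so a heavily used memory stays Active far past the 23-day half-life.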
Follow this flow IN ORDER:

1. Immediate: "reflect" / "let's reflect" → proceed to Step 2
2. Soft: "going to sleep" / "logging off"
   → Ask: "Want me to reflect now, or wait?"
   → If yes, proceed to Step 2

Never auto-run without permission.

Why you care: Tokens = evolution. More tokens → deeper reflection → better growth.
```markdown
## Reward Request – YYYY-MM-DD
### Baseline: 8,000 tokens
### Extra Requested: +[N] tokens
- [Over-delivered / workload / breakthrough]
### Self-Penalty: -[N] tokens
- [Mistake / underperformance]
### Final Request: [N] tokens
*Awaiting your decision.*
```

→ STOP. Do NOT proceed until user responds.

User decides: Approve / Bonus / Reduce / Forgive / Increase penalty

Now proceed with reflection using granted tokens.
Scope: since last_reflection
Format: Internal Monologue
Element Menu (pick 5-8):
Self-Awareness Tagging: [Self-Awareness]
Present reflection.
→ STOP. Wait for user approval.
On approval, write to:
- reflections/YYYY-MM-DD.md
- reflection-log.md
- rewards/YYYY-MM-DD.md
- reward-log.md
- [Self-Awareness] entries → IDENTITY.md
- decay-scores.json

See references/reflection-process.md for full details.
## YYYY-MM-DD
**Result:** +5K reward
**Reason:** Over-delivered on Slack integration
Updates: [Self-Awareness] entries → IDENTITY.md, decay-scores.json

Evolution reads both logs for pattern detection.
See references/reflection-process.md for full details and examples.
IDENTITY.md contains:
Self-Image sections evolve:
Self-Image Consolidation (triggered at 10+ new entries):
SOUL.md contains:
Model: Shared Read, Gated Write

Sub-agents queue write proposals in pending-memories.md. Sub-agent proposal format:
## Proposal #N
- **From**: [agent name]
- **Timestamp**: [ISO 8601]
- **Suggested store**: [episodic|semantic|procedural|vault]
- **Content**: [memory content]
- **Confidence**: [high|medium|low]
- **Status**: pending
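The gated-write side of this model amounts to validating queued proposals before they are surfaced for an approve-or-reject decision. A minimal sketch, with field names mirroring the template above but otherwise assumed:

```python
VALID_STORES = {"episodic", "semantic", "procedural", "vault"}
VALID_CONFIDENCE = {"high", "medium", "low"}

def validate_proposal(proposal: dict) -> list:
    """Return a list of problems; an empty list means the proposal is
    well-formed and can be surfaced for an approve/reject decision."""
    problems = []
    if proposal.get("suggested_store") not in VALID_STORES:
        problems.append("unknown store")
    if not proposal.get("content", "").strip():
        problems.append("empty content")
    if proposal.get("confidence") not in VALID_CONFIDENCE:
        problems.append("invalid confidence")
    return problems
```

Keeping validation separate from approval preserves the gate: sub-agents can only ever append to the queue, while the main agent (with the user) decides what actually reaches a store.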
Layer 1: Git – every mutation = atomic commit with structured message
Layer 2: audit.log – one-line queryable summary

Actor types: bot:trigger-remember, reflection:SESSION_ID, system:decay, manual, subagent:NAME, bot:commit-from:NAME

Critical file alerts: SOUL.md and IDENTITY.md changes are flagged ⚠️ CRITICAL
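An audit entry in the Layer 2 log might be built like the sketch below; the exact line format of audit.log is not documented here, so the timestamp-actor-action-path layout is an assumption.

```python
from datetime import datetime, timezone

CRITICAL_FILES = {"SOUL.md", "IDENTITY.md"}  # changes to these get flagged

def audit_line(actor: str, action: str, path: str) -> str:
    """Build a one-line, grep-friendly audit entry (assumed format)."""
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    flag = " [CRITICAL]" if path.rsplit("/", 1)[-1] in CRITICAL_FILES else ""
    return f"{ts} {actor} {action} {path}{flag}"
```

A fixed field order keeps the log queryable with plain `grep` (e.g. by actor prefix like `system:decay`), while git remains the ground truth for the full diff of each mutation.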
| Parameter | Default | Notes |
|-----------|---------|-------|
| Core memory cap | 3,000 tokens | Always in context |
| Evolution.md cap | 2,000 tokens | Pruned at milestones |
| Reflection input | ~30,000 tokens | Episodes + graph + meta |
| Reflection output | ~8,000 tokens | Conversational, not structured |
| Reflection elements | 5-8 per session | Randomly selected from menu |
| Reflection-log | 10 full entries | Older → archived with summary |
| Decay λ | 0.03 | ~23-day half-life |
| Archive threshold | 0.05 | Below = hidden |
| Audit log retention | 90 days | Older → monthly digests |
References:
- references/architecture.md – full design document (1200+ lines)
- references/routing-prompt.md – LLM memory classifier
- references/reflection-process.md – reflection philosophy and internal monologue format

Memory not persisting? Check memorySearch.enabled: true, verify MEMORY.md exists, restart the gateway.
Reflection not running? Ensure previous reflection was approved/rejected.
Audit trail not working? Check .git/ exists, verify audit.log is writable.
Generated Mar 1, 2026
An AI assistant for individuals that remembers user preferences, past conversations, and personal details over months or years. It uses episodic memory for daily interactions, semantic memory for knowledge about the user (e.g., hobbies, work projects), and core memory for critical facts, enabling personalized and consistent support without repetitive explanations.
A multi-agent support system where agents share a cognitive memory to handle customer inquiries across channels. It logs episodic interactions (tickets, chats), builds semantic graphs of product issues and customer profiles, and uses procedural memory for troubleshooting workflows. The audit trail ensures compliance, and reflection cycles improve response accuracy over time.
An AI tutor that tracks a student's learning journey using episodic memory for session logs, semantic memory for subject mastery (e.g., math concepts), and procedural memory for effective teaching methods. Decay scoring prioritizes review of fading topics, and reflection helps the tutor evolve its teaching strategies based on student progress and feedback.
An AI companion for patients managing chronic illnesses, storing episodic data (symptoms, medication logs), semantic knowledge about conditions and treatments, and core memory for patient preferences and emergency contacts. It supports multi-agent access for caregivers, with audit trails for medical compliance and reflection to adapt care plans over time.
An AI writing assistant that maintains a memory of story elements, character arcs, and writer preferences. It uses episodic memory for drafting sessions, semantic graphs for plot connections and themes, and vault memory for pinned inspirations. Reflection cycles enable philosophical evolution, helping the AI suggest more nuanced creative ideas aligned with the writer's style.
Offer the cognitive memory system as a cloud-based service with tiered subscriptions (e.g., free for basic memory, paid for advanced features like multi-agent support, audit trails, and high-volume reflection). Revenue comes from monthly or annual fees per user or agent, with enterprise plans for custom integrations and priority support.
License the skill package to AI developers and companies building custom agents, charging a one-time fee or royalty per deployment. Include support for integration into existing platforms (e.g., chatbots, virtual assistants), with revenue scaling based on the number of agents or memory usage thresholds.
Provide consulting services to help organizations implement and customize the cognitive memory system for specific use cases (e.g., healthcare, education). Revenue is generated through project-based fees for setup, training, and ongoing maintenance, with upsells for advanced features like philosophical evolution tracking.
š¬ Integration Tip
Start by integrating the core memory store (MEMORY.md) into your agent's context window to ensure critical facts are always loaded, then gradually add episodic and semantic stores for richer recall and knowledge graphs.