# lily-memory

Persistent memory plugin for OpenClaw agents. Hybrid SQLite FTS5 keyword + Ollama vector semantic search, with auto-capture, auto-recall, and stuck-detection.
Install via the ClawdBot CLI:

```
clawdbot install kevinodell/lily-memory
```

Gives your agent long-term memory that survives session resets, compaction, and restarts.
Requirements:

- better-sqlite3 npm package (installed via `npm install`)
- nomic-embed-text model (pulled in Ollama) for semantic search

Add the plugin to `openclaw.json`:

```json
{
  "plugins": {
    "slots": { "memory": "lily-memory" },
    "entries": {
      "lily-memory": {
        "enabled": true,
        "config": {
          "dbPath": "~/.openclaw/memory/decisions.db",
          "entities": ["config", "system"]
        }
      }
    }
  }
}
```
Then restart the gateway to load the plugin:

```
openclaw gateway restart
```

## Tools

| Tool | Description |
|------|-------------|
| memory_search | FTS5 keyword search across all facts |
| memory_entity | Look up all facts for a specific entity |
| memory_store | Save a fact to persistent memory |
| memory_semantic_search | Vector similarity search via Ollama |
| memory_add_entity | Register a new entity at runtime |
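The plugin itself is a Node.js package built on better-sqlite3, but the FTS5 keyword search behind `memory_search` can be sketched in a few lines of Python for illustration. The table and column names here are assumptions, not the plugin's actual schema:

```python
import sqlite3

# In-memory database for the sketch; the real plugin persists to dbPath.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE facts USING fts5(entity, content)")
db.executemany(
    "INSERT INTO facts (entity, content) VALUES (?, ?)",
    [
        ("config", "gateway listens on port 8080"),
        ("system", "nightly backup runs at 02:00"),
    ],
)

def memory_search(query: str) -> list[tuple[str, str]]:
    """Keyword search across all stored facts, best matches first."""
    rows = db.execute(
        "SELECT entity, content FROM facts WHERE facts MATCH ? ORDER BY rank",
        (query,),
    )
    return rows.fetchall()

print(memory_search("backup"))  # -> [('system', 'nightly backup runs at 02:00')]
```

`ORDER BY rank` is FTS5's built-in BM25 relevance ordering, which is what makes a single virtual table enough for the keyword half of the hybrid search.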
## Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| dbPath | string | ~/.openclaw/memory/decisions.db | SQLite database path |
| autoRecall | boolean | true | Inject memories before each turn |
| autoCapture | boolean | true | Extract facts from responses |
| maxRecallResults | number | 10 | Max memories per turn |
| maxCapturePerTurn | number | 5 | Max facts per response |
| stuckDetection | boolean | true | Topic repetition detection |
| vectorSearch | boolean | true | Ollama semantic search |
| ollamaUrl | string | http://localhost:11434 | Ollama endpoint |
| embeddingModel | string | nomic-embed-text | Embedding model |
| consolidation | boolean | true | Dedup on startup |
| vectorSimilarityThreshold | number | 0.5 | Min cosine similarity |
| entities | array | [] | Additional entity names |
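The `vectorSimilarityThreshold` option gates which semantic matches count as hits: candidates whose cosine similarity to the query embedding falls below it are dropped. A minimal Python sketch of that filter (the vectors and fact names are made up for illustration):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.5  # vectorSimilarityThreshold default

query_vec = [1.0, 0.0]
candidates = {"fact-a": [0.9, 0.1], "fact-b": [0.0, 1.0]}
kept = [k for k, v in candidates.items()
        if cosine_similarity(query_vec, v) >= THRESHOLD]
print(kept)  # -> ['fact-a']
```

Raising the threshold trades recall for precision: fewer, more on-topic memories get injected per turn.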
**Recall flow:** Extract keywords from the message -> FTS5 + vector search -> merge and deduplicate -> inject as context
**Capture flow:** Regex scan for `entity: key = value` patterns -> validate entity against allowlist -> store to SQLite -> async embed via Ollama
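The capture step above can be sketched in Python. The exact regex and the cap behavior are assumptions about the plugin's internals; only the `entity: key = value` shape, the allowlist check, and the `maxCapturePerTurn` limit come from the description:

```python
import re

ALLOWED_ENTITIES = {"config", "system"}  # the configured entity allowlist
MAX_CAPTURE_PER_TURN = 5                 # maxCapturePerTurn default

# Assumed pattern: "entity: key = value" on its own line.
FACT_RE = re.compile(r"^(\w+):\s*(\S+)\s*=\s*(.+)$", re.MULTILINE)

def capture(response: str) -> list[tuple[str, str, str]]:
    """Extract allowlisted (entity, key, value) facts, capped per turn."""
    facts = []
    for entity, key, value in FACT_RE.findall(response):
        if entity in ALLOWED_ENTITIES:   # validate against allowlist
            facts.append((entity, key, value.strip()))
        if len(facts) >= MAX_CAPTURE_PER_TURN:
            break
    return facts

text = "config: retries = 3\nweather: today = sunny\nsystem: tz = UTC"
print(capture(text))  # -> [('config', 'retries', '3'), ('system', 'tz', 'UTC')]
```

Note that the `weather` fact is silently discarded: only entities registered in the config (or added via `memory_add_entity`) are eligible for storage.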
**Stuck detection:** Track top 5 content words per response -> Jaccard similarity -> if 3+ consecutive responses overlap by more than 60%, inject a Reflexion nudge
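A minimal sketch of that stuck detector, under one plausible reading of "3+ consecutive": three back-to-back comparisons that each exceed 60% overlap. The stopword list and tokenizer are illustrative assumptions:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "to", "and", "of", "in", "it", "that"}

def top_words(text: str, n: int = 5) -> set[str]:
    """Top n content words (stopwords removed) of a response."""
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    return {w for w, _ in Counter(words).most_common(n)}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

class StuckDetector:
    """Flags topic repetition across consecutive responses."""
    def __init__(self):
        self.prev = None
        self.streak = 0

    def check(self, response: str) -> bool:
        words = top_words(response)
        if self.prev is not None and jaccard(words, self.prev) > 0.6:
            self.streak += 1
        else:
            self.streak = 0   # any fresh topic resets the counter
        self.prev = words
        return self.streak >= 3  # time for a Reflexion nudge

d = StuckDetector()
looping = [d.check("try restarting the gateway again") for _ in range(4)]
print(looping)  # -> [False, False, False, True]
```

Because Jaccard similarity is computed over small word sets rather than embeddings, this check is cheap enough to run on every turn.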
## License

MIT
Generated Mar 1, 2026
## Use cases

- **Customer support:** Integrate Lily Memory into a support chatbot to maintain persistent memory of user issues and resolutions across sessions, so the agent recalls past interactions, reduces repetition, and personalizes support for recurring customers.
- **Education:** Use the plugin in an AI tutor to capture and recall student progress, preferences, and learning gaps over time, enabling adaptive lesson planning and targeted feedback.
- **Healthcare:** Deploy it in a healthcare chatbot to track patient symptoms, medication adherence, and appointment histories, supporting consistent care recommendations and anomaly flagging in telehealth services.
- **E-commerce:** Implement it in a shopping assistant to store user preferences, past purchases, and browsing behavior, powering hybrid search for personalized product suggestions and trend analysis.
- **Project management:** Integrate it into a project management AI to auto-capture task updates, decisions, and team discussions, making project context recallable and surfacing repetitive issues in remote teams.
## Monetization ideas

- **Hosted service:** Offer Lily Memory as a cloud service with tiered pricing by storage capacity and search features, targeting businesses that want persistent AI memory without managing infrastructure.
- **Enterprise licensing:** Sell perpetual licenses or annual contracts for on-premises deployment, with customization and support, for regulated industries such as healthcare and finance that require data control and compliance.
- **Freemium:** Provide a free tier with basic keyword search and limited storage, charging for vector semantic search, auto-capture, and stuck detection to convert individual developers and small teams as their needs grow.
💬 Integration Tip
Ensure Node.js and the better-sqlite3 package are installed, and configure Ollama if you want vector search. Start with the defaults, then tune `entities` and `vectorSimilarityThreshold` to your use case.