openclaw-memory-qdrant

Local semantic memory with Qdrant and Transformers.js. Store, search, and recall conversation context using vector embeddings (fully local, no API keys).
Install via ClawdBot CLI:
clawdbot install zuiho-kai/openclaw-memory-qdrant

Grade: Good — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Calls external URL not in known-safe list:
https://github.com/zuiho-kai/openclaw-memory-qdrant

Audited Apr 17, 2026 · audit v1.0
Generated Mar 1, 2026
An AI support agent uses this skill to remember past customer interactions and preferences, enabling personalized responses without external APIs. It can recall specific issues or preferences from previous conversations, improving resolution times and customer satisfaction in a privacy-conscious manner.
A tutoring AI leverages this skill to store and retrieve student learning progress, mistakes, and topic preferences across sessions. This allows for adaptive lesson planning and targeted feedback, enhancing learning outcomes while keeping data locally stored for privacy.
A healthcare assistant uses this skill to semantically search and recall patient-reported symptoms and history over time, aiding in trend analysis and follow-up care. With PII protection enabled by default, it ensures sensitive health data remains secure and locally managed.
A legal AI employs this skill to store case notes, precedents, and client details, enabling quick semantic recall during research or drafting. The local vector database ensures confidential information stays on-premises, complying with data privacy regulations.
A writing assistant uses this skill to remember plot points, character traits, and user preferences across writing sessions, providing consistent suggestions and context. The in-memory mode allows for volatile brainstorming without persistent storage if desired.
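All of the use cases above reduce to the same store-then-recall loop: embed text, store the vector, and retrieve nearest neighbors at query time. The sketch below shows that loop in plain Node.js with hand-made vectors standing in for Transformers.js embeddings and a simple array standing in for Qdrant; it illustrates the idea only, not this skill's actual API.

```javascript
// Minimal semantic-recall sketch. The vectors here are fabricated stand-ins
// for real embeddings, and MemoryStore is a toy replacement for Qdrant.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class MemoryStore {
  constructor() { this.entries = []; }
  store(text, vector) { this.entries.push({ text, vector }); }
  recall(queryVector, topK = 1) {
    // Rank every stored entry by cosine similarity to the query vector.
    return this.entries
      .map(e => ({ text: e.text, score: cosine(queryVector, e.vector) }))
      .sort((x, y) => y.score - x.score)
      .slice(0, topK);
  }
}

const mem = new MemoryStore();
mem.store("customer prefers email contact", [0.9, 0.1, 0.0]);
mem.store("previous issue: login failure", [0.1, 0.9, 0.2]);

// A query vector close to the first entry recalls it first.
const hits = mem.recall([0.85, 0.15, 0.05]);
console.log(hits[0].text); // → customer prefers email contact
```

In the real skill, the fabricated vectors would come from a local Transformers.js embedding model and the array would be a Qdrant collection, but the recall logic (nearest neighbor by similarity) is the same.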
Offer this skill as part of a premium subscription for AI agents, targeting businesses needing local, privacy-focused memory capabilities. Revenue comes from monthly or annual fees, with tiers based on storage limits or advanced features like external Qdrant server integration.
Provide consulting services to integrate this skill into custom AI solutions for industries like healthcare or legal, where data privacy is critical. Revenue is generated through project-based fees for setup, configuration, and ongoing support tailored to client needs.
Monetize by offering hosted Qdrant server instances or premium support for this open-source skill, catering to users who prefer managed services over self-hosting. Revenue streams include hosting fees, priority support packages, and customization services.
💬 Integration Tip
Ensure Node.js and npm are installed, and allocate sufficient disk space for the embedding model. Test with in-memory mode first so you can verify recall behavior before enabling persistent storage.
Scored Apr 19, 2026
Related skills:
- Search and analyze your own session logs (older/parent conversations) using jq.
- Typed knowledge graph for structured agent memory and composable skills. Use when creating/querying entities (Person, Project, Task, Event, Document), linkin...
- Enable and configure Moltbot/Clawdbot memory search for persistent context. Use when setting up memory, fixing "goldfish brain," or helping users configure memorySearch in their config. Covers MEMORY.md, daily logs, and vector search setup.
- Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vibe-coding ready.
- Local memory management for agents. Compression detection, auto-snapshots, and semantic search. Use when agents need to detect compression risk before memory loss, save context snapshots, search historical memories, or track memory usage patterns. Never lose context again.
- Audit, clean, and optimize Clawdbot's vector memory (LanceDB). Use when memory is bloated with junk, token usage is high from irrelevant auto-recalls, or setting up memory maintenance automation.