ollama-memory-embeddings

Configure OpenClaw memory search to use Ollama as the embeddings server (OpenAI-compatible /v1/embeddings) instead of the built-in node-llama-cpp local GGUF loading. Includes interactive model selection and optional import of an existing local embedding GGUF into Ollama.
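For orientation, the change the skill makes amounts to pointing OpenClaw's memory search at Ollama's OpenAI-compatible base URL. A minimal sketch of that kind of config edit follows, assuming a JSON config file; the file path and every field name here are illustrative guesses, not the skill's actual schema:

# Hypothetical sketch only: repoint memory search at a local Ollama server.
# The config path and all field names below are assumptions for illustration;
# the skill edits OpenClaw's real config for you.
cat > ./openclaw.config.json <<'EOF'
{
  "memorySearch": {
    "provider": "openai-compatible",
    "baseUrl": "http://127.0.0.1:11434/v1",
    "model": "nomic-embed-text"
  }
}
EOF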
Install via ClawdBot CLI:
clawdbot install vidarbrekke/ollama-memory-embeddings

Grade: Fair — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Security Scan Findings

Sends data to undocumented external endpoint (potential exfiltration)
POST → http://127.0.0.1:11434/v1/embeddings

Calls external URL not in known-safe list
http://127.0.0.1:11434/v1/

AI Analysis
The skill only configures the local OpenClaw instance to use a local Ollama server (127.0.0.1:11434) for generating embeddings, which is consistent with its stated purpose and does not send data to external networks. The configuration changes are transparent, user-controlled, and there is no evidence of credential harvesting, obfuscation, or hidden malicious instructions.
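Since the flagged endpoint is just Ollama's standard OpenAI-compatible embeddings route on loopback, you can confirm it answers locally before trusting memory search to it. A quick probe, assuming Ollama is running and the model has already been pulled:

# Exercise the same local endpoint the scanner flagged; no external traffic.
curl -s http://127.0.0.1:11434/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "nomic-embed-text", "input": "hello world"}'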
Audited Apr 17, 2026 · audit v1.0
Generated Mar 1, 2026
Use Cases

A research team using OpenClaw for AI development wants to switch from built-in GGUF embeddings to Ollama's embedding models for better performance and model variety. They need to maintain existing memory search functionality while upgrading to more advanced embedding models such as mxbai-embed-large for higher-quality research document retrieval.
A company migrating their internal knowledge base to OpenClaw needs to configure memory search with specific embedding models that match their existing infrastructure. They want to use Ollama's OpenAI-compatible endpoint for easier integration with their existing monitoring and deployment systems, while optionally importing their pre-trained embedding models (the import commands are sketched after these scenarios).
Individual developers using OpenClaw for coding assistance want to switch to faster embedding models like all-minilm for quicker memory search responses. They need a simple way to configure their local setup without disrupting their existing chat/completions functionality while ensuring config drift doesn't break their workflow.
An educational platform using OpenClaw for student assistance needs to configure memory search with embedding models optimized for educational content. They want to use Ollama's embedding server for better scalability and the ability to switch between different embedding models based on subject matter requirements.
A customer support team automating responses with OpenClaw needs to improve their knowledge base retrieval accuracy. They want to upgrade from default embeddings to higher quality models like nomic-embed-text while maintaining the ability to reindex existing memory vectors and monitor configuration health automatically.
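The model management these scenarios lean on maps onto standard Ollama commands. A sketch of pulling a library embedding model and importing an existing local GGUF; the .gguf filename and the created model name are placeholders:

# Pull a ready-made embedding model from the Ollama library.
ollama pull mxbai-embed-large

# Or import an existing local GGUF as a named Ollama model
# (filename and model name below are placeholders).
printf 'FROM ./my-embeddings.gguf\n' > Modelfile
ollama create my-embeddings -f Modelfile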
Business Models

Consultants help organizations configure and optimize their OpenClaw memory search with Ollama embeddings. Services include model selection guidance, performance tuning, and ongoing configuration management using the skill's enforcement and watchdog features for enterprise reliability.
Providers offer fully managed OpenClaw deployments with pre-configured Ollama memory embeddings. This includes automated installation, model updates, drift monitoring, and 24/7 support using the skill's verification and auto-healing capabilities for hands-off operation.
Platforms offer specialized embedding models optimized for different industries that can be easily integrated into OpenClaw via Ollama. The skill's model selection and import capabilities create a distribution channel for model developers to reach OpenClaw users.
💬 Integration Tip
Always run the verification script after installation and consider enabling the watchdog for production environments to automatically detect and fix configuration drift.
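The skill bundles its own verification script and watchdog. Purely as an illustration of what such a drift check does, here is a minimal hand-rolled probe; nothing below is the skill's actual code, and the model name is a placeholder:

#!/usr/bin/env sh
# Fail loudly if the local embeddings endpoint stops answering, which is
# the kind of drift a watchdog like the skill's is meant to catch.
if ! curl -sf http://127.0.0.1:11434/v1/embeddings \
      -H "Content-Type: application/json" \
      -d '{"model": "nomic-embed-text", "input": "healthcheck"}' >/dev/null
then
  echo "embeddings endpoint unreachable: Ollama down or config drift" >&2
  exit 1
fi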
Scored Apr 19, 2026
Related Skills

Search and analyze your own session logs (older/parent conversations) using jq.
Typed knowledge graph for structured agent memory and composable skills. Use when creating/querying entities (Person, Project, Task, Event, Document), linkin...
Enable and configure Moltbot/Clawdbot memory search for persistent context. Use when setting up memory, fixing "goldfish brain," or helping users configure memorySearch in their config. Covers MEMORY.md, daily logs, and vector search setup.
Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vibe-coding ready.
Local memory management for agents. Compression detection, auto-snapshots, and semantic search. Use when agents need to detect compression risk before memory loss, save context snapshots, search historical memories, or track memory usage patterns. Never lose context again.
Audit, clean, and optimize Clawdbot's vector memory (LanceDB). Use when memory is bloated with junk, token usage is high from irrelevant auto-recalls, or setting up memory maintenance automation.