openclaw-universal-memory
Generic Postgres and pgvector memory layer for connector-agnostic data ingestion, incremental sync, and searchable chunk storage with cursor history.
Install via ClawdBot CLI:
clawdbot install marcosathanasoulis/openclaw-universal-memory
This skill provides a generic memory layer for heterogeneous data:
Incremental sync keeps one cursor per connector/account.
Requires Postgres with the vector extension.
Install with pip install -e . and pip install "psycopg[binary]>=3.2".
The connection string is read from the DATABASE_DSN environment variable by default.
Use a least-privilege database role (SELECT/INSERT/UPDATE/DELETE on um_* tables only).
Store DB credentials once (recommended):
python skills/openclaw-universal-memory/scripts/run_memory.py \
--action configure-dsn
Initialize schema:
python skills/openclaw-universal-memory/scripts/run_memory.py \
--action init-schema \
--dsn-env DATABASE_DSN
Ingest JSON/NDJSON:
python skills/openclaw-universal-memory/scripts/run_memory.py \
--action ingest-json \
--dsn-env DATABASE_DSN \
--source gmail \
--account marcos@athanasoulis.net \
--entity-type email \
--input /path/to/records.ndjson
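An NDJSON input file holds one JSON object per line. A minimal sketch of producing such a file, assuming the field names follow the normalized record contract listed later in this document (the exact set of required keys may differ):

```python
import json

# Hypothetical records; field names mirror the normalized record
# contract (external_id, entity_type, title, body_text, meta_json).
records = [
    {
        "external_id": "msg-1001",
        "entity_type": "email",
        "title": "Q3 planning notes",
        "body_text": "Agenda and action items for the Q3 kickoff.",
        "meta_json": {"from": "alice@example.com"},
    },
]

# One JSON object per line is what "NDJSON" means.
with open("records.ndjson", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

The resulting file can then be passed to `--input` as in the command above.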
Ingest from built-in connectors:
python skills/openclaw-universal-memory/scripts/run_memory.py \
--action ingest-connector \
--connector google \
--account you@example.com \
--dsn-env DATABASE_DSN \
--limit 300
Validate connector auth/config before ingest:
python skills/openclaw-universal-memory/scripts/run_memory.py \
--action validate-connector \
--connector google \
--account you@example.com \
--dsn-env DATABASE_DSN \
--limit 1
Search:
python skills/openclaw-universal-memory/scripts/run_memory.py \
--action search \
--dsn-env DATABASE_DSN \
--query "Deryk" \
--limit 20
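Under the hood, pgvector-style search ranks stored chunks by embedding similarity. A toy stdlib sketch of the ranking step only (in reality the query is embedded by a model and ranked inside Postgres; the vectors and chunk names here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy chunk embeddings; a real store would hold model-generated vectors.
chunks = {
    "chunk-about-deryk": [0.9, 0.1, 0.0],
    "chunk-about-budget": [0.1, 0.9, 0.0],
}
query_vec = [1.0, 0.0, 0.0]  # pretend embedding of the query "Deryk"

# Rank chunks most-similar-first, as a vector index would.
ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
```

The `--limit` flag corresponds to taking the top-k entries of such a ranking.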
Recent ingest history:
python skills/openclaw-universal-memory/scripts/run_memory.py \
--action events \
--dsn-env DATABASE_DSN \
--limit 20
Doctor check:
python skills/openclaw-universal-memory/scripts/run_memory.py \
--action doctor
Scheduling reference:
docs/SCHEDULING.md (cron examples, 15-minute default, connector toggles)
A connector returns normalized records + next cursor:
external_id
entity_type
title
body_text
raw_json
meta_json
next_cursor
This keeps ingestion generic and supports arbitrary source systems.
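The contract can be sketched as a plain function returning records plus a cursor. Everything here is illustrative (the function names and the fake two-page source are not the package's actual API); only the record field names come from the contract above:

```python
from typing import Any, Dict, List, Optional, Tuple

Record = Dict[str, Any]

def pull_page(cursor: Optional[str]) -> Tuple[List[Record], Optional[str]]:
    """Illustrative connector: return normalized records plus the next cursor.

    A real connector would call the source system's API; here two fake
    pages make the cursor handoff visible. next_cursor=None means done.
    """
    pages = {
        None: ([{"external_id": "a1", "entity_type": "note", "title": "first",
                 "body_text": "hello", "raw_json": {}, "meta_json": {}}], "page-2"),
        "page-2": ([{"external_id": "a2", "entity_type": "note", "title": "second",
                     "body_text": "world", "raw_json": {}, "meta_json": {}}], None),
    }
    return pages[cursor]

def sync_all() -> List[Record]:
    """Drain the connector, carrying the cursor between calls."""
    out: List[Record] = []
    cursor: Optional[str] = None
    while True:
        records, cursor = pull_page(cursor)
        out.extend(records)
        if cursor is None:
            return out
```

Persisting the returned cursor between runs is what makes sync incremental: the next run resumes where the last one stopped instead of re-reading the source.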
Starter connector templates:
src/openclaw_memory/connectors/templates.py
Step-by-step setup guide (Gmail/Slack/Asana/iMessage):
docs/CONNECTOR_SETUP_WALKTHROUGH.md
We welcome connector contributions via PR.
See docs/CONNECTOR_CONTRIBUTING.md for required contract, tests, and setup instructions.
Generated Mar 1, 2026
A large corporation uses this skill to ingest data from various internal systems like email, Slack, and project management tools into a unified Postgres schema. This enables building a centralized RAG-ready knowledge base for employees to search across all platforms, improving information retrieval and decision-making.
A customer support team integrates this skill to ingest support tickets, chat logs, and email interactions into pgvector storage. This allows AI agents to quickly search historical data for similar issues, providing faster and more accurate responses to customer inquiries.
A law firm employs this skill to normalize and store legal documents, case files, and correspondence from multiple sources. The incremental cursor history ensures efficient updates, while vector search helps lawyers quickly find relevant precedents and information for cases.
A healthcare provider uses this skill to ingest patient records, lab results, and medical notes from disparate systems into a single schema. This facilitates secure, searchable storage for research and compliance purposes, with connectors tailored to handle sensitive data.
A university research team leverages this skill to aggregate data from academic databases, journals, and internal repositories. The connector-agnostic approach allows normalization of diverse formats, enabling efficient literature reviews and knowledge discovery through vector search.
Offer this skill as a cloud-based service with managed Postgres and pre-built connectors for popular platforms. Charge monthly or annual subscriptions based on data volume and number of connectors, targeting businesses needing scalable memory solutions without infrastructure management.
Provide consulting services to help organizations implement and customize this skill, including developing bespoke connectors and integrating with existing systems. Revenue comes from project-based fees and ongoing support contracts for tailored memory solutions.
Distribute the core skill as open-source under Apache 2.0, while offering enterprise licenses for advanced features, security audits, and premium support. Monetize through licensing fees and dedicated assistance for large-scale deployments in regulated industries.
💬 Integration Tip
Ensure Postgres with the vector extension is properly configured, and keep the DSN in an environment variable rather than hard-coding credentials. Start with a built-in connector such as Google for testing before developing custom adapters.
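The tip above amounts to never hard-coding credentials. A sketch of resolving the DSN from the environment, assuming the documented DATABASE_DSN variable; the PG* fallbacks and defaults are illustrative, not part of the skill:

```python
import os

def resolve_dsn() -> str:
    """Prefer a full DATABASE_DSN; otherwise assemble one from parts.

    Fallback variable names (PGHOST, PGDATABASE, PGUSER) and defaults
    are hypothetical conveniences for this sketch.
    """
    dsn = os.environ.get("DATABASE_DSN")
    if dsn:
        return dsn
    host = os.environ.get("PGHOST", "localhost")
    db = os.environ.get("PGDATABASE", "memory")
    user = os.environ.get("PGUSER", "um_app")
    return f"postgresql://{user}@{host}/{db}"
```

Keeping credentials out of the repository and shell history this way also pairs naturally with the least-privilege um_* table grants described earlier.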