Tags: skill-spotlight · agent-memory · knowledge-graph · clawhub · openclaw

Ontology: The OpenClaw Skill That Gives Your AI Agent a Real Memory

March 7, 2026 · 9 min read

Every AI agent has a memory problem.

Ask your agent "what did we decide about the database schema last Tuesday?" and it doesn't know. Mention that your teammate Sara is the project lead and it forgets by the next session. Build a multi-step workflow where one agent needs to know what another discovered — and you're manually passing context between them like a human relay race.

The standard fix is CLAUDE.md — a project memory file where you write things down and hope the agent reads it. It works, barely. But it's flat text. There are no relationships, no types, no constraints, and no way for multiple agents or skills to safely share structured state.

Ontology by @oswalpalash is a different approach entirely: a typed knowledge graph that runs locally, persists across sessions, and can be read by any skill or agent that knows the query syntax. With 89,000+ downloads and 206 stars, it's the second most-downloaded skill on ClawHub — and most people installing it have no idea how deep it goes.


The Problem With Flat Memory

Before getting into what ontology does, it's worth understanding why existing approaches fall short.

CLAUDE.md: Useful but Unstructured

Project memory files are great for natural language context — coding conventions, project goals, personal preferences. But they're append-only text blobs. You can't query them. You can't enforce that a "Task" always has a status. You can't say "give me all open tasks linked to the authentication project." You definitely can't have two different agents write to the same memory without stepping on each other.

Vector Memory: Semantically Rich, Structurally Blind

Systems like Mem0, Zep, and RAG pipelines store memories as embeddings. They're excellent at "find me things that sound like X." But as one analysis put it: vector memory knows a user likes coffee, while graph memory knows which shop, ordered on which day, mentioned while discussing their morning routine. Vector stores retrieve similar past exchanges but treat each memory independently — the connections between facts are lost.

A concrete illustration: imagine an agent managing a patient's health records. It stores patient_vitals, medication_history, cardiac_risk_factors, and sleep_irregularities as separate memory entries. A second agent independently logs related cardiac biomarker research. The connection between the patient's risk factors and the research? Gone. Each agent is working in isolation, querying its own flat store, missing the relationship that could matter most.

The Graph Difference

A knowledge graph stores entities and the relationships between them. Each new node can link to any existing node, so while the node count grows linearly, the web of potential connections grows combinatorially. And the graph is traversable: ask for everything related to Project Atlas and you surface the team members, related services, open tasks, and deployment dates, all from a single query.


What Ontology Actually Does

Ontology provides a typed vocabulary and constraint system for representing knowledge as a verifiable graph. Everything is an entity with:

  • A type (Person, Task, Project, Document, etc.)
  • Properties (key-value attributes validated against a schema)
  • Relations to other entities (typed links with optional properties)
  • Timestamps (created, updated — temporal awareness built in)

Every mutation is validated against type constraints before it commits. This isn't just nice-to-have — it means you can trust the graph. An agent can't accidentally write a Task without a title or create a circular dependency in a project structure.

The graph persists at memory/ontology/graph.jsonl — plain JSONL on disk, readable by any tool, editable by hand if needed.
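Because the store is plain JSONL, any tool can read it without going through the CLI. A minimal Python sketch of the idea — note that the record fields (`id`, `type`, `props`) are assumptions for illustration, not the skill's documented on-disk format:

```python
# Read a JSONL graph directly and filter entities.
# Field names ("id", "type", "props") are illustrative assumptions.
import json

# Stand-in for the contents of memory/ontology/graph.jsonl
sample = """\
{"id": "task_abc123", "type": "Task", "props": {"title": "Refactor auth module", "status": "open"}}
{"id": "person_sara", "type": "Person", "props": {"name": "Sara", "role": "tech lead"}}
"""

# One JSON object per line — parse each non-empty line independently
entities = [json.loads(line) for line in sample.splitlines() if line.strip()]

# Equivalent of: query --type Task --where '{"status": "open"}'
open_tasks = [e for e in entities
              if e["type"] == "Task" and e["props"].get("status") == "open"]
print([e["id"] for e in open_tasks])  # ['task_abc123']
```

Hand-editing works the same way in reverse: append a valid JSON line and the entity is in the graph.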


Entity Types

Ontology ships with a practical set of pre-defined types covering the most common agent workflows:

People & Organizations

  • Person — individual humans, with name, role, contact info
  • Organization — companies, teams, departments

Work Management

  • Project — a container for goals, tasks, and deliverables
  • Task — atomic work unit (requires title and status: open / in_progress / blocked / done)
  • Goal — higher-level objective that projects or tasks contribute toward

Time & Information

  • Event — time-bound occurrences with start/end
  • Document — files, notes, specs, reports
  • Message, Thread, Note — communication artifacts

Resources

  • Account — authenticated services or credentials (stored as references, not raw values)
  • Device — physical or virtual machines
  • Credential — access tokens and keys (safe reference system)

Meta

  • Action — logged operations, audit trail entries
  • Policy — rules and constraints that govern behavior

You can define custom types in memory/ontology/schema.yaml with your own required fields, enum constraints, and relation cardinality rules.
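As a rough sketch of what a custom type could look like — the exact keys aren't documented here, so this simply mirrors the shape of the built-in Task constraint shown below, with an invented Meeting type:

```yaml
# Hypothetical custom type in memory/ontology/schema.yaml
# (field names mirror the built-in Task definition; Meeting is invented)
Meeting:
  required:
    - title
    - starts_at
  properties:
    status:
      enum: [scheduled, held, cancelled]
  relations:
    ATTENDED_BY:
      target: Person
      cardinality: many_to_many
```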


Commands

Ontology exposes five core operations via a Python CLI:

Create an Entity

python3 scripts/ontology.py create \
  --type Task \
  --props '{"title": "Refactor auth module", "status": "open", "priority": "high"}'
# Returns: { "id": "task_abc123", "type": "Task", ... }

Query Entities

# All open tasks
python3 scripts/ontology.py query \
  --type Task \
  --where '{"status": "open"}'
 
# Find a person by name
python3 scripts/ontology.py query \
  --type Person \
  --where '{"name": "Sara"}'

Relate Two Entities

# Link a task to a project
python3 scripts/ontology.py relate \
  --from task_abc123 \
  --rel BELONGS_TO \
  --to project_xyz789
 
# Assign a person to a task
python3 scripts/ontology.py relate \
  --from person_sara \
  --rel ASSIGNED_TO \
  --to task_abc123

Retrieve a Specific Entity

python3 scripts/ontology.py get --id task_abc123

Validate the Graph

python3 scripts/ontology.py validate
# Checks all constraints, reports violations

The Constraint System

The schema is defined in memory/ontology/schema.yaml. Here's what a Task constraint looks like:

Task:
  required:
    - title
    - status
  properties:
    status:
      enum: [open, in_progress, blocked, done]
    priority:
      enum: [low, medium, high, critical]
  relations:
    BELONGS_TO:
      target: Project
      cardinality: many_to_one
    ASSIGNED_TO:
      target: Person
      cardinality: many_to_many
    BLOCKS:
      acyclic: true  # prevents circular dependencies

The acyclic: true constraint is particularly useful: it means an agent can't accidentally create a cycle where Task A blocks Task B which blocks Task A. The validator catches it before the mutation commits.
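Conceptually, the check is a reachability test: adding an edge A → B closes a cycle exactly when B can already reach A. A minimal Python sketch of that logic — an illustration of the idea, not the skill's actual validator:

```python
# Reject a new BLOCKS edge if the target can already reach the source.
# Illustrative logic only, not the skill's implementation.
def creates_cycle(edges, src, dst):
    """Return True if adding src -> dst would close a cycle.

    edges maps a node id to the list of nodes it points at.
    """
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:          # dst can reach src, so src -> dst cycles
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(edges.get(node, []))
    return False

edges = {"task_a": ["task_b"]}                    # Task A blocks Task B
print(creates_cycle(edges, "task_b", "task_a"))   # True: B -> A closes the loop
print(creates_cycle(edges, "task_b", "task_c"))   # False: no path back to B
```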


Real Use Cases

1. Persistent Project Context Across Sessions

The most immediate use case: your agent knows who's on your team, what's in progress, and what's blocked — every session, without you re-explaining.

# Session 1: set up your project context
python3 scripts/ontology.py create --type Person \
  --props '{"name": "Sara", "role": "tech lead", "email": "sara@example.com"}'
 
python3 scripts/ontology.py create --type Project \
  --props '{"name": "Auth Refactor", "status": "active", "deadline": "2026-04-01"}'
 
python3 scripts/ontology.py relate \
  --from person_sara --rel OWNS --to project_auth

Next session, ask your agent "what's Sara working on?" and it can query the graph rather than asking you to repeat yourself.

2. Cross-Skill State Sharing

This is where ontology becomes genuinely powerful. Multiple skills can read from and write to the same graph, using shared entity IDs as coordination points.

A research skill finds a relevant document and creates a Document entity. A summarization skill reads that entity by ID, generates a summary, and stores it as a property. A notification skill queries for Documents updated today and sends alerts. No shared global state. No race conditions. Each skill queries what it needs.
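The pattern is easy to see in miniature. This toy Python sketch stands in for the three skills sharing one store — the function names, field names, and IDs are all invented for illustration:

```python
# Toy model of cross-skill coordination via shared entity IDs.
# All names here are illustrative, not the skill's API.
store = {}  # stand-in for the shared graph

def research_skill():
    """Creates a Document entity and hands back its ID."""
    store["doc_001"] = {"type": "Document", "props": {"title": "Q1 findings"}}
    return "doc_001"

def summarization_skill(doc_id):
    """Reads the entity by shared ID and stores a summary as a property."""
    doc = store[doc_id]
    doc["props"]["summary"] = f"Summary of {doc['props']['title']}"

def notification_skill():
    """Queries for Documents that now carry a summary."""
    return [eid for eid, e in store.items()
            if e["type"] == "Document" and "summary" in e["props"]]

doc_id = research_skill()
summarization_skill(doc_id)
print(notification_skill())  # ['doc_001']
```

The only contract between the three is the entity ID and the schema — exactly what a typed shared graph provides.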

3. Multi-Step Action Planning as Graph Transformations

Complex workflows — deployments, onboarding sequences, investigation runbooks — can be modeled as graph transformations. Each step creates or relates entities. The graph's current state represents the workflow's progress. Any agent can inspect it and continue from where the previous one left off.

4. Dependency Tracking

# Model service dependencies
python3 scripts/ontology.py create --type Device \
  --props '{"name": "api-gateway", "type": "service"}'
# Returns: { "id": "device_gw123", "type": "Device", ... }

python3 scripts/ontology.py create --type Device \
  --props '{"name": "auth-service", "type": "service"}'
# Returns: { "id": "device_auth456", "type": "Device", ... }

# Link them using the ids returned by the create calls
python3 scripts/ontology.py relate \
  --from device_gw123 --rel DEPENDS_ON --to device_auth456

# Now ask: "what would break if auth-service goes down?"
# The agent traverses the DEPENDS_ON graph instead of guessing
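Answering that question is a reverse traversal over DEPENDS_ON edges: invert the graph, then walk outward from the failed service. A small Python sketch of the idea (a hypothetical helper, not part of the skill's CLI):

```python
# Impact analysis sketch: find everything that transitively
# depends on a failed service. Illustrative code, not the skill's API.
def impacted_by(depends_on, failed):
    """depends_on maps service -> list of services it depends on."""
    # Invert the edges: who depends on each service?
    dependents = {}
    for svc, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(svc)

    # Walk outward from the failed node, collecting dependents
    out, stack = set(), [failed]
    while stack:
        node = stack.pop()
        for svc in dependents.get(node, []):
            if svc not in out:
                out.add(svc)
                stack.append(svc)
    return out

graph = {"api-gateway": ["auth-service"], "web-app": ["api-gateway"]}
print(sorted(impacted_by(graph, "auth-service")))  # ['api-gateway', 'web-app']
```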

Ontology vs. self-improving-agent

These two skills are often installed together, and they serve different but complementary purposes:

| | ontology | self-improving-agent |
|--|--|--|
| What it stores | Structured entities and relationships | Learnings, corrections, error patterns |
| Format | Typed graph (JSONL + schema) | Markdown files |
| Primary use | Shared knowledge state, entity tracking | Agent self-improvement across sessions |
| Queryable? | Yes, by type/property/relation | No (flat text) |
| Multi-agent? | Yes, safe shared reads/writes | No |
| Best for | "Who owns what, what's related to what" | "What mistakes to avoid, what patterns work" |

Use both: self-improving-agent improves how your agent works, ontology structures what it knows.


Installation

clawhub install ontology

After installation, verify the schema and graph files are initialized:

ls memory/ontology/
# graph.jsonl  schema.yaml

Run the validator on a fresh install to confirm everything is healthy:

python3 scripts/ontology.py validate
# All constraints satisfied.

Why 89,000 Downloads?

The honest answer: most agents accumulate knowledge that evaporates. Session to session, between tasks, across a team — context is lost and has to be rebuilt from scratch. Ontology is the first skill that seriously attacks this problem with a structured, queryable, constraint-validated graph that any skill can participate in.

It's not a silver bullet — you have to model your entities, write your schema, and make sure agents actually query the graph instead of asking the user. But once that infrastructure is in place, the improvement in agent reliability and continuity is significant enough that the skill has passed 89,000 downloads and become a permanent part of many setups.

If you haven't tried it, start small: create a few Person and Project entities, run a few queries, and see how it changes the way your agent talks about your work.

clawhub install ontology

Before installing, check the skill's current security status and community reports at clawhub.ai/skills.

← Back to Blog