# emoclaw

Give your AI emotions that grow from its own memories. Emoclaw builds a unique emotional state that shifts with every conversation, decays between sessions, and evolves over time through self-calibration. Train it on your agent's identity files and watch it develop its own emotional fingerprint.
Install via ClawdBot CLI:
clawdbot install fenrirlabsnl/emoclaw

Emotions for AI agents: built from memories, shaped by relationships, always changing.
Emoclaw trains a lightweight model on your agent's identity and conversation history, producing a persistent emotional state that evolves with every interaction. Emotions decay naturally between sessions, shift based on who's talking and what's being said, and gradually recalibrate as your agent grows. The result is injected into the system prompt as an [EMOTIONAL STATE] block, giving your AI a felt sense of its own inner life.
extract.py reads your agent's identity/memory files. label.py scores each passage via the Claude API (opt-in). train builds a small neural net from those scores. One-time setup. The model is ~2MB, runs on CPU, and adds <50ms per message. Network access is used only during bootstrap (opt-in).
| Situation | Action |
|-----------|--------|
| First-time setup | python scripts/setup.py (or manual steps below) |
| Check current state | python -m emotion_model.scripts.status |
| Inject state into prompt | python -m emotion_model.scripts.inject_state |
| Start the daemon | bash scripts/daemon.sh start |
| Send a message to daemon | See Daemon Protocol |
| Retrain after new data | python -m emotion_model.scripts.train |
| Resume interrupted training | python -m emotion_model.scripts.train --resume |
| Add new training data | Add .jsonl entries to emotion_model/data/, re-run prepare + train |
| Upgrade from v0.1 | See references/upgrading.md |
| Change baselines | Edit emoclaw.yaml → dimensions[].baseline |
| Add a new channel | Edit emoclaw.yaml → channels list |
| Add a relationship | Edit emoclaw.yaml → relationships.known |
| Customize summaries | Create a summary-templates.yaml and point config at it |
python skills/emoclaw/scripts/setup.py
This copies the bundled emotion_model engine to your project root, creates a venv, installs the package, and copies the config template. Then edit emoclaw.yaml to customize for your agent.
If you prefer to set up manually:
cd <project-root>
# Copy engine and pyproject.toml from the skill
cp -r skills/emoclaw/engine/emotion_model ./emotion_model
cp skills/emoclaw/engine/pyproject.toml ./pyproject.toml
# Create venv and install
python3 -m venv emotion_model/.venv
source emotion_model/.venv/bin/activate
pip install -e .
Required: Python 3.10+, PyTorch, sentence-transformers, PyYAML.
cp skills/emoclaw/assets/emoclaw.yaml ./emoclaw.yaml
Edit emoclaw.yaml to set:
- name → your agent's name
- dimensions → emotional dimensions with baselines and decay rates
- relationships.known → map of relationship names to embedding indices
- channels → communication channels your agent uses
- longing → absence-based desire growth (can be disabled)
- model.device → cpu recommended (MPS has issues with sentence-transformers)

See references/config-reference.md for the full schema.
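A minimal emoclaw.yaml touching each of these sections might look like the sketch below. Values are illustrative, and the per-field key names (e.g. decay_half_life_hours) are assumptions; the authoritative schema is in references/config-reference.md.

```yaml
name: my-agent

dimensions:
  - name: valence
    baseline: 0.5
    decay_half_life_hours: 24    # key name illustrative; see config reference
  - name: warmth
    baseline: 0.45
    decay_half_life_hours: 48

relationships:
  known:
    alice: 0      # relationship name -> embedding index
    bob: 1

channels:
  - chat
  - email

longing:
  enabled: false  # absence-based desire growth can be disabled

model:
  device: cpu     # MPS has issues with sentence-transformers
```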
If starting from scratch with identity/memory files:
# Extract passages from your identity files
python scripts/extract.py
# Auto-label passages using Claude API (requires ANTHROPIC_API_KEY)
python scripts/label.py
# Prepare train/val split and train
python -m emotion_model.scripts.prepare_dataset
python -m emotion_model.scripts.train
Or run the full pipeline:
python scripts/bootstrap.py
python -m emotion_model.scripts.status
python -m emotion_model.scripts.diagnose
The daemon loads the model once and listens on a Unix socket, avoiding the ~2s sentence-transformer load time per message.
# Start
bash scripts/daemon.sh start
# Or directly
python -m emotion_model.daemon
python -m emotion_model.daemon --config path/to/emoclaw.yaml
from emotion_model.inference import EmotionEngine
engine = EmotionEngine(
model_path="emotion_model/checkpoints/best_model.pt",
state_path="memory/emotional-state.json",
)
block = engine.process_message(
message_text="Good morning!",
sender="alice", # or None for config default
channel="chat", # or None for config default
recent_context="...", # optional conversation context
)
print(block)
For system prompt injection without the daemon:
python -m emotion_model.scripts.inject_state
This reads the persisted state, applies time-based decay, and outputs the [EMOTIONAL STATE] block.
Add the output block to your system prompt. The block format:
[EMOTIONAL STATE]
Valence: 0.55 (balanced)
Arousal: 0.35 (balanced)
Dominance: 0.50 (balanced)
Safety: 0.70 (open)
Desire: 0.20 (neutral)
Connection: 0.50 (balanced)
Playfulness: 0.40 (balanced)
Curiosity: 0.50 (balanced)
Warmth: 0.45 (balanced)
Tension: 0.20 (relaxed)
Groundedness: 0.60 (balanced)
This feels like: present, alive, between one thing and the next
[/EMOTIONAL STATE]
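If a downstream consumer needs the numbers rather than the text, the block format above is easy to parse. This is a hypothetical helper (not part of emoclaw), written against the example format shown in this README:

```python
import re

def parse_state_block(block: str) -> dict:
    """Parse dimension lines like 'Valence: 0.55 (balanced)' into a dict.

    Lines that don't match the 'Name: score (label)' shape, such as the
    [EMOTIONAL STATE] markers and the 'This feels like:' summary, are skipped.
    """
    dims = {}
    for name, value in re.findall(r"^(\w+): ([0-9.]+) \(", block, flags=re.M):
        dims[name.lower()] = float(value)
    return dims

example = """[EMOTIONAL STATE]
Valence: 0.55 (balanced)
Tension: 0.20 (relaxed)
This feels like: present, alive, between one thing and the next
[/EMOTIONAL STATE]"""
```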
Send JSON over the Unix socket:
{"text": "Good morning!", "sender": "alice", "channel": "chat"}
Special commands:
{"command": "ping"}
{"command": "state"}
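A minimal client for this protocol could look like the sketch below. The default socket path follows the /tmp/{name}-emotion.sock convention with "agent" as a placeholder name, and a single-JSON-object reply is assumed; check your emoclaw.yaml → paths.socket_path for the real path.

```python
import json
import socket

def send_to_daemon(payload: dict, sock_path: str = "/tmp/agent-emotion.sock") -> dict:
    """Send one JSON message over the daemon's Unix socket and return the reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(payload).encode("utf-8"))
        return json.loads(s.recv(65536))

# With the daemon running:
#   send_to_daemon({"text": "Good morning!", "sender": "alice", "channel": "chat"})
#   send_to_daemon({"command": "ping"})
```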
The emotional state decays over time and needs to be refreshed at each session start. Add this entry to your HEARTBEAT.md:
- task: Refresh emotional state
schedule: session_start
run: python skills/emoclaw/scripts/inject_state.py
inject: system_prompt # append output as [EMOTIONAL STATE] block
Or call the daemon / inject_state script from your heartbeat/cron:
# In your heartbeat script
STATE_BLOCK=$(python -m emotion_model.scripts.inject_state 2>/dev/null)
# Inject $STATE_BLOCK into system prompt
Important: Without heartbeat integration, the emotional state block will go stale between sessions. The inject_state script applies time-based decay and outputs the current state; it must be called at least once per session.
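The time-based decay can be pictured as exponential decay toward each dimension's baseline, governed by the per-dimension half-life configured in emoclaw.yaml. This is a sketch of that idea, not the actual emotion_model implementation:

```python
def decay_toward_baseline(value: float, baseline: float,
                          half_life_hours: float, elapsed_hours: float) -> float:
    """Move a dimension's value halfway back to baseline per half-life elapsed."""
    factor = 0.5 ** (elapsed_hours / half_life_hours)
    return baseline + (value - baseline) * factor

# After one full half-life, an excited dimension is halfway back to baseline:
# decay_toward_baseline(1.0, 0.5, 24.0, 24.0) -> 0.75
```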
The model processes each message through this pipeline:
Message Text ──▶ [Frozen MiniLM Encoder] ──▶ 384-dim embedding
                                                      │
Conversation Context ──▶ [Feature Builder] ──▶ context vector
                                                      │
Previous Emotion ───────────────────────────▶ emotion vector
                                                      │
                                            ┌─────────┴────────┐
                                            │  Input Project   │
                                            │ (Linear+LN+GELU) │
                                            └─────────┬────────┘
                                                      │
                                            ┌─────────┴────────┐
                                            │       GRU        │
                                            │  (hidden state)  │ ◀── emotional residue
                                            └─────────┬────────┘
                                                      │
                                            ┌─────────┴────────┐
                                            │   Emotion Head   │
                                            │  (MLP+Sigmoid)   │
                                            └─────────┬────────┘
                                                      │
                                          N-dim emotion vector [0,1]
The GRU hidden state persists across sessions; this is the "emotional residue" that carries forward mood, context, and relational memory.
See references/architecture.md for full details.
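As a rough sketch, the pipeline maps onto a small PyTorch module like the one below. Layer sizes and the 11-dimension count are illustrative assumptions (the real hyperparameters come from emoclaw.yaml → model), and this is not the actual emotion_model code:

```python
import torch
import torch.nn as nn

class EmotionCore(nn.Module):
    """Illustrative sketch of the message pipeline, not the real implementation."""
    def __init__(self, embed_dim=384, ctx_dim=16, emo_dim=11, hidden=128):
        super().__init__()
        in_dim = embed_dim + ctx_dim + emo_dim
        self.project = nn.Sequential(            # Input Project: Linear + LN + GELU
            nn.Linear(in_dim, hidden), nn.LayerNorm(hidden), nn.GELU())
        self.gru = nn.GRUCell(hidden, hidden)    # hidden state = emotional residue
        self.head = nn.Sequential(               # Emotion Head: MLP + Sigmoid
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, emo_dim), nn.Sigmoid())

    def forward(self, text_emb, ctx_vec, prev_emotion, residue):
        x = self.project(torch.cat([text_emb, ctx_vec, prev_emotion], dim=-1))
        residue = self.gru(x, residue)           # carried across sessions
        return self.head(residue), residue       # emotion vector in [0, 1]
```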
- Extraction (scripts/extract.py) reads markdown files listed in emoclaw.yaml → bootstrap.source_files and bootstrap.memory_patterns. These are configurable and default to identity/memory files within the repo. Extracted passages are written to emotion_model/data/extracted_passages.jsonl.
- Redaction (bootstrap.redact_patterns) applies patterns that replace API keys, tokens, passwords, and other secrets with [REDACTED]. Default patterns cover Anthropic keys, GitHub PATs, bearer tokens, SSH keys, and generic key=value credentials. Add custom patterns in emoclaw.yaml.
- Labeling (scripts/label.py) is opt-in only. It sends extracted passages to the Anthropic API for emotional scoring and requires both ANTHROPIC_API_KEY and explicit user consent (interactive prompt before any API call). Use --yes to skip the prompt for automation, or --dry-run to preview without any network calls.
- Training (prepare_dataset, train) and the inject_state script make no network calls.

Network access is optional and limited to a single script:
| Script | Network? | Purpose |
|--------|----------|---------|
| extract.py | No | Reads local files only |
| label.py | Yes (opt-in) | Sends passages to Anthropic API |
| prepare_dataset | No | Local data processing |
| train | No | Local model training |
| daemon / inject_state | No | Local inference |
The sentence-transformers encoder downloads model weights on first use (from Hugging Face). After that, it runs from cache with no network needed.
| Path | Purpose | Created by |
|------|---------|------------|
| memory/emotional-state.json | Persisted emotion vector + trajectory | daemon / inference |
| emotion_model/data/*.jsonl | Training data (extracted/labeled passages) | extract.py / label.py |
| emotion_model/checkpoints/ | Model weights | train script |
| /tmp/{name}-emotion.sock | Daemon Unix socket | daemon |
The daemon socket is created with permissions 0o660 (owner + group read/write) and cleaned up on shutdown. The socket path is configurable in emoclaw.yaml → paths.socket_path.
extract.py validates that every file path resolves to within the repository root before reading. Symlink chains and ../ sequences that would escape the repo boundary are rejected. This prevents a misconfigured source_files or memory_patterns from reading arbitrary files.
Add or modify patterns in emoclaw.yaml:
bootstrap:
redact_patterns:
- '(?i)sk-ant-[a-zA-Z0-9_-]{20,}' # Anthropic API keys
- '(?i)(?:api[_-]?key|token|secret|password|credential)\s*[:=]\s*\S+'
- 'your-custom-pattern-here'
Set redact_patterns: [] to disable redaction entirely (not recommended).
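The patterns above are plain regexes applied as substitutions. Here is a sketch of that redaction pass (the real implementation reads its list from emoclaw.yaml during extraction and may differ in details):

```python
import re

# Same example patterns as shown above.
REDACT_PATTERNS = [
    r"(?i)sk-ant-[a-zA-Z0-9_-]{20,}",
    r"(?i)(?:api[_-]?key|token|secret|password|credential)\s*[:=]\s*\S+",
]

def redact(text: str) -> str:
    """Replace any match of each pattern with the [REDACTED] placeholder."""
    for pattern in REDACT_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text
```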
Before running the bootstrap:

- Review bootstrap.source_files and bootstrap.memory_patterns in your emoclaw.yaml to ensure only intended files are included.
- Inspect emotion_model/data/extracted_passages.jsonl before running label.py to confirm no sensitive content will be sent externally.

All configuration lives in emoclaw.yaml. The package falls back to built-in defaults if no YAML is found.
Config search order:
1. EMOCLAW_CONFIG environment variable
2. ./emoclaw.yaml (project root)
3. ./skills/emoclaw/emoclaw.yaml

Key sections:
- dimensions → name, labels, baseline, decay half-life, loss weight
- relationships → known senders with embedding indices
- channels → communication channels (determines context vector size)
- longing → absence-based desire modulation
- model → architecture hyperparameters
- training → training hyperparameters
- calibration → self-calibrating baseline drift (opt-in)

See references/config-reference.md for the complete schema.
scripts/extract.py reads identity and memory files, splitting them into labeled passages:
python scripts/extract.py
# Output: emotion_model/data/extracted_passages.jsonl
Source files are configured in emoclaw.yaml → bootstrap.source_files and bootstrap.memory_patterns.
scripts/label.py uses the Claude API to score each passage on every emotion dimension:
export ANTHROPIC_API_KEY=sk-ant-...
python scripts/label.py
# Output: emotion_model/data/passage_labels.jsonl
Each passage gets a 0.0-1.0 score per dimension plus a natural language summary.
python -m emotion_model.scripts.prepare_dataset
python -m emotion_model.scripts.train
To add new training data:
Add .jsonl files to emotion_model/data/ in this format:
{"text": "message text", "labels": {"valence": 0.7, "arousal": 0.4, ...}}
python -m emotion_model.scripts.prepare_dataset
python -m emotion_model.scripts.train
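Appending entries in that format is a one-liner per example. This helper is hypothetical (not part of emoclaw); the dimension names in labels must match those configured in emoclaw.yaml:

```python
import json

def append_example(path: str, text: str, labels: dict) -> None:
    """Append one {"text": ..., "labels": ...} training example as a JSONL line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"text": text, "labels": labels}) + "\n")
```

After adding entries, re-run prepare_dataset and train as shown above.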
The training script saves a rich checkpoint (training_checkpoint.pt) that preserves the full optimizer state, learning rate schedule, and early stopping counter. To continue training from where you left off:
# Resume from the last checkpoint automatically
python -m emotion_model.scripts.train --resume
# Or specify a checkpoint file
python -m emotion_model.scripts.train --resume emotion_model/checkpoints/training_checkpoint.pt
This is a true continuation: optimizer momentum, cosine annealing position, and patience counter all pick up exactly where they stopped.
As the AI accumulates real conversation data:
The system is designed to grow with the AI, not remain static.
- references/architecture.md → Model architecture deep-dive
- references/config-reference.md → Full YAML config schema
- references/dimensions.md → Emotion dimension documentation
- references/calibration-guide.md → Baseline, decay, and self-calibration tuning
- references/upgrading.md → Version upgrade guide
- assets/emoclaw.yaml → Template config for new AIs
- assets/summary-templates.yaml → Generic summary templates
- assets/example-summary-templates.yaml → Example personality-specific templates
- engine/ → Bundled emotion_model Python package (copied to project root by setup.py)