Clawatar: Give your AI agent a 3D VRM avatar body with animations, expressions, voice chat, and lip sync. Use it when the user wants a visual avatar, VRM viewer, avatar companion, VTuber-style character, or a 3D character they can talk to. It installs a web-based viewer controllable via WebSocket.
Install via ClawdBot CLI:

```shell
clawdbot install Dongping-Chen/clawatar
```

Or install manually:
```shell
# Clone and install
git clone https://github.com/Dongping-Chen/Clawatar.git ~/.openclaw/workspace/clawatar
cd ~/.openclaw/workspace/clawatar && npm install

# Start (Vite + WebSocket server)
npm run start
```
The viewer opens at http://localhost:3000 with WebSocket control at ws://localhost:8765.

Users must provide their own VRM model (drag & drop it onto the page, or set `model.url` in `clawatar.config.json`).
Send JSON messages to `ws://localhost:8765`.

Play an animation:

```json
{"type": "play_action", "action_id": "161_Waving"}
```

Set a facial expression (available expressions: happy, angry, sad, surprised, relaxed):

```json
{"type": "set_expression", "name": "happy", "weight": 0.8}
```

Speak with TTS lip sync, optionally combined with an animation and expression:

```json
{"type": "speak", "text": "Hello!", "action_id": "161_Waving", "expression": "happy"}
```

Reset to the idle state:

```json
{"type": "reset"}
```
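The messages above are plain JSON, so they are easy to build programmatically. A minimal sketch in Node follows; the helper names (`speakMsg`, `expressionMsg`) are illustrative, not part of Clawatar's API:

```javascript
// Build Clawatar control messages as JSON strings.
// Helper names are illustrative, not part of Clawatar's API.
function speakMsg(text, actionId, expression) {
  return JSON.stringify({ type: 'speak', text, action_id: actionId, expression });
}

function expressionMsg(name, weight) {
  return JSON.stringify({ type: 'set_expression', name, weight });
}

// Sending them requires the `ws` package and a running viewer, e.g.:
// const WebSocket = require('ws');
// const s = new WebSocket('ws://localhost:8765');
// s.on('open', () => { s.send(speakMsg('Hello!', '161_Waving', 'happy')); s.close(); });
```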
| Mood | Action ID |
|------|-----------|
| Greeting | 161_Waving |
| Happy | 116_Happy Hand Gesture |
| Thinking | 88_Thinking |
| Agreeing | 118_Head Nod Yes |
| Disagreeing | 144_Shaking Head No |
| Laughing | 125_Laughing |
| Sad | 142_Sad Idle |
| Dancing | 105_Dancing, 143_Samba Dancing, 164_Ymca Dance |
| Thumbs Up | 153_Standing Thumbs Up |
| Idle | 119_Idle |
Full list: `public/animations/catalog.json` (162 animations).
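The mood cheat-sheet above can be turned into a small lookup so an agent picks a gesture by mood. A sketch, using only action IDs from the table (the idle fallback for unknown moods is an assumption):

```javascript
// Mood-to-animation lookup built from the cheat-sheet table above.
const MOOD_ACTIONS = {
  greeting: '161_Waving',
  happy: '116_Happy Hand Gesture',
  thinking: '88_Thinking',
  agreeing: '118_Head Nod Yes',
  disagreeing: '144_Shaking Head No',
  laughing: '125_Laughing',
  sad: '142_Sad Idle',
  thumbsup: '153_Standing Thumbs Up',
  idle: '119_Idle',
};

// Falling back to the idle animation for unknown moods is an assumption.
function actionForMood(mood) {
  return MOOD_ACTIONS[mood.toLowerCase()] || MOOD_ACTIONS.idle;
}
```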
Quick test from the shell (with the viewer running):

```shell
cd ~/.openclaw/workspace/clawatar && node -e "
const W=require('ws'),s=new W('ws://localhost:8765');
s.on('open',()=>{s.send(JSON.stringify({type:'speak',text:'Hello!',action_id:'161_Waving',expression:'happy'}));setTimeout(()=>s.close(),1000)})
"
```
Edit `clawatar.config.json` to change ports, voice settings, and the model URL. TTS requires an ElevenLabs API key, either in the environment (`ELEVENLABS_API_KEY`) or in `~/.openclaw/openclaw.json` under `skills.entries.sag.apiKey`.
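A minimal config sketch: only `model.url` is documented above; this doc does not spell out the key names for ports or voice settings, so check the shipped `clawatar.config.json` for those.

```json
{
  "model": {
    "url": "https://example.com/avatar.vrm"
  }
}
```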