# workspace-analyzer

Analyzes OpenClaw workspace structure and content to identify maintenance needs, bloat, duplicates, and organization issues. Outputs a JSON report for the agent.
Install via ClawdBot CLI:

```shell
clawdbot install zendenho7/workspace-analyzer
```

> "Scans, analyzes, and reports. The agent decides."
A self-improving agent needs a clean workspace. This skill analyzes any OpenClaw workspace for maintenance needs, bloat, duplicate content, broken links, and organization issues.

**Key Principle:** the script analyzes → the agent decides → the agent acts.
```shell
# Clone or copy to your skills folder
cp -r workspace-analyzer/ ~/.openclaw/workspace/skills/
```
```shell
# Full analysis (default)
python3 skills/workspace-analyzer/scripts/analyzer.py

# Quick scan (structure only, no content analysis)
python3 skills/workspace-analyzer/scripts/analyzer.py --quick

# Analyze a specific workspace
python3 skills/workspace-analyzer/scripts/analyzer.py --root /path/to/workspace

# Write the report to a file
python3 skills/workspace-analyzer/scripts/analyzer.py --output report.json
```
```json
{
  "scan_info": {
    "root": "/home/user/.openclaw/workspace",
    "timestamp": "2026-02-21T18:00:00Z",
    "files_scanned": 291
  },
  "core_files_detected": {
    "kai_core": {
      "files": ["SOUL.md", "OPERATING.md", ...],
      "count": 11
    },
    "mission_control": {...},
    "agent_cores": {...},
    "skills": {...}
  },
  "analysis": {
    "SOUL.md": {
      "category": "kai_core",
      "line_count": 450,
      "sections": [...],
      "wiki_links": [...],
      "issues": [...]
    }
  },
  "single_source_validation": {
    "skill_graph": {
      "status": "PASS/FAIL",
      "locations": ["AGENTS.md", "OPERATING.md", "SOUL.md"],
      "recommendation": "Reference all to SUB_CONSCIOUS.md"
    },
    "memory_architecture": {...},
    "image_handling": {...}
  },
  "recommendations": [
    {"action": "FIX_DUPLICATE_CONTENT", "topic": "skill_graph", "files": ["AGENTS.md", "OPERATING.md"], "severity": "CRITICAL"},
    {"action": "REVIEW_BLOAT", "file": "OPERATING.md", "severity": "WARN"},
    {"action": "CHECK_DUPLICATE", "severity": "WARN"},
    {"action": "CHECK_BROKEN_LINK", "severity": "INFO"}
  ],
  "summary": {
    "total_files": 291,
    "total_issues": 17,
    "total_recommendations": 25
  }
}
```
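A typical consumer of this report loads the JSON and surfaces the highest-severity items first. A minimal sketch (the `report` literal below is a stand-in for `json.load(open("report.json"))`, trimmed to the fields it touches):

```python
# Stand-in for a loaded report.json, matching the shape shown above.
report = {
    "recommendations": [
        {"action": "FIX_DUPLICATE_CONTENT", "topic": "skill_graph",
         "files": ["AGENTS.md", "OPERATING.md"], "severity": "CRITICAL"},
        {"action": "REVIEW_BLOAT", "file": "OPERATING.md", "severity": "WARN"},
    ],
    "summary": {"total_files": 291, "total_issues": 17},
}

# Surface CRITICAL recommendations before anything else.
critical = [r for r in report["recommendations"] if r["severity"] == "CRITICAL"]
for rec in critical:
    # Duplicate-content items carry "files"; single-file items carry "file".
    print(rec["action"], rec.get("files") or [rec.get("file")])
```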
The analyzer validates that each topic exists in exactly ONE place:
| Topic | Expected Single Source |
|-------|----------------------|
| Skill Graph | SUB_CONSCIOUS.md |
| Memory Architecture | OPERATING.md or SUB_CONSCIOUS.md |
| Message Reactions | SUB_CONSCIOUS.md |
| Image Handling | OPERATING.md |
| Session Bootstrap | AGENTS.md |
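The check behind this table can be sketched as follows. This is a hypothetical re-implementation, not analyzer.py's actual heuristic: it matches a plain lowercase keyword per topic and passes only when the topic appears in exactly its expected file.

```python
from pathlib import Path

# Topic keyword -> expected single-source file (illustrative subset).
EXPECTED = {"skill graph": "SUB_CONSCIOUS.md", "image handling": "OPERATING.md"}

def validate_single_source(root: Path) -> dict:
    results = {}
    for topic, expected in EXPECTED.items():
        # Every root-level markdown file mentioning the topic.
        locations = sorted(p.name for p in root.glob("*.md")
                           if topic in p.read_text().lower())
        results[topic] = {
            "status": "PASS" if locations == [expected] else "FAIL",
            "locations": locations,
        }
    return results
```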
```json
{
  "single_source_validation": {
    "skill_graph": {
      "status": "FAIL",
      "locations": ["AGENTS.md:285", "OPERATING.md:56", "SOUL.md:52"],
      "severity": "CRITICAL",
      "recommendation": "Consolidate to SUB_CONSCIOUS.md, reference from others"
    },
    "memory_architecture": {
      "status": "FAIL",
      "locations": ["AGENTS.md:157", "OPERATING.md:74", "OPERATING.md:193"],
      "severity": "CRITICAL",
      "recommendation": "Remove duplicate section at OPERATING.md:193"
    }
  }
}
```
Automatically detects core files based on location patterns:
| Category | Pattern | Example |
|----------|---------|---------|
| KAI Core | Root *.md | SOUL.md, OPERATING.md |
| Mission Control | mission_control/*GUIDELINES.md | MISSION_CONTROL_GUIDELINES.md |
| Agent Cores | mission_control/agents/\*/\*.md | designer/SOUL.md |
| Skills | skills/*/SKILL.md | react-expert/SKILL.md |
| SUB_CONSCIOUS | Root SUB_CONSCIOUS.md | SUB_CONSCIOUS.md |
Bloat thresholds vary by category:
| Category | Warning (lines) | Critical (lines) |
|----------|-----------------|------------------|
| kai_core | 400 | 600 |
| mission_control | 500 | 800 |
| agent_cores | 300 | 500 |
| skills | 600 | 1000 |
| memory | 500 | 800 |
| docs | 400 | 600 |
| SUB_CONSCIOUS | 100 | 200 |
The analyzer generates actionable recommendations:
| Action | Severity | Description | What To Do |
|--------|----------|-------------|------------|
| FIX_DUPLICATE_CONTENT | CRITICAL | Same content in 2+ files | Consolidate to single source |
| REVIEW_BLOAT | WARN/CRITICAL | File is too large | Review if legitimate or split |
| REVIEW_ORPHAN | INFO | File hasn't been modified | Archive if no longer needed |
| CHECK_DUPLICATE | WARN | Potential duplicate files | Verify if intentional or merge |
| CHECK_BROKEN_LINK | INFO | Wiki-link may be broken | Verify if skill exists |
| CHECK_MISSING | WARN | Expected core files not found | Create if needed |
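The CHECK_BROKEN_LINK pass might look like this sketch. The resolution rule (a `[[target]]` resolves to a file at the workspace root or a `skills/<name>/` directory) is an assumption, not analyzer.py's documented behavior:

```python
import re
from pathlib import Path

WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def broken_wiki_links(md_text: str, root: Path) -> list[str]:
    """Return wiki-link targets that resolve to nothing in the workspace."""
    broken = []
    for target in WIKI_LINK.findall(md_text):
        candidates = (root / target, root / "skills" / target)
        if not any(c.exists() for c in candidates):
            broken.append(target)
    return broken
```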
When duplicate content is detected, follow these steps:

**Step 1: Choose the single source.** Determine which file should own the topic:

| Topic | Should Be In |
|-------|-------------|
| Reflex behaviors | SUB_CONSCIOUS.md |
| Session bootstrap | AGENTS.md |
| Procedures | OPERATING.md |
| Identity principles | SOUL.md |
| Knowledge index | KNOWLEDGE_GRAPH.md |

**Step 2: Replace duplicates with references.** In every other file, swap the duplicated section for a pointer to the single source:

```markdown
**Topic:** See [[SUB_CONSCIOUS.md]] for procedures.
```

**Step 3: Verify bootstrap injection.** Ensure any newly consolidated file is injected at session start by checking `~/.openclaw/openclaw.json`:
```json
{
  "hooks": {
    "internal": {
      "entries": {
        "bootstrap-extra-files": {
          "enabled": true,
          "paths": ["SUB_CONSCIOUS.md", "self-improving/memory.md"]
        }
      }
    }
  }
}
```
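Before committing, you can sanity-check that the files you expect at session start are actually listed in the hook entry. A minimal sketch, assuming the config shape shown above:

```python
import json

def bootstrap_paths(config_text: str) -> list[str]:
    """Return the paths injected at session start, or [] if the hook is off."""
    entry = (json.loads(config_text)
             .get("hooks", {}).get("internal", {})
             .get("entries", {}).get("bootstrap-extra-files", {}))
    return entry.get("paths", []) if entry.get("enabled") else []
```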
```shell
git add -A
git commit -m "Fix duplicate content - consolidate to single source"
```
**Step 1: Prioritize by Severity**

- CRITICAL → review immediately
- WARN → review during next maintenance
- INFO → review during weekly cleanup

**Step 2: Check Single Source Validation**

- FAIL → fix duplicate content first (most important)
- PASS → move on to other issues

**Step 3: Understand Context**

**Step 4: Take Action**
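Step 1 above is a simple stable sort over the report's recommendations (`SEVERITY_RANK` is an illustrative helper, not part of analyzer.py):

```python
# Order recommendations CRITICAL -> WARN -> INFO; unknown severities last.
SEVERITY_RANK = {"CRITICAL": 0, "WARN": 1, "INFO": 2}

def prioritize(recommendations: list[dict]) -> list[dict]:
    # sorted() is stable, so ties keep their original report order.
    return sorted(recommendations,
                  key=lambda r: SEVERITY_RANK.get(r["severity"], 99))
```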
Add to your HEARTBEAT.md maintenance section:
```markdown
## 7. Memory + Workspace Maintenance

### Run Workspace Analyzer

python3 skills/workspace-analyzer/scripts/analyzer.py --output /tmp/analysis.json

### Review Single Source Validation

- Check for DUPLICATE_CONTENT issues
- Fix by consolidating to single source

### Review Recommendations

- Check recommendations in output
- Prioritize by severity
- Fix issues manually
```
Save analysis results for later review:
```shell
python3 skills/workspace-analyzer/scripts/analyzer.py \
  --output memory/$(date +%Y-%m-%d)-workspace-analysis.json
```
Example validation output:

```json
{
  "single_source_validation": {
    "skill_graph": {
      "status": "FAIL",
      "files": ["AGENTS.md", "OPERATING.md", "SOUL.md"],
      "severity": "CRITICAL",
      "recommendation": "Reference all to SUB_CONSCIOUS.md"
    },
    "memory_architecture": {
      "status": "FAIL",
      "files": ["AGENTS.md", "OPERATING.md"],
      "severity": "CRITICAL",
      "recommendation": "Remove duplicate in OPERATING.md"
    },
    "message_reactions": {
      "status": "PASS",
      "files": ["SUB_CONSCIOUS.md"],
      "severity": "OK"
    }
  }
}
```
Example recommendations:

```json
[
  {
    "action": "FIX_DUPLICATE_CONTENT",
    "topic": "skill_graph",
    "files": ["AGENTS.md:285", "OPERATING.md:56", "SOUL.md:52"],
    "severity": "CRITICAL",
    "recommendation": "Replace with reference to SUB_CONSCIOUS.md"
  },
  {
    "action": "REVIEW_BLOAT",
    "file": "OPERATING.md",
    "category": "kai_core",
    "reason": "503 lines - consider splitting (threshold: 400)",
    "severity": "WARN"
  }
]
```
Last updated: 2026-02-24
## Use Cases

- An AI development team uses the skill to regularly scan their OpenClaw workspace for bloat and duplicates, ensuring clean code and documentation. This prevents performance degradation and maintains clarity for agent decision-making.
- A content-heavy organization applies the skill to analyze their documentation repository, identifying redundant sections and broken links across multiple files. This streamlines updates and enforces a single source of truth for policies.
- A software engineering team integrates the skill into their CI/CD pipeline to monitor project structure and markdown files for organizational issues. It flags large files and orphaned content, aiding refactoring efforts.
- An online learning platform uses the skill to analyze course-material workspaces, detecting duplicate lessons and validating links. This ensures consistency and improves the learning experience for students.
- An open-source community employs the skill to audit their project's documentation and skill files, enforcing standards and identifying maintenance needs. This helps volunteers collaborate efficiently and reduce technical debt.
## Monetization Ideas

- Offer the skill as a cloud-based service with automated workspace analysis and reporting dashboards. Charge monthly fees based on workspace size and scan frequency, targeting teams needing continuous maintenance.
- Provide custom integration services to embed the skill into existing AI or development workflows, along with training and support. Revenue comes from one-time project fees and ongoing maintenance contracts.
- Release a free version with basic analysis features to attract individual developers and small teams. Monetize through premium upgrades offering advanced validation, priority support, and batch processing.
> 💬 **Integration Tip:** Integrate the skill into automated workflows using cron jobs or CI/CD triggers to run regular scans, ensuring proactive maintenance without manual intervention.