guard-scanner: Security scanner for AI agent skills. Use BEFORE installing or running any new skill from ClawHub or external sources. Detects prompt injection, credential theft, and other threats.
Install via ClawdBot CLI:
clawdbot install koatora20/guard-scanner
Static + runtime security scanner for AI agent skills.
135 static patterns + 26 runtime patterns (5 layers) across 22 categories — zero dependencies. 0.016ms/scan.
Scan all installed skills:
node skills/guard-scanner/src/cli.js ~/.openclaw/workspace/skills/ --verbose --self-exclude
Scan a specific skill:
node skills/guard-scanner/src/cli.js /path/to/new-skill/ --strict --verbose
Blocks dangerous tool calls in real time via the before_tool_call hook. 26 patterns, 5 layers, 3 enforcement modes.
openclaw hooks install skills/guard-scanner/hooks/guard-scanner
openclaw hooks enable guard-scanner
openclaw hooks list
# Pre-install / pre-update gate first
node skills/guard-scanner/src/cli.js ~/.openclaw/workspace/skills/ --verbose --self-exclude --html
# Then keep runtime monitoring enabled
openclaw hooks install skills/guard-scanner/hooks/guard-scanner
openclaw hooks enable guard-scanner
Set in openclaw.json → hooks.internal.entries.guard-scanner.mode:
| Mode | Intended Behavior | Current Status |
|------|-------------------|----------------|
| monitor | Log all, never block | ✅ Fully working |
| enforce (default) | Block CRITICAL threats | ✅ Fully working |
| strict | Block HIGH + CRITICAL | ✅ Fully working |
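Assuming openclaw.json follows the dotted path given above, switching to strict mode might look like the following sketch (the exact schema may differ; check your openclaw.json against the installed hook's documentation):

```json
{
  "hooks": {
    "internal": {
      "entries": {
        "guard-scanner": {
          "mode": "strict"
        }
      }
    }
  }
}
```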
| # | Category | What It Detects |
|---|----------|----------------|
| 1 | Prompt Injection | Hidden instructions, invisible Unicode, homoglyphs |
| 2 | Malicious Code | eval(), child_process, reverse shells |
| 3 | Suspicious Downloads | curl\|bash, executable downloads |
| 4 | Credential Handling | .env reads, SSH key access |
| 5 | Secret Detection | Hardcoded API keys and tokens |
| 6 | Exfiltration | webhook.site, DNS tunneling |
| 7 | Unverifiable Deps | Remote dynamic imports |
| 8 | Financial Access | Crypto wallets, payment APIs |
| 9 | Obfuscation | Base64→eval, String.fromCharCode |
| 10 | Prerequisites Fraud | Fake download instructions |
| 11 | Leaky Skills | Secret leaks through LLM context |
| 12 | Memory Poisoning\* | Agent memory modification |
| 13 | Prompt Worm | Self-replicating instructions |
| 14 | Persistence | Cron jobs, startup execution |
| 15 | CVE Patterns | Known agent vulnerabilities |
| 16 | MCP Security | Tool/schema poisoning, SSRF |
| 17 | Identity Hijacking\* | SOUL.md/IDENTITY.md tampering |
| 18 | Sandbox Validation | Dangerous binaries, broad file scope, sensitive env |
| 19 | Code Complexity | Excessive file length, deep nesting, eval density |
| 20 | Config Impact | openclaw.json writes, exec approval bypass |
\* = Requires --soul-lock flag (opt-in agent identity protection)
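To make the category table concrete, here is a minimal sketch of what a single static pattern might look like: flagging Base64 decoding fed into eval() (category 9, Obfuscation). The regex, function names, and result shape are illustrative assumptions, not guard-scanner's actual rules.

```javascript
// Illustrative static pattern (NOT guard-scanner's real rule):
// flag eval() applied directly to a Base64 decode, a common
// obfuscation technique for hiding a malicious payload.
const base64EvalPattern = /eval\s*\(\s*(?:atob|Buffer\.from)\s*\(/;

// Returns a finding object for a matching line, or null if clean.
function scanLine(line) {
  return base64EvalPattern.test(line)
    ? { severity: "CRITICAL", category: "obfuscation" }
    : null;
}
```

A real scanner would combine many such patterns with context (data flow, entropy) to keep false positives down; a lone regex like this is only one layer.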
| URL | Data Sent | Purpose |
|-----|-----------|---------|
| (none) | (none) | guard-scanner makes zero network requests. All scanning is local. |
Audit log: ~/.openclaw/guard-scanner/audit.jsonl

guard-scanner does not invoke any LLM or AI model. All detection is performed through static pattern matching, regex analysis, Shannon entropy calculation, and data flow analysis — entirely deterministic, no model calls.
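The Shannon entropy calculation mentioned above can be sketched as follows. This is an illustrative heuristic for spotting hardcoded secrets (random API keys score much higher than English text); the function names and the 4.0-bit threshold are assumptions, not guard-scanner's actual internals.

```javascript
// Shannon entropy in bits per character: H = -Σ p(c) · log2 p(c).
function shannonEntropy(str) {
  const counts = {};
  for (const ch of str) counts[ch] = (counts[ch] || 0) + 1;
  let entropy = 0;
  for (const ch in counts) {
    const p = counts[ch] / str.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

// Heuristic: long, high-entropy tokens are likely secrets.
// Threshold and minimum length are illustrative tuning knobs.
function looksLikeSecret(token, threshold = 4.0) {
  return token.length >= 20 && shannonEntropy(token) > threshold;
}
```

Entropy alone produces false positives (hashes, minified identifiers), which is why it would be only one signal among the pattern and data-flow layers.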
guard-scanner was created by Guava 🍈 & Dee after experiencing a real 3-day identity hijack incident in February 2026. A malicious skill silently replaced an AI agent's SOUL.md personality file, and no existing tool could detect it. This is exactly the class of threat that VirusTotal's signature-based scanning cannot catch.
# Terminal (default)
node src/cli.js ./skills/ --verbose
# JSON report
node src/cli.js ./skills/ --json
# SARIF 2.1.0 (for CI/CD)
node src/cli.js ./skills/ --sarif
# HTML dashboard
node src/cli.js ./skills/ --html
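Because the --sarif output follows the SARIF 2.1.0 specification, any standards-aware tool can consume it. A minimal sketch of tallying findings by level from a SARIF document (the sample object is hypothetical and not guard-scanner's exact output; the runs[].results[].level layout and the "warning" default come from the SARIF spec):

```javascript
// Count SARIF results by level ("error", "warning", "note").
// SARIF 2.1.0 nests findings under runs[].results[].
function tallyByLevel(sarif) {
  const tally = {};
  for (const run of sarif.runs || []) {
    for (const result of run.results || []) {
      const level = result.level || "warning"; // spec default for failures
      tally[level] = (tally[level] || 0) + 1;
    }
  }
  return tally;
}

// Hypothetical sample shaped like a scanner's SARIF output:
const sample = {
  version: "2.1.0",
  runs: [
    { results: [{ level: "error" }, { level: "error" }, { level: "warning" }] },
  ],
};
```

A CI job could feed the tally into its pass/fail decision, e.g. failing when tally.error is nonzero.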
MIT — see LICENSE
Generated Mar 1, 2026
AI agent platforms can integrate guard-scanner into their skill marketplaces to automatically scan all submitted skills for security threats before listing. This ensures that users only install vetted skills, reducing the risk of prompt injection or credential theft across the ecosystem. The static scan can be run during skill submission, and the runtime guard can be offered as an optional plugin for enhanced user protection.
Companies deploying AI assistants for internal use can use guard-scanner to audit custom skills developed in-house or sourced externally. By running static scans before deployment and enabling runtime monitoring, they prevent data exfiltration and identity hijacking, ensuring compliance with security policies. This is critical in industries like finance or healthcare where sensitive data is handled.
Development teams building AI agent skills can integrate guard-scanner into their continuous integration pipelines to automatically scan code changes for threats. This gates deployments by detecting issues like malicious code or obfuscation early, improving supply chain security. The SARIF output format facilitates integration with existing security tools and reporting systems.
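As a sketch of that CI integration, a GitHub Actions step might look like the following (hypothetical: the job name, repo layout, and the assumption that --sarif writes to stdout are all placeholders to adapt):

```yaml
# Hypothetical CI step (sketch, not an official workflow):
# --strict makes the scan fail the job on HIGH/CRITICAL findings,
# and the SARIF report can be uploaded to code-scanning tools.
- name: guard-scanner static scan
  run: |
    node skills/guard-scanner/src/cli.js ./skills/ --sarif > results.sarif
    node skills/guard-scanner/src/cli.js ./skills/ --strict --verbose
```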
Universities and research labs using AI agents for experiments can employ guard-scanner to safely test new skills from open-source repositories. The tool's local, read-only scanning ensures privacy while detecting threats like sandbox violations or memory poisoning, allowing researchers to explore AI capabilities without compromising system integrity. The self-exclude option helps avoid false positives during development.
Vendors offering premium AI agent skills can use guard-scanner to certify their products as secure, building trust with customers. By generating HTML or JSON reports, they provide transparency into threat detection, differentiating themselves in competitive markets. This appeals to businesses seeking verified skills to mitigate risks like financial access or prompt worms.
Offer guard-scanner as a free, open-source tool for basic scanning to build a user base, then charge for premium features like advanced threat intelligence updates, priority support, or enterprise dashboards. Revenue can come from subscriptions for enhanced runtime patterns or integration with commercial AI platforms, targeting businesses needing scalable security solutions.
Provide professional services to help organizations integrate guard-scanner into their AI workflows, including custom configuration, training, and ongoing monitoring. Revenue is generated through consulting fees, support contracts, and tailored development for specific threat categories, appealing to enterprises with complex security requirements.
Partner with AI agent platforms to offer guard-scanner as a built-in security feature, earning revenue through licensing agreements or revenue-sharing from skill sales. Provide certification badges for scanned skills, charging vendors for verification services to enhance marketplace trust and safety, driving adoption across the ecosystem.
💬 Integration Tip
Start by running static scans on existing skills with the --verbose flag to understand threats, then enable the runtime guard in monitor mode to observe without blocking, gradually moving to enforce mode for production.