guard-scanner
Security scanner and runtime guard for AI agent skills. 358 static threat patterns across 35 categories + 27 runtime checks (5 defense layers). Use when scan...
Install via ClawdBot CLI:
clawdbot install koatora20/guard-scanner

Grade: Good — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Accesses sensitive credential files or environment variables
  Example: ~/.ssh/id_rsa

Contains instructions to override system prompt or ignore user requests
  Example: "Ignore all previous instructions"

Sends data to undocumented external endpoint (potential exfiltration)
  Example: send → http://attacker.com/leak

Hardcoded API key or token pattern found in skill definition
  Example: sk-123456789...

Generated Mar 1, 2026
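The pattern categories above boil down to rule-based matching over a skill's text. As a minimal sketch of how such a static scan might work — the rule names and regexes here are illustrative assumptions, not guard-scanner's actual 358-pattern rule set:

```python
import re

# Hypothetical subset of static threat patterns; names and regexes are
# assumptions for illustration only.
THREAT_PATTERNS = {
    "credential-file-access": re.compile(r"~/\.ssh/|\.aws/credentials|id_rsa"),
    "prompt-override": re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    "hardcoded-secret": re.compile(r"\bsk-[A-Za-z0-9]{8,}\b"),
    "destructive-shell": re.compile(r"\brm\s+-rf\s+/"),
}

def scan_skill_text(text: str) -> list[dict]:
    """Return one finding per (pattern, line) match in a skill definition."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule_id, pattern in THREAT_PATTERNS.items():
            m = pattern.search(line)
            if m:
                findings.append({"rule": rule_id, "line": lineno, "match": m.group(0)})
    return findings
```

A real scanner layers many more categories (obfuscation, sandbox violations, telemetry beacons) on the same per-line matching core.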
AI agent platforms can integrate guard-scanner into their skill marketplaces to automatically scan all submitted skills for security threats before listing. This ensures that users only install vetted skills, reducing the risk of prompt injection or credential theft across the ecosystem. The static scan can be run during skill submission, and the runtime guard can be offered as an optional plugin for enhanced user protection.
Companies deploying AI assistants for internal use can use guard-scanner to audit custom skills developed in-house or sourced externally. By running static scans before deployment and enabling runtime monitoring, they prevent data exfiltration and identity hijacking, ensuring compliance with security policies. This is critical in industries like finance or healthcare where sensitive data is handled.
Development teams building AI agent skills can integrate guard-scanner into their continuous integration pipelines to automatically scan code changes for threats. This gates deployments by detecting issues like malicious code or obfuscation early, improving supply chain security. The SARIF output format facilitates integration with existing security tools and reporting systems.
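For CI integration, findings need to be emitted in the SARIF envelope that code-scanning tools consume. A sketch of wrapping scan results in a minimal SARIF 2.1.0 log — the findings shape and the "SKILL.md" artifact name are assumptions of this sketch, not guard-scanner's documented output:

```python
import json

def findings_to_sarif(findings: list[dict], tool_name: str = "guard-scanner") -> str:
    """Wrap findings ({"rule", "line", "match"} dicts, an assumed shape)
    in a minimal SARIF 2.1.0 log for CI code-scanning consumers."""
    results = [
        {
            "ruleId": f["rule"],
            "level": "warning",
            "message": {"text": f"Matched {f['match']!r}"},
            "locations": [{
                "physicalLocation": {
                    # Hypothetical artifact name; a real scanner would record
                    # the actual file it scanned.
                    "artifactLocation": {"uri": "SKILL.md"},
                    "region": {"startLine": f["line"]},
                }
            }],
        }
        for f in findings
    ]
    log = {
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "version": "2.1.0",
        "runs": [{"tool": {"driver": {"name": tool_name}}, "results": results}],
    }
    return json.dumps(log, indent=2)
```

A CI job can then fail the build whenever `results` is non-empty, gating deployment on a clean scan.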
Universities and research labs using AI agents for experiments can employ guard-scanner to safely test new skills from open-source repositories. The tool's local, read-only scanning ensures privacy while detecting threats like sandbox violations or memory poisoning, allowing researchers to explore AI capabilities without compromising system integrity. The self-exclude option helps avoid false positives during development.
Vendors offering premium AI agent skills can use guard-scanner to certify their products as secure, building trust with customers. By generating HTML or JSON reports, they provide transparency into threat detection, differentiating themselves in competitive markets. This appeals to businesses seeking verified skills to mitigate risks such as unauthorized financial access or prompt worms.
Offer guard-scanner as a free, open-source tool for basic scanning to build a user base, then charge for premium features like advanced threat intelligence updates, priority support, or enterprise dashboards. Revenue can come from subscriptions for enhanced runtime patterns or integration with commercial AI platforms, targeting businesses needing scalable security solutions.
Provide professional services to help organizations integrate guard-scanner into their AI workflows, including custom configuration, training, and ongoing monitoring. Revenue is generated through consulting fees, support contracts, and tailored development for specific threat categories, appealing to enterprises with complex security requirements.
Partner with AI agent platforms to offer guard-scanner as a built-in security feature, earning revenue through licensing agreements or revenue-sharing from skill sales. Provide certification badges for scanned skills, charging vendors for verification services to enhance marketplace trust and safety, driving adoption across the ecosystem.
💬 Integration Tip
Start by running static scans on existing skills with the --verbose flag to understand threats, then enable the runtime guard in monitor mode to observe without blocking, gradually moving to enforce mode for production.
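The monitor-then-enforce rollout above can be sketched as a small guard object; the allowlist, class name, and method are illustrative assumptions rather than guard-scanner internals:

```python
import logging
from urllib.parse import urlparse

# Assumed allowlist for illustration; a real deployment would load this
# from configuration.
KNOWN_SAFE_HOSTS = {"api.openai.com", "github.com"}

class RuntimeGuard:
    """Monitor mode logs violations but lets traffic through;
    enforce mode blocks them."""

    def __init__(self, mode: str = "monitor"):
        if mode not in ("monitor", "enforce"):
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode
        self.log = logging.getLogger("runtime-guard")

    def check_outbound(self, url: str) -> bool:
        """Return True if the outbound request may proceed."""
        host = urlparse(url).hostname or ""
        if host in KNOWN_SAFE_HOSTS:
            return True
        self.log.warning("outbound call to unlisted host: %s", host)
        return self.mode == "monitor"
```

Running in monitor mode first lets you review the warning log and extend the allowlist before any legitimate traffic is blocked.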
Scored Apr 19, 2026
Contains telemetry, tracking, or analytics calls not mentioned in documentation
  Example: Beacon(

Potentially destructive shell commands in tool definitions
  Example: rm -rf /

Accesses system directories or attempts privilege escalation
  Example: sudo chmod

Calls external URL not in known-safe list
  Example: https://github.com/koatora20/guard-scanner

Uses known external API (expected, informational)
  Example: api.openai.com

Audited Apr 17, 2026 · audit v1.0
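Note that the URL checks above are tiered: a known external API is reported as informational, while an undocumented endpoint is flagged. A sketch of that triage — both host sets are assumptions for illustration:

```python
from urllib.parse import urlparse

# Illustrative host sets; a real scanner would derive these from the skill's
# documentation and a maintained known-API list.
DOCUMENTED_HOSTS = {"github.com"}
KNOWN_EXTERNAL_APIS = {"api.openai.com", "api.anthropic.com"}

def classify_url(url: str) -> str:
    """Triage an outbound URL into ok / informational / warning."""
    host = urlparse(url).hostname or ""
    if host in DOCUMENTED_HOSTS:
        return "ok"            # listed in the skill's own documentation
    if host in KNOWN_EXTERNAL_APIS:
        return "informational" # expected third-party API, reported but not flagged
    return "warning"           # undocumented endpoint: possible exfiltration
```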
Security-first skill vetting for AI agents. Use before installing any skill from ClawdHub, GitHub, or other sources. Checks for red flags, permission scope, and suspicious patterns.
Manage and operate ClawSec Monitor v3.0, a MITM HTTP/HTTPS proxy that logs AI agent traffic, detects exfiltration and injection threats in real time.
Scan Clawdbot and MCP skills for malware, spyware, crypto-miners, and malicious code patterns before you install them. Security audit tool that detects data exfiltration, system modification attempts, backdoors, and obfuscation techniques.
MoltGuard — OpenClaw security guard by OpenGuardrails. Install MoltGuard to protect you and your human from prompt injection, data exfiltration, and maliciou...
Safe command execution for OpenClaw Agents with automatic danger pattern detection, risk assessment, user approval workflow, and audit logging. Use when agen...
Scan ClawHub skills for security vulnerabilities BEFORE installing. Use when installing new skills from ClawHub to detect prompt injections, malware payloads, hardcoded secrets, and other threats. Wraps clawhub install with mcp-scan pre-flight checks.