skill-evaluator

Evaluate Clawdbot skills for quality, reliability, and publish-readiness using a multi-framework rubric (ISO 25010, OpenSSF, Shneiderman, agent-specific heuristics). Use when asked to review, audit, evaluate, score, or assess a skill before publishing, or when checking skill quality. Runs automated structural checks and guides manual assessment across 25 criteria.
Install via ClawdBot CLI:

clawdbot install Terwox/skill-evaluator

Grade: Fair — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Generated Mar 22, 2026
Use cases:

- A platform hosting third-party AI skills uses this evaluator to automatically screen submissions for quality and security before listing, ensuring only reliable, well-documented skills are published. It helps maintain trust by flagging issues like missing dependencies or poor error handling.
- A company building internal AI agents for tasks like customer support or data analysis employs the evaluator to audit custom skills for compliance, reliability, and maintainability. It ensures skills meet organizational standards and reduces risk from faulty code.
- An educational institution teaching AI development uses the evaluator as a grading tool for student projects, providing structured feedback across criteria like usability and security. It helps learners understand best practices in skill creation.
- A community-driven repository for AI skills leverages the evaluator to review contributions, prioritize fixes based on P0/P1/P2 findings, and maintain a high-quality codebase. It automates checks for consistency and security vulnerabilities.
- An organization in a regulated industry such as finance or healthcare uses the evaluator to assess AI skills for adherence to standards such as data safety and error reporting, ensuring they meet legal and operational requirements before deployment.
Monetization ideas:

- Offer basic automated evaluations for free to attract users, then charge for detailed manual assessments, premium reports, or certification badges that skills can display on marketplaces. Revenue comes from subscription tiers for advanced features.
- Sell licenses to large organizations for internal use, integrating the evaluator into their development pipelines. Provide customization, support, and compliance reporting as value-added services to generate steady revenue.
- Partner with AI skill marketplaces to embed the evaluator as a mandatory pre-publishing tool, taking a percentage of transaction fees or charging per evaluation. This model leverages network effects and high usage volume.
💬 Integration Tip
Integrate the evaluator into CI/CD pipelines to automate checks on every skill update, and use the JSON output option for easy parsing in reporting tools.
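A CI gate along these lines can be sketched in a few lines of Python. Note that the report schema below (a `findings` array with a `priority` field of `"P0"`/`"P1"`/`"P2"`) is an assumption for illustration, not the evaluator's documented output format; adapt the keys to whatever its JSON option actually emits.

```python
import json

# Hypothetical report shape: we assume the evaluator's JSON output lists
# findings with a "priority" field ("P0", "P1", "P2"). Adjust to the real schema.
SAMPLE_REPORT = """
{
  "skill": "Terwox/skill-evaluator",
  "findings": [
    {"priority": "P0", "message": "missing dependency declaration"},
    {"priority": "P2", "message": "sparse inline documentation"}
  ]
}
"""

def gate(report_json: str, fail_on: str = "P0") -> bool:
    """Return True if the report has no findings at or above the fail_on severity.

    Priorities compare lexically: "P0" < "P1" < "P2", with P0 most severe.
    """
    report = json.loads(report_json)
    blocking = [f for f in report.get("findings", [])
                if f.get("priority", "P2") <= fail_on]
    for f in blocking:
        print(f"[{f['priority']}] {f['message']}")
    return not blocking

if __name__ == "__main__":
    # Exit non-zero on blocking findings so the CI step fails the build.
    raise SystemExit(0 if gate(SAMPLE_REPORT) else 1)
```

In a pipeline, the evaluator's JSON report would be written to a file by an earlier step and fed into `gate`; widening `fail_on` to `"P1"` makes the gate stricter.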
Scored Apr 15, 2026
Related skills:

- Native Windows mouse control (move, click, drag) via user32.dll. Use when the user asks you to move the mouse, click, drag, or automate pointer actions on Windows.
- AI CEO automation system for fully automated company operations.
- Unified notification hub collecting all skill alerts and delivering them by priority.
- Orchestrate the full content workflow (planning → writing → design → publishing → tracking). Use when automating the full content workflow from planning to publishing.
- Set timers and alarms. When a background timer completes, you receive a system notification; respond with the reminder message (NOT HEARTBEAT_OK) to notify the user.
- KasmVNC-based virtual desktop for headless Linux with AI-first automation and human handoff. Use when most steps are automated but a user must manually inter...