safety-checks
Verify before you trust — model pinning, fallbacks, and runtime safety validation
Install via ClawdBot CLI:
clawdbot install leegitw/safety-checks
Grade: Fair — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Audit flag: calls an external URL not in the known-safe list
https://github.com/live-neon/skills/tree/main/agentic/safety-checks
Audited Apr 18, 2026 · audit v1.0
Generated Mar 22, 2026
Customer support: Ensures the support chatbot consistently uses the correct model version to maintain response quality. Validates fallback configurations to guarantee degraded service paths exist if primary models fail, preventing complete system outages during high traffic.
Algorithmic trading: Verifies model pinning to prevent unintended model drift that could lead to risky trading decisions. Checks cache staleness to ensure trading algorithms use up-to-date market data, avoiding losses from outdated information.
Healthcare diagnostics: Validates model version consistency to maintain diagnostic accuracy and regulatory compliance. Detects cross-session state contamination to protect patient data privacy and prevent diagnostic errors from leaked session data.
E-commerce: Monitors cache freshness to ensure product recommendations reflect current inventory and user preferences. Audits fallback chains to maintain service availability during peak shopping seasons, preventing revenue loss from downtime.
Simulation and safety testing: Checks model pinning to ensure simulation environments use the intended AI models for safety testing. Validates session hygiene to prevent state leakage between simulation runs, ensuring accurate and isolated test results.
Subscription platform: Offers the safety-checks skill as part of a premium AI agent platform subscription. Provides regular updates and support for configuration management, targeting enterprises needing reliable AI operations with predictable costs.
Professional services: Provides professional services to integrate the skill into existing AI systems, including custom configuration and training. Focuses on industries with high compliance needs, such as finance and healthcare, to ensure safety standards are met.
Freemium: Offers basic safety checks for free to attract users, with advanced features like strict model pinning and automated enforcement available in paid tiers. Drives adoption among developers and small teams before upselling to larger organizations.
💬 Integration Tip
Start by installing the skill and configuring basic model pinning in your .openclaw/safety-checks.yaml file, then gradually add fallback and cache checks as needed.
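The skill's actual configuration schema is not documented on this page; a minimal sketch of what a staged rollout in `.openclaw/safety-checks.yaml` might look like, assuming hypothetical `model_pinning`, `fallbacks`, and `cache` keys, is:

```yaml
# .openclaw/safety-checks.yaml — hypothetical schema, for illustration only
model_pinning:
  enabled: true
  pinned_model: model-v2.1      # fail if a different version is served
fallbacks:
  enabled: false                # turn on once pinning is stable
  chain: []
cache:
  enabled: false                # add staleness checks last
  max_age_seconds: 300
```

The idea matches the tip above: enable model pinning first, then flip `fallbacks.enabled` and `cache.enabled` to `true` as each check is verified in your environment.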
Scored Apr 19, 2026
Related skills:
Security vetting protocol before installing any AI agent skill. Red flag detection for credential theft, obfuscated code, exfiltration. Risk classification L...
Security-first skill vetting for AI agents. Use before installing any skill from ClawdHub, GitHub, or other sources. Checks for red flags, permission scope,...
Comprehensive security auditing for Clawdbot deployments. Scans for exposed credentials, open ports, weak configs, and vulnerabilities. Auto-fix mode included.
Audit codebases and infrastructure for security issues. Use when scanning dependencies for vulnerabilities, detecting hardcoded secrets, checking OWASP top 10 issues, verifying SSL/TLS, auditing file permissions, or reviewing code for injection and auth flaws.
Audit a user's current AI tool stack. Score each tool by ROI, identify redundancies, gaps, and upgrade opportunities. Produces a structured report with score...
Detect anomalies and outliers in construction data: unusual costs, schedule variances, productivity spikes. Statistical and ML-based detection methods.