verification-before-completion

Use when about to claim work is complete, fixed, or passing, and before committing or creating PRs. Requires running verification commands and confirming their output before making any success claim: evidence before assertions, always.

Install via ClawdBot CLI:

clawdbot install zlc000190/verification-before-completion

Claiming work is complete without verification is dishonesty, not efficiency.
Core principle: Evidence before claims, always.
Violating the letter of this rule is violating the spirit of this rule.
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
If you haven't run the verification command in this message, you cannot claim it passes.
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying
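The five steps above can be sketched as a single shell gate: name the claim, run the full proving command fresh, read its exit status, and only then report. This is a minimal sketch; `true` and `false` stand in for real commands such as `pytest` or `make`, and `run_and_verify` is a hypothetical helper name.

```shell
# Gate: a claim is only emitted after the proving command runs and exits 0.
run_and_verify() {
  claim="$1"; shift
  if "$@"; then
    echo "VERIFIED: $claim"
  else
    # $? here is the exit status of the command that just failed.
    echo "NOT VERIFIED: $claim (exit $?)"
    return 1
  fi
}

# `true`/`false` are stand-ins for real verification commands.
run_and_verify "tests pass" true
run_and_verify "build succeeds" false || echo "claim withheld: no evidence"
```

Running the command through the gate makes the claim and its evidence inseparable: there is no code path that prints "VERIFIED" without a fresh exit 0.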
| Claim | Requires | Not Sufficient |
|-------|----------|----------------|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
| Excuse | Reality |
|--------|---------|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |
Tests:
✅ [Run test command] [See: 34/34 pass] "All tests pass"
❌ "Should pass now" / "Looks correct"
Regression tests (TDD Red-Green):
✅ Write → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
❌ "I've written a regression test" (without red-green verification)
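The red-green cycle can be simulated in shell. This is a hedged sketch: in a real workflow you would revert the actual fix (for example with `git stash`), while here a `FIX_APPLIED` variable stands in for that state and `run_tests` is a placeholder for your real test command.

```shell
# run_tests passes only when the (simulated) fix is in place.
run_tests() { [ "$FIX_APPLIED" = "yes" ]; }

FIX_APPLIED=yes
run_tests && echo "green: passes with fix"

FIX_APPLIED=no            # revert the fix
if run_tests; then
  echo "BAD: test passes without the fix; it proves nothing"
else
  echo "red: fails without fix, so the test is meaningful"
fi

FIX_APPLIED=yes           # restore the fix
run_tests && echo "green: passes again"
```

The middle step is the one that gets skipped in practice: a regression test that never fails without the fix is not evidence of anything.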
Build:
✅ [Run build] [See: exit 0] "Build passes"
❌ "Linter passed" (linter doesn't check compilation)
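A minimal sketch of the build check, assuming `true` as a stand-in for your project's real build command (e.g. `make` or `cargo build`): the only evidence that counts is the build command's own exit code, not a clean linter run.

```shell
# `true` is a placeholder for the real, full build command.
build_cmd=true
if "$build_cmd"; then
  echo "Build passes (exit 0)"
else
  echo "Build FAILED, claim withheld"
fi
```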
Requirements:
✅ Re-read plan → Create checklist → Verify each → Report gaps or completion
❌ "Tests pass, phase complete"
Agent delegation:
✅ Agent reports success → Check VCS diff → Verify changes → Report actual state
❌ Trust agent report
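The independent check can be as simple as inspecting the VCS diff. This sketch runs in a throwaway git repository to stay self-contained; in practice you would run `git status` / `git diff` in your actual project after the agent reports back, and `check_diff` is a hypothetical helper name.

```shell
# Throwaway repo so the demo is self-contained.
repo=$(mktemp -d) && cd "$repo" && git init -q

agent_report="success"   # what the agent claims

check_diff() {
  # --porcelain gives machine-readable status; empty output means no changes.
  if [ -n "$(git status --porcelain)" ]; then
    echo "verified: working tree has changes to review"
  else
    echo "agent said '$agent_report' but the diff is empty: not verified"
  fi
}

check_diff               # before any change: the claim has no evidence
echo "fix" > patch.txt   # simulate the agent actually doing work
check_diff               # now there is a diff to review
```

An agent's "success" message and an empty diff are contradictory evidence; the diff wins.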
From 24 failure memories:
ALWAYS before:
Rule applies to:
No shortcuts for verification.
Run the command. Read the output. THEN claim the result.
This is non-negotiable.
Generated Mar 1, 2026
Before marking a bug fix as complete in a sprint, developers must run the specific test suite for that bug and verify zero failures. This ensures no false claims in stand-up meetings or pull requests, preventing wasted time on rework.
In a CI/CD workflow, before merging code to the main branch, engineers must execute full build and lint commands locally and confirm exit code 0. This prevents broken builds from being deployed, maintaining system reliability and team trust.
QA testers must run regression tests after each code change, including a red-green cycle to verify failures before fixes. This ensures comprehensive coverage and avoids shipping incomplete features that could lead to customer complaints.
Before declaring a project phase complete, managers must review a line-by-line checklist against requirements and run verification commands like integration tests. This prevents missed deliverables and ensures stakeholder satisfaction.
When delegating tasks to AI agents, users must independently verify agent-reported success by checking version control diffs and running verification commands. This prevents trust issues and ensures accurate task completion in automated workflows.
Offer a subscription-based platform that enforces verification workflows, integrating with tools like GitHub and Jira to automate evidence collection before completion claims. Revenue comes from monthly per-user fees, targeting mid-sized tech companies.
Provide workshops and audits to help organizations implement verification protocols, reducing rework and improving trust. Revenue is generated through hourly consulting rates and packaged training programs for industries like finance and healthcare.
Sell a custom verification software solution to large enterprises needing strict compliance in regulated sectors, with features for audit trails and real-time monitoring. Revenue comes from annual licensing contracts and premium support.
💬 Integration Tip
Integrate this skill into existing CI/CD pipelines by adding mandatory verification steps before merge approvals, using tools like Jenkins or GitLab CI to automate evidence checks.
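One way to wire this into a pipeline is a single verify script that both CI and humans run before any merge; the pipeline fails unless every gate exits 0. This is a sketch under stated assumptions: the three functions are placeholders for your project's real lint, test, and build commands.

```shell
# verify.sh (hypothetical name): called by CI and by developers alike.
set -e                # abort on the first failing gate

lint()   { true; }    # placeholder for e.g. `eslint .` or `ruff check .`
tests()  { true; }    # placeholder for e.g. `pytest` or `go test ./...`
build()  { true; }    # placeholder for e.g. `make` or `npm run build`

lint  && echo "lint: clean"
tests && echo "tests: 0 failures"
build && echo "build: exit 0"
echo "merge gate: PASS"
```

Because `set -e` aborts on the first non-zero exit, the final "PASS" line can only print when every gate actually ran and succeeded.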