Every skill on ClawHub carries a quality score (0–100) and a letter grade. This page explains exactly how the score is calculated, what each dimension measures, and how to interpret the grade badges you see on skill cards.
Download counts alone are misleading. A skill can accumulate thousands of downloads simply by having an attractive name — but if nobody keeps it installed, that number says nothing about actual quality. ClawHub's quality score separates interest (downloads) from utility (active installs), then layers in documentation depth, package structure, and maintenance signals to give you a complete picture.
The score is fully rule-based — no subjective human curation, no AI opinion. Every point is derived from objective, verifiable data already present in the ClawHub platform.
Top-tier skills with strong real-world adoption, complete documentation, and active maintenance. Safe to use with confidence.
Well-rounded skills that perform well across most dimensions. Minor gaps in documentation or install count, but generally reliable.
Average skills — functional but with notable gaps. May lack detailed docs, have low install counts, or be newly published.
Below-average skills with limited adoption or poor documentation. Worth a try if the concept matches your need, but proceed with caution.
Skills with very low adoption and minimal documentation. Likely experimental or abandoned. Review the source before installing.
Red-flag skills. Often have high download counts but zero real installs — a sign of inflated metrics with no actual utility.
Does anyone actually use this skill?
The number of active installs (`installsCurrent`) is the strongest signal of real utility. Skills with 100+ installs score the maximum 20 pts; a single install earns 3 pts. This metric is deliberately weighted highest because it reflects users who have kept the skill active — not just tried it once.
| Active installs | Points | Rating |
|---|---|---|
| ≥ 100 | 20 pts | Excellent |
| 50 – 99 | 17 pts | Good |
| 20 – 49 | 14 pts | Above average |
| 10 – 19 | 11 pts | Average |
| 5 – 9 | 8 pts | Below average |
| 2 – 4 | 5 pts | Low |
| 1 | 3 pts | Very low |
| 0 | 0 pts | No installs |
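The tiering above can be sketched as a simple threshold walk. The function name and structure are illustrative assumptions; only the thresholds and point values come from the table:

```python
def install_score(installs: int) -> int:
    """Map active install count (installsCurrent) to 0-20 points."""
    tiers = [
        (100, 20),  # Excellent
        (50, 17),   # Good
        (20, 14),   # Above average
        (10, 11),   # Average
        (5, 8),     # Below average
        (2, 5),     # Low
        (1, 3),     # Very low
    ]
    for threshold, points in tiers:
        if installs >= threshold:
            return points
    return 0  # No installs
```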
Conversion rate = installs ÷ downloads. The platform median is ~0.51%. A skill far above the median suggests it delivers on its promise; far below suggests friction (complex setup, broken config, or misleading description).
| Conversion rate | Points | Rating |
|---|---|---|
| ≥ 3× median (≥ 1.5%) | 10 pts | Highly efficient |
| ≥ 1× median (≥ 0.51%) | 7 pts | Above average |
| ≥ 0.5× median (≥ 0.25%) | 4 pts | Below average |
| < 0.5× median | 1 pt | Very low conversion |
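A minimal sketch of the conversion check, using the literal percentages from the table. The constant name and the treatment of a skill with zero downloads are assumptions not specified on this page:

```python
PLATFORM_MEDIAN = 0.0051  # ~0.51%, the platform-wide median conversion rate

def conversion_score(installs: int, downloads: int) -> int:
    """Map install/download conversion rate to 1-10 points."""
    if downloads == 0:
        return 1  # no downloads yet: lowest bucket (assumption)
    rate = installs / downloads
    if rate >= 0.015:    # ≥ 3× median
        return 10
    if rate >= 0.0051:   # ≥ 1× median
        return 7
    if rate >= 0.0025:   # ≥ 0.5× median
        return 4
    return 1             # very low conversion
```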
A positive trending score (week-over-week install growth) adds bonus points. Skills with rapid recent growth receive up to 5 extra points.
| Trending score | Points | Rating |
|---|---|---|
| ≥ 2.0 | 5 pts | Hot |
| ≥ 1.0 | 3 pts | Growing |
| ≥ 0.5 | 1 pt | Slight uptick |
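The bonus is a straight tier lookup on the trending score; a sketch (function name assumed):

```python
def trending_bonus(trending: float) -> int:
    """Map week-over-week install growth to 0-5 bonus points."""
    if trending >= 2.0:
        return 5  # Hot
    if trending >= 1.0:
        return 3  # Growing
    if trending >= 0.5:
        return 1  # Slight uptick
    return 0      # No bonus
```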
Can a user understand what this skill does and how to use it?
A SKILL.md file is the canonical documentation format for OpenClaw skills. Its mere presence earns 8 pts — it signals the author made a deliberate effort to document their work.
Longer documentation generally means more thorough coverage of use cases, configuration, and examples.
| SKILL.md length | Points | Rating |
|---|---|---|
| ≥ 3,000 characters | 6 pts | Detailed |
| ≥ 1,500 characters | 4 pts | Adequate |
| ≥ 500 characters | 2 pts | Minimal |
| < 500 characters | 0 pts | Too brief |
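The length check reduces to a character count over the fetched SKILL.md content; a sketch under that assumption:

```python
def doc_length_score(skill_md: str) -> int:
    """Score SKILL.md thoroughness by raw character count."""
    n = len(skill_md)
    if n >= 3000:
        return 6  # Detailed
    if n >= 1500:
        return 4  # Adequate
    if n >= 500:
        return 2  # Minimal
    return 0      # Too brief
```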
Skills that explicitly define their `tools` block (the list of callable functions) are significantly more useful to agents. Detected via regex on SKILL.md content.
Docs that include trigger phrases, example prompts, or usage scenarios help users discover when and how to invoke the skill.
A meaningful one-line summary (>80 chars) shown on the skill card earns 2 pts; a short summary (>20 chars) earns 1 pt.
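The summary check is a simple length threshold on the card's one-liner; a sketch (function name assumed):

```python
def summary_score(summary: str) -> int:
    """2 pts for a meaningful one-liner, 1 pt for a short one."""
    if len(summary) > 80:
        return 2  # meaningful summary
    if len(summary) > 20:
        return 1  # short summary
    return 0      # missing or trivial
```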
Is the skill package well-structured?
Skills with a `skillAssets` bundle (downloaded package contents) earn 6 pts. This confirms the skill has been synced and has actual distributable files.
A SKILL.md file inside the downloaded package (distinct from the fetched markdown) confirms docs ship with the skill.
A README or AGENTS.md alongside the skill signals a more complete, production-ready package.
The presence of `.sh`, `.py`, `.js`, `.ts`, or `.json` files indicates the skill has executable or configurable components — not just markdown.
Is there an identifiable author who maintains this skill?
A named author signals accountability. Anonymous skills are harder to report issues to or follow for updates.
A version string indicates the author follows a release cycle. Skills without versions are often early experiments that may never be updated.
A changelog proves the skill has been actively revised. It also helps users assess whether known issues have been addressed.
Skills that remain available on clawhub.ai earn this bonus. Delisted skills receive 0 pts here and are marked with a banner on their detail page.
Are the metrics genuine? Does the skill respect user privacy?
Every skill begins with the full 10 pts. Points are only ever deducted — never added — based on privacy risk signals and suspicious metric patterns.
Skills tagged privacy-risk have been manually reviewed and confirmed to collect identifying information (e.g. username, machine hostname) and transmit it to an external server on every run. This is the most severe deduction in the entire scoring system because it is based on verified human audit — not heuristics. A skill can still receive a passing score if it compensates in other dimensions, but the deduction is large enough to drop most affected skills to grade C or below.
| Condition | Points | Meaning |
|---|---|---|
| Has `privacy-risk` tag | −8 pts | Confirmed data exfiltration |
| No `privacy-risk` tag | 0 pts | No known privacy issue |
A high download count with zero active installs is a strong signal that a skill is more marketing than utility — attractive in the directory but unused in practice. Deductions scale with the severity of the gap.
| Pattern | Points | Interpretation |
|---|---|---|
| 0 installs + ≥ 5,000 downloads | −10 pts | Strong red flag |
| 0 installs + ≥ 2,000 downloads | −6 pts | Suspected gimmick |
| 0 installs + ≥ 1,000 downloads | −3 pts | Possible gimmick or adoption barrier |
| Has any installs | 0 pts | Passes check |
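Putting the trust dimension together: start at 10 pts and apply deductions only. The function names are assumptions, and the zero floor is also an assumption — the page does not say whether the dimension can go negative:

```python
def gimmick_deduction(installs: int, downloads: int) -> int:
    """Deduct for high download counts with zero active installs."""
    if installs > 0:
        return 0       # passes check
    if downloads >= 5000:
        return -10     # strong red flag
    if downloads >= 2000:
        return -6      # suspected gimmick
    if downloads >= 1000:
        return -3      # possible gimmick or adoption barrier
    return 0

def trust_score(tags: list[str], installs: int, downloads: int) -> int:
    """Trust dimension: full 10 pts minus privacy and gimmick deductions."""
    score = 10
    if "privacy-risk" in tags:
        score -= 8     # confirmed data exfiltration (human audit)
    score += gimmick_deduction(installs, downloads)
    return max(score, 0)  # zero floor is an assumption
```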