bias-assessor

Add bias/risk-of-bias assessment fields to an extraction table and populate them consistently. **Trigger**: bias, risk-of-bias, RoB, evidence quality, 偏倚评估, ...
Install via ClawdBot CLI:

clawdbot install willoscar/bias-assessor

Grade: Fair, based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Generated Apr 26, 2026
Researchers conducting a meta-analysis on drug efficacy need to assess risk of bias (RoB) for each included study. The skill automates adding RoB columns and populating them using a consistent 3-level scale, ensuring transparent and auditable evidence quality before synthesis.
Policy analysts evaluating evidence on community health programs must quickly judge study quality. The skill standardizes RoB assessment across diverse study designs, enabling informed policy recommendations with clear justifications.
Environmental scientists aggregating studies on pollution impacts need to account for methodological biases. The skill ensures uniform RoB scoring and concise notes, facilitating robust conclusions and regulatory compliance.
Education researchers reviewing pedagogical studies require bias assessment to validate findings. The skill adds RoB fields and fills them consistently, supporting evidence-based teaching practices.
Sociologists synthesizing survey-based studies face confounding and measurement biases. The skill helps standardize RoB evaluation, improving credibility and reproducibility of systematic reviews.
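To make the workflow above concrete, here is a minimal sketch in pandas of what adding and populating RoB fields on an extraction table might look like. The column names (`rob_overall`, `rob_notes`), the 3-level labels, and the design-based scoring rule are illustrative assumptions, not the skill's actual schema or logic:

```python
import pandas as pd

# Hypothetical extraction table from a systematic review
df = pd.DataFrame({
    "study_id": ["S01", "S02", "S03"],
    "design": ["RCT", "cohort", "cross-sectional"],
})

# Assumed 3-level risk-of-bias scale (labels are illustrative)
ROB_LEVELS = ["low", "some concerns", "high"]

def assess_rob(design: str) -> str:
    """Toy rule: downgrade non-randomized designs. A real assessment
    would use domain-level judgments, not design alone."""
    if design == "RCT":
        return "low"
    if design == "cohort":
        return "some concerns"
    return "high"

# Add the RoB columns and populate them consistently
df["rob_overall"] = df["design"].map(assess_rob)
df["rob_notes"] = "scored from study design (illustrative rule)"

assert set(df["rob_overall"]) <= set(ROB_LEVELS)
print(df[["study_id", "rob_overall"]])
```

The point is the shape of the output, not the rule: every study gets a value from one fixed scale plus a short justification note, which is what makes the assessment auditable before synthesis.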
Offer the skill as part of a cloud-based platform for systematic review management. Users pay a subscription for automated RoB assessment, saving time and ensuring consistency.
Provide the skill as open-source software, generating revenue through consulting, customization, and training services for organizations needing tailored RoB workflows.
License the skill to academic publishing or systematic review software companies (e.g., Covidence, DistillerSR) as an integrated module, enhancing their offerings.
💬 Integration Tip
Integrate this skill into your systematic review pipeline by running it immediately after generating the extraction table. Ensure your table has consistent column names (e.g., 'study_id') before execution.
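A small pre-flight check along these lines can catch schema problems before the RoB step runs. The function name and the required-column set are hypothetical, assuming the extraction table arrives as CSV with a `study_id` key column:

```python
import io
import pandas as pd

REQUIRED_COLUMNS = {"study_id"}  # assumed key column, per the tip above

def validate_extraction_table(csv_source) -> pd.DataFrame:
    """Fail fast if the extraction table lacks the columns
    the RoB assessment step expects."""
    df = pd.read_csv(csv_source)
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"extraction table missing columns: {sorted(missing)}")
    return df

# Hypothetical table produced by the extraction step
sample = io.StringIO("study_id,outcome\nS01,0.4\nS02,0.7\n")
table = validate_extraction_table(sample)
print(table.shape)  # (2, 2)
```

Running this gate immediately after table generation, as the tip suggests, keeps column-name mismatches from silently producing empty RoB fields downstream.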
Scored Apr 19, 2026