model-usage-skill

Use CodexBar CLI local cost usage to summarize per-model usage for Codex or Claude, including the current (most recent) model or a full model breakdown. Trigger when asked for model-level usage/cost data from codexbar, or when you need a scriptable per-model summary from codexbar cost JSON.
Install via ClawdBot CLI:
clawdbot install JustAskNudge/model-usage-skill

Get per-model usage cost from CodexBar's local cost logs. Supports "current model" (most recent daily entry) or "all models" summaries for Codex or Claude.
TODO: add Linux CLI support guidance once CodexBar CLI install path is documented for Linux.
python {baseDir}/scripts/model_usage.py --provider codex --mode current
python {baseDir}/scripts/model_usage.py --provider codex --mode all
python {baseDir}/scripts/model_usage.py --provider claude --mode all --format json --pretty
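The --mode current behavior above (report the most recent daily entry's model) can be sketched in a few lines. This is a hypothetical illustration, not the skill's actual script; the field names "daily", "date", and "modelBreakdowns" are assumptions about the codexbar cost JSON shape, not a documented schema.

```python
import json

def current_model(cost_json: str) -> str:
    # Parse the cost export and pick the most recent daily entry.
    # ISO 8601 date strings sort correctly as plain strings.
    data = json.loads(cost_json)
    latest = max(data["daily"], key=lambda day: day["date"])
    # Report the first model in that day's breakdown.
    return latest["modelBreakdowns"][0]["model"]

# Hypothetical sample export with two daily entries.
sample = json.dumps({
    "daily": [
        {"date": "2026-02-27", "modelBreakdowns": [{"model": "gpt-5"}]},
        {"date": "2026-02-28", "modelBreakdowns": [{"model": "gpt-5-codex"}]},
    ]
})
print(current_model(sample))  # → gpt-5-codex
```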
The script reads per-model cost from modelBreakdowns and falls back to modelsUsed when breakdowns are missing. Pass --model when you need a specific model. Under the hood it runs codexbar cost --format json --provider <codex|claude>. You can also work from a saved export:
codexbar cost --provider codex --format json > /tmp/cost.json
python {baseDir}/scripts/model_usage.py --input /tmp/cost.json --mode all
cat /tmp/cost.json | python {baseDir}/scripts/model_usage.py --input - --mode current
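The --input flag shown above accepts either a file path or "-" for stdin, a common CLI convention. A minimal sketch of that input handling, assuming nothing about the script's real internals:

```python
import json
import sys

def load_cost_json(path: str) -> dict:
    # "-" follows the --input convention above: read the export from stdin.
    if path == "-":
        return json.load(sys.stdin)
    # Otherwise treat the argument as a path to a saved codexbar export.
    with open(path, "r", encoding="utf-8") as fh:
        return json.load(fh)
```

This keeps the rest of the script agnostic about where the JSON came from: a live codexbar cost run, a saved /tmp file, or a pipe.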
Output is plain text by default; pass --format json --pretty for formatted JSON. See references/codexbar-cli.md for CLI flags and cost JSON fields. Generated Mar 1, 2026
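The modelBreakdowns-first, modelsUsed-fallback behavior mentioned above could be aggregated like this. Again a sketch under assumed field names (daily, modelBreakdowns, model, costUSD, modelsUsed), not the skill's actual implementation:

```python
from collections import defaultdict

def per_model_totals(data: dict) -> dict:
    # Sum cost per model across all daily entries.
    totals = defaultdict(float)
    for day in data.get("daily", []):
        breakdowns = day.get("modelBreakdowns")
        if breakdowns:
            for entry in breakdowns:
                totals[entry["model"]] += entry.get("costUSD", 0.0)
        else:
            # Fallback: modelsUsed lists names only, so the model is
            # recorded but its cost stays at 0.
            for name in day.get("modelsUsed", []):
                totals.setdefault(name, 0.0)
    return dict(totals)
```

The fallback keeps a model visible in the "all models" summary even on days where the log recorded names without per-model cost.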
A software development team uses multiple AI models like Codex and Claude for code generation and documentation. They need to track per-model costs to optimize usage and allocate budgets across projects, using this skill to generate daily or historical summaries from CodexBar logs.
A freelance AI consultant uses Codex and Claude for client projects and needs to itemize costs by model for accurate billing. This skill helps them extract per-model usage data from CodexBar to create detailed invoices and justify expenses.
A research lab experiments with different AI models for academic studies and must monitor costs per model to manage grant funding. They use this skill to analyze CodexBar data, identifying high-cost models and adjusting experiments for efficiency.
A tech startup leverages AI models for product development and customer support, needing to track usage to control operational costs. This skill provides per-model breakdowns from CodexBar, enabling them to switch models based on cost-effectiveness and performance.
Companies offer AI-powered tools with tiered subscriptions based on model usage, using this skill to monitor costs per model and set pricing plans. It helps ensure profitability by aligning subscription fees with underlying AI provider expenses.
Service providers bundle AI model usage into managed packages for clients, using this skill to track per-model costs and optimize service delivery. Revenue is generated through fixed-fee contracts with margins based on efficient model selection.
Consultants specialize in helping businesses optimize AI usage, using this skill to audit CodexBar logs and recommend cost-saving strategies. Revenue comes from hourly consulting fees or audit packages based on insights from per-model analysis.
💬 Integration Tip
Ensure CodexBar CLI is installed and configured on macOS, and use the provided Python script with command-line arguments for flexible input from files or stdin.