tokenmeter
Track AI token usage and costs across providers. Import sessions, view the dashboard, break down costs by model, and compare Max plan savings.
Install via ClawdBot CLI:
clawdbot install cheenu1092-oss/tokenmeter
Track your AI token usage and costs across all providers - locally, privately.
/tokenmeter - show today's dashboard
/tokenmeter how much did we spend this week? - weekly cost report
/tokenmeter costs breakdown by model - model split analysis
/tokenmeter import latest sessions - pull in new usage data
/tokenmeter compare max plan savings - show API vs subscription savings

tokenmeter is a CLI tool that tracks LLM API usage and calculates real-time cost estimates. For OpenClaw users on the Claude Max plan, it helps answer the question: how much is the subscription really saving you?
All data stays local (SQLite at ~/.tokenmeter/usage.db). No telemetry, no cloud sync.
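Because everything lives in one SQLite file, you can inspect it directly. A minimal sketch using an in-memory database as a stand-in for ~/.tokenmeter/usage.db; the `usage` table name appears in this doc's troubleshooting section, but the column layout here is a hypothetical illustration, not tokenmeter's real schema.

```python
import sqlite3

# Hypothetical schema sketch; the real ~/.tokenmeter/usage.db layout may differ.
conn = sqlite3.connect(":memory:")  # stand-in for ~/.tokenmeter/usage.db
conn.execute(
    "CREATE TABLE usage (provider TEXT, model TEXT, "
    "input_tokens INTEGER, output_tokens INTEGER, cost REAL)"
)
conn.execute(
    "INSERT INTO usage VALUES ('anthropic', 'claude-sonnet-4', 1500, 500, 0.012)"
)

# Count logged requests, same idea as the troubleshooting check later in this doc.
count = conn.execute("SELECT COUNT(*) FROM usage").fetchone()[0]
print(count)  # 1
```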
The bot handles everything automatically.
When first needed, the bot will:
# 1. Clone the repo if it doesn't exist
if [ ! -d ~/clawd/tokenmeter ]; then
  cd ~/clawd
  git clone https://github.com/jugaad-lab/tokenmeter.git
fi

# 2. Set up the Python venv if it doesn't exist
cd ~/clawd/tokenmeter
if [ ! -d ".venv" ]; then
  python3 -m venv .venv
  source .venv/bin/activate
  pip install -e .
fi
# 3. Activate and use
source .venv/bin/activate
tokenmeter import --auto
After first setup: Bot just activates venv and runs commands.
No admin action needed - the bot clones, installs, and configures automatically when you first ask it to check usage or costs.
Step 1: Bot reads this SKILL.md
Step 2: Bot checks if tokenmeter exists
if [ ! -d ~/clawd/tokenmeter ]; then
  cd ~/clawd
  git clone https://github.com/jugaad-lab/tokenmeter.git
  cd tokenmeter
  python3 -m venv .venv
  source .venv/bin/activate
  pip install -e .
fi
Step 3: Bot imports latest usage
cd ~/clawd/tokenmeter
source .venv/bin/activate
tokenmeter import --auto
This reads all OpenClaw session files and logs them to the database.
Step 4: Bot runs the appropriate command
tokenmeter costs --period week
Step 5: Bot parses the output
Model                        Cost        % of Total
───────────────────────────────────────────────────
anthropic/claude-opus-4      $741.95     65.0%
anthropic/claude-sonnet-4    $400.26     35.0%
───────────────────────────────────────────────────
Total                        $1,142.22
Step 6: Bot responds to you in plain English
"You spent $1,142 this week (API-equivalent). Opus cost $742 (65%), Sonnet cost $400 (35%). Your Max plan ($100/month = ~$25/week) saved you $1,117 this week."
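The savings figure is plain arithmetic: API-equivalent spend minus the prorated subscription price. A sketch using the numbers from the example above (the doc prorates $100/month as roughly $25/week):

```python
api_equivalent_weekly = 1142.22        # from `tokenmeter costs --period week`
max_plan_weekly = 100.00 / 4           # ~$25/week, the doc's rough proration

savings = api_equivalent_weekly - max_plan_weekly
print(f"Saved ${savings:,.0f} this week")  # Saved $1,117 this week
```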
Standard pattern:
cd ~/clawd/tokenmeter && source .venv/bin/activate && tokenmeter [command]
Common commands the bot will use:
# Import latest usage
tokenmeter import --auto
# Quick overview
tokenmeter dashboard
# Weekly breakdown
tokenmeter costs --period week
# Monthly summary
tokenmeter summary --period month
# Discover session sources (OpenClaw, Claude Code, etc.)
tokenmeter scan
# Import all discovered sessions
tokenmeter import --auto
# Preview import without writing
tokenmeter import --auto --dry-run
# Show today's usage
tokenmeter dashboard
# Weekly summary
tokenmeter summary --period week
# Cost breakdown by model
tokenmeter costs --period month
# List all supported models + pricing
tokenmeter models
# View recent history
tokenmeter history --limit 20
OpenClaw writes token usage to session JSONL files at:
~/.clawdbot/agents/*/sessions/*.jsonl
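Each JSONL line is one session event, so an importer just walks those files and sums the token counts. A minimal parsing sketch; the field names (`usage`, `input_tokens`, `output_tokens`) are assumptions about the session format, not a documented schema.

```python
import json

# Two example JSONL lines standing in for ~/.clawdbot/agents/*/sessions/*.jsonl
lines = [
    '{"usage": {"input_tokens": 1500, "output_tokens": 500}}',
    '{"usage": {"input_tokens": 800, "output_tokens": 200}}',
]

total_in = total_out = 0
for line in lines:
    record = json.loads(line)
    usage = record.get("usage", {})  # hypothetical field name
    total_in += usage.get("input_tokens", 0)
    total_out += usage.get("output_tokens", 0)

print(total_in, total_out)  # 2300 700
```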
Step 1: Discover session sources
cd ~/clawd/tokenmeter
source .venv/bin/activate
tokenmeter scan
This shows all discovered session directories:
.clawdbot/agents/main/sessions/ (OpenClaw)
.claude/projects/*/sessions/ (Claude Code)

Step 2: Import all at once
tokenmeter import --auto
This will import every discovered session directory into the local database (re-runs are idempotent, so already-imported records are skipped).
Options:
tokenmeter import --auto --dry-run # Preview without writing
tokenmeter import --path ~/.clawdbot/agents/main/sessions/ # Import specific directory
Recommended: Run tokenmeter import --auto daily via cron or manually after heavy usage.
If you need to log usage manually:
tokenmeter log \
--provider anthropic \
--model claude-sonnet-4 \
--input 1500 \
--output 500 \
--app openclaw
Options:
--provider / -p: anthropic, openai, google, azure
--model / -m: Model name (see tokenmeter models)
--input / -i: Input tokens
--output / -o: Output tokens
--app / -a: Application name (e.g., "openclaw")

| Token Type  | claude-sonnet-4 | claude-opus-4 | claude-3.5-haiku |
|-------------|-----------------|---------------|------------------|
| Input       | $3.00/1M        | $15.00/1M     | $0.80/1M         |
| Output      | $15.00/1M       | $75.00/1M     | $4.00/1M         |
| Cache Write | $3.75/1M        | $18.75/1M     | $1.00/1M         |
| Cache Read  | $0.30/1M        | $1.50/1M      | $0.08/1M         |
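Cost is just tokens times the per-million rate for each token type. A sketch using the claude-sonnet-4 column from the table above, which reproduces the monthly figures quoted later in this doc:

```python
# Per-million-token rates for claude-sonnet-4, from the pricing table above.
RATES = {"input": 3.00, "output": 15.00, "cache_write": 3.75, "cache_read": 0.30}

def cost(tokens: int, token_type: str) -> float:
    """Dollar cost for `tokens` tokens of the given type."""
    return tokens / 1_000_000 * RATES[token_type]

print(round(cost(119_500, "input"), 2))            # 0.36
print(round(cost(3_800_000, "output"), 2))         # 57.0
print(round(cost(157_200_000, "cache_write"), 2))  # 589.5
print(round(cost(1_024_300_000, "cache_read"), 2)) # 307.29
```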
What are cache tokens?
OpenClaw (and Claude) use prompt caching to store parts of your conversation in memory. This means you don't send the same context repeatedly.
Two types of cache tokens: cache writes (storing context for reuse, priced at a ~25% premium over regular input) and cache reads (reusing stored context at a 90% discount).
Real example from our usage:
This Month:
Regular Input: 119.5K tokens ($0.36)
Regular Output: 3.8M tokens ($57.00)
Cache Write: 157.2M tokens ($589.50 - paid once)
Cache Read: 1,024.3M tokens ($307.29 - 90% discount!)
Total: $954.15
Without caching, we'd send ~1.2 BILLION tokens as regular input ($3,600+).
With caching: We only pay $307 for those cache reads.
Savings: $3,293 from caching alone this month!
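The savings number above is the no-cache cost of those ~1.2B tokens at the regular input rate, minus what the cache reads actually cost. A sketch of that arithmetic (sonnet-4 rates, figures from the example above):

```python
cached_tokens = 1_200_000_000   # ~1.2B tokens that would have been regular input
input_rate = 3.00 / 1_000_000   # claude-sonnet-4 input rate, $/token
cache_read_cost = 307.29        # what the cache reads actually cost

no_cache_cost = cached_tokens * input_rate  # 3600.0
savings = no_cache_cost - cache_read_cost
print(round(savings))  # 3293
```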
╭─────────────────── tokenmeter ───────────────────╮
│ TODAY   $122.42    (396.9K tokens)               │
│ WEEK    $1,142.22  (3.4M tokens)                 │
╰──────────────────────────────────────────────────╯

Provider    Input    Output    Cache R   Cache W   Total    Cost
────────────────────────────────────────────────────────────────
Anthropic   12.2K    384.7K    116.4M    13.1M     396.9K   $122.42
Reading the columns: Input and Output are regular tokens sent and received, Cache R and Cache W are cache reads and writes, Total counts only the regular tokens (12.2K + 384.7K = 396.9K), and Cost includes all four types.
Why Cache R is so large: Every time you continue a conversation, Claude reads your entire context from cache instead of you sending it fresh. Over many turns, this adds up to billions of tokens reused.
Cost breakdown: cache reads are billed at a 90% discount, so even hundreds of millions of cached tokens contribute only a fraction of the day's cost. This is why the total stays low despite the huge cache numbers.
Your Max plan: $100/month flat rate
If tokenmeter shows $800 this month: the $100 flat rate saved you $700 versus API billing.
If tokenmeter shows $90 this month: API billing would have been cheaper that month; the plan isn't paying for itself yet.
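Whether the plan pays off is a break-even check against the flat rate. A sketch of that comparison, using the two scenarios above:

```python
MAX_PLAN_MONTHLY = 100.00  # Max plan flat rate

def plan_verdict(api_equivalent: float) -> str:
    """Compare API-equivalent monthly spend to the flat subscription price."""
    diff = api_equivalent - MAX_PLAN_MONTHLY
    if diff > 0:
        return f"Max plan saved you ${diff:,.2f} this month"
    return f"API billing would have been ${-diff:,.2f} cheaper"

print(plan_verdict(800.00))  # Max plan saved you $700.00 this month
print(plan_verdict(90.00))   # API billing would have been $10.00 cheaper
```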
cd ~/clawd/tokenmeter
source .venv/bin/activate
# Import latest usage
tokenmeter import --auto
# Quick overview
tokenmeter dashboard
tokenmeter summary --period week
tokenmeter costs --period week
Look for: spikes in daily cost, or one model unexpectedly dominating the breakdown.
At month end:
tokenmeter costs --period month
Compare to your Anthropic invoice: the month's API-equivalent total should roughly match what pay-per-use billing would have charged.
Add to your HEARTBEAT.md or run via cron:
# Run daily at 11 PM
0 23 * * * cd ~/clawd/tokenmeter && source .venv/bin/activate && tokenmeter import --auto
This keeps tokenmeter in sync without manual effort.
If you run multiple OpenClaw instances (e.g., Cheenu + Chhotu), use the --app flag to distinguish them:
tokenmeter log -p anthropic -m claude-sonnet-4 -i 1000 -o 500 --app cheenu
tokenmeter log -p anthropic -m claude-sonnet-4 -i 800 -o 400 --app chhotu
Activate the virtual environment:
cd ~/clawd/tokenmeter
source .venv/bin/activate
Check:
ls ~/.clawdbot/agents/*/sessions/*.jsonl
sqlite3 ~/.tokenmeter/usage.db "SELECT COUNT(*) FROM usage;"

Pricing is based on API rates as of Feb 2026. If Anthropic changes pricing, update tokenmeter/pricing.py or open an issue on GitHub.
$ tokenmeter dashboard
╭─────────────────── tokenmeter ───────────────────╮
│ TODAY   $4.23   (141,000 tokens)                 │
│ WEEK    $28.90  (963,000 tokens)                 │
╰──────────────────────────────────────────────────╯
Analysis: $4.23 today, trending toward ~$30/week. Well within Max plan ($100/mo).
$ tokenmeter costs --period week
Provider    Model              Input    Output   Cost    %
──────────────────────────────────────────────────────────
Anthropic   claude-sonnet-4    450K     180K     $4.05   45%
Anthropic   claude-opus-4      90K      30K      $3.60   40%
Anthropic   claude-3.5-haiku   800K     200K     $1.44   15%
──────────────────────────────────────────────────────────
Total                          1,340K   410K     $9.09   100%
Analysis: Using Opus for 40% of costs but only ~7% of token volume. Consider using Sonnet more.
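The "~7% of token volume" figure follows directly from the table: Opus handled 120K of the ~1.75M total tokens. A quick check, using the sample output's numbers:

```python
opus_tokens = 90_000 + 30_000        # Opus input + output from the table
total_tokens = 1_340_000 + 410_000   # table totals (input + output)

share = opus_tokens / total_tokens
print(f"{share:.1%}")  # 6.9%
```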
$ tokenmeter import --auto
Imported 13,713 records
Total cost: $1,246.55
$ tokenmeter costs --period month
Model                       Input    Output   Cost        % of Total
────────────────────────────────────────────────────────────────────
anthropic/claude-opus-4     47.3K    1.6M     $743.35     59.6%
anthropic/claude-sonnet-4   70.8K    2.2M     $501.75     40.3%
────────────────────────────────────────────────────────────────────
Total                       118.8K   3.8M     $1,246.55   100%
$ tokenmeter summary --period month
Provider    Input    Output   Total   Cost        Requests
──────────────────────────────────────────────────────────
Anthropic   118.8K   3.8M     3.9M    $1,246.55   12,552
Analysis:
Opus usage (59.6% of cost) shows heavy extended-thinking use. Max plan absolutely paid for itself this month!
Q: Does tokenmeter send data anywhere?
A: No. Everything is stored locally in ~/.tokenmeter/usage.db. Zero telemetry.
Q: What if I delete the database?
A: You lose history, but can rebuild by re-importing OpenClaw sessions (idempotent).
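Idempotent re-import usually means each session event carries a unique key, so duplicate rows are silently skipped. A sketch of that pattern (not tokenmeter's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical: a unique event id makes repeated imports no-ops.
conn.execute(
    "CREATE TABLE usage (event_id TEXT PRIMARY KEY, input_tokens INTEGER)"
)

def import_record(event_id: str, input_tokens: int) -> None:
    # INSERT OR IGNORE skips rows whose event_id already exists.
    conn.execute(
        "INSERT OR IGNORE INTO usage VALUES (?, ?)", (event_id, input_tokens)
    )

import_record("sess1-msg1", 1500)
import_record("sess1-msg1", 1500)  # duplicate import: ignored
count = conn.execute("SELECT COUNT(*) FROM usage").fetchone()[0]
print(count)  # 1
```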
Q: Can I use this with non-OpenClaw tools?
A: Yes! It supports Claude Code, Cursor, and manual logging for any LLM tool.
Q: Will this slow down OpenClaw?
A: No. Import runs separately and reads logs after-the-fact.
Q: What about cache tokens?
A: tokenmeter includes cache read/write tokens in its calculations (OpenClaw tracks them).
tokenmeter scan - auto-discover session sources
tokenmeter import --auto - import all discovered sessions

Built to answer the question: "How much is my Max plan really saving me?"
Generated Mar 1, 2026
A consulting firm uses tokenmeter to track client-specific AI usage across multiple projects, ensuring accurate billing and demonstrating the value of their Max plan. By analyzing weekly cost breakdowns, they optimize model selection and provide transparent reports to clients.
A development team integrates tokenmeter to monitor AI-assisted coding costs, identifying overuse of expensive models like Claude Opus. This helps enforce budget controls and justify Max plan subscriptions by comparing API-equivalent savings.
Researchers use tokenmeter to track AI token usage across experiments, managing grant budgets and ensuring compliance with funding limits. The tool's local data storage maintains privacy while providing detailed cost breakdowns by model.
An agency employs tokenmeter to analyze AI-generated content costs, optimizing campaigns by switching to cheaper models where possible. Weekly dashboards help forecast expenses and prove Max plan ROI to stakeholders.
A startup uses tokenmeter to monitor AI prototyping costs, catching overages early and adjusting usage patterns. The import feature automates tracking from OpenClaw sessions, supporting agile budgeting and resource allocation.
Offer tokenmeter as a premium feature within a larger AI management platform, charging monthly fees for advanced analytics and multi-user dashboards. Revenue comes from tiered plans based on usage volume and integration depth.
Sell tokenmeter as a standalone tool to large organizations, providing custom integrations, support, and enhanced security features. Revenue is generated through one-time licenses or annual contracts with maintenance fees.
Provide a free version of tokenmeter for basic tracking, with paid upgrades for features like automated reporting, historical data analysis, and team collaboration tools. Revenue streams include in-app purchases and enterprise upgrades.
Integration Tip
Automate daily imports via cron jobs to keep cost data current, and use the scan feature to discover all session sources for comprehensive tracking.