oktk - LLM Token Optimizer. Reduce AI API costs by 60-90%. Compresses CLI outputs (git, docker, kubectl) before sending to GPT-4/Claude. AI auto-learning included. By Buba Draugelis 🇱🇹
Install via ClawdBot CLI:

clawdbot install satnamra/oktk

When you run commands through an AI assistant, the full output goes into the LLM context:
$ git status
# Returns 60+ lines, ~800 tokens
# Your AI reads ALL of it, you pay for ALL of it
Every token costs money. Verbose outputs waste your context window.
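To put a rough number on that waste, here is a back-of-the-envelope estimator. The ~4 characters-per-token ratio and the $0.01/1K price are illustrative assumptions, not oktk's actual accounting:

```python
# Rough token-cost estimate for a command's output. The ~4 chars/token
# ratio and the $0.01/1K price are illustrative assumptions, not
# oktk's actual accounting.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Approximate token count with a characters-per-token heuristic."""
    return max(1, round(len(text) / chars_per_token))

def estimate_cost(text: str, usd_per_1k_tokens: float = 0.01) -> float:
    """Approximate USD cost of sending `text` to the model."""
    return estimate_tokens(text) / 1000 * usd_per_1k_tokens

verbose = "modified: src/app.js\n" * 60   # ~60 lines of status noise
print(estimate_tokens(verbose), "tokens")
```

Sixty lines of routine status output already lands in the hundreds of tokens, on every single command.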
oktk sits between your commands and the LLM, compressing outputs intelligently:
┌───────────┐      ┌───────────┐      ┌───────────┐
│  Command  │ ───▶ │   oktk    │ ───▶ │    LLM    │
│ (800 tk)  │      │ compress  │      │  (80 tk)  │
└───────────┘      └───────────┘      └───────────┘
                                            │
                                        90% SAVED
oktk runs automatically when you execute supported commands through OpenClaw:
| Command | What oktk does | Savings |
|---------|----------------|:-------:|
| git status | Shows only: branch, ahead/behind, file counts | 90% |
| git log | One line per commit: hash + message + author | 85% |
| git diff | Summary: X files, +Y/-Z lines, file list | 80% |
| npm test | Just: ✅ passed or ❌ failed + count | 98% |
| ls -la | Groups by type, shows sizes, skips details | 83% |
| curl | Status code + key headers + truncated body | 97% |
| grep | Match count + first N matches | 80% |
| docker ps | Container list: name, image, status | 85% |
| docker logs | Last N lines + error count | 90% |
| kubectl get pods | Pod status summary with counts | 85% |
| kubectl logs | Last N lines + error/warning counts | 90% |
| Any command | AI learns patterns automatically (optional) | ~70% |
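For the log-style rows above, the "last N lines + error count" strategy might look like this sketch (a hypothetical re-implementation for illustration, not oktk's code):

```python
def compress_logs(raw: str, keep: int = 10) -> str:
    """Keep the tail of a log plus error/warning counts.
    Mirrors the 'docker logs' / 'kubectl logs' rows above (sketch only)."""
    lines = raw.splitlines()
    errors = sum(1 for l in lines if "error" in l.lower())
    warnings = sum(1 for l in lines if "warn" in l.lower())
    tail = "\n".join(lines[-keep:])
    header = f"[{len(lines)} lines total, {errors} errors, {warnings} warnings]"
    return header + "\n" + tail
```

The model still sees the most recent lines, where the actionable information usually is, plus counts that tell it whether anything went wrong earlier.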
Before filtering, a typical `git status` looks like this:

On branch main
Your branch is ahead of 'origin/main' by 3 commits.
(use "git push" to publish your local commits)
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: src/components/Button.jsx
modified: src/components/Header.jsx
modified: src/utils/format.js
modified: src/utils/validate.js
modified: package.json
modified: package-lock.json
Untracked files:
(use "git add <file>..." to include in what will be committed)
src/components/Footer.jsx
src/components/Sidebar.jsx
tests/Button.test.js
no changes added to commit (use "git add" and/or "git commit -a")
With oktk, the same status compresses to four lines:

🌿 main
⬆️ Ahead 3 commits
✏️ Modified: 6
❓ Untracked: 3
Same information. 90% fewer tokens. 90% lower cost.
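A compression step like this can be sketched in a few lines. The following is a hypothetical re-implementation for illustration only; oktk's real filter handles far more of git's output:

```python
import re

def compress_git_status(raw: str) -> str:
    """Reduce verbose `git status` output to a short summary.
    Illustrative sketch only, not oktk's actual filter."""
    branch = ahead = None
    modified = untracked = 0
    in_untracked = False
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("On branch "):
            branch = line[len("On branch "):]
        m = re.search(r"ahead of .* by (\d+) commit", line)
        if m:
            ahead = int(m.group(1))
        if line.startswith("Untracked files:"):
            in_untracked = True
        elif line.startswith("modified:"):
            modified += 1
        elif in_untracked and line and not line.startswith("("):
            if "no changes added" in line:
                in_untracked = False   # end of the untracked section
            else:
                untracked += 1
    parts = [f"branch: {branch}"]
    if ahead:
        parts.append(f"ahead: {ahead}")
    parts += [f"modified: {modified}", f"untracked: {untracked}"]
    return "\n".join(parts)
```

Everything the model needs to reason about the repository state survives; the boilerplate hints ("use git add..." etc.) are dropped.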
oktk never breaks your workflow:
Try specialized filter
  ↓ fails?
Try basic filter
  ↓ fails?
Return raw output (same as without oktk)
Worst case: You get normal output
Best case: 90% token savings
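The fallback chain above is essentially a try-each-filter-in-order loop. A minimal sketch, with hypothetical filter functions (not oktk's internals):

```python
from typing import Callable, Optional

Filter = Callable[[str], Optional[str]]

def apply_with_fallback(raw: str, filters: list[Filter]) -> str:
    """Try each filter in order; an exception or None result falls
    through to the next one. If everything fails, return the raw
    output unchanged -- the worst case is simply no compression."""
    for f in filters:
        try:
            result = f(raw)
            if result is not None:
                return result
        except Exception:
            continue  # a broken filter must never break the command
    return raw

# Hypothetical filters for demonstration only:
def specialized(raw: str) -> Optional[str]:
    raise ValueError("unrecognized format")   # simulate a failure

def basic(raw: str) -> Optional[str]:
    lines = raw.splitlines()
    return f"{len(lines)} lines (first: {lines[0]})" if lines else None

print(apply_with_fallback("a\nb\nc", [specialized, basic]))
# the specialized filter raises, so this falls through to the basic one
```

The key design choice is that failure is swallowed, not propagated: filtering is an optimization, never a dependency.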
After installation, oktk is available globally:
# Pipe any command through oktk
git status | oktk git status
docker ps | oktk docker ps
kubectl get pods | oktk kubectl get pods
# See your total savings
oktk --stats
# Bypass filter (get raw)
oktk --raw git status
Source the aliases file for automatic filtering:
# Add to ~/.zshrc or ~/.bashrc
source ~/.openclaw/workspace/skills/oktk/scripts/oktk-aliases.sh
Then use short aliases:
gst # git status (filtered)
glog # git log (filtered)
dps # docker ps (filtered)
kpods # kubectl get pods (filtered)
# Universal wrapper - filter ANY command
ok git status
ok docker ps -a
ok kubectl describe pod my-pod
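The aliases file sourced above likely boils down to wrappers of this shape. This is a hypothetical sketch; check the shipped `oktk-aliases.sh` for the real definitions:

```shell
# Hypothetical sketch of oktk-aliases.sh -- the real file may differ.

# Short aliases: run the command, pipe it through the matching filter.
alias gst='git status | oktk git status'
alias glog='git log --oneline -20 | oktk git log'
alias dps='docker ps | oktk docker ps'
alias kpods='kubectl get pods | oktk kubectl get pods'

# Universal wrapper: filter ANY command's output.
# "$@" preserves the arguments; the same words are passed to oktk so
# it can pick the right filter for that command.
ok() {
    "$@" | oktk "$@"
}
```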
When using OpenClaw's exec tool, pipe outputs through oktk:
# In your prompts, ask OpenClaw to:
git status | oktk git status
docker logs container | oktk docker logs
# Or use the 'ok' wrapper (if aliases sourced):
ok git diff HEAD~5
Note: OpenClaw doesn't have a built-in exec output transformer yet.
The recommended approach is to use the `ok` wrapper for any command, or pipe output through `oktk` manually.

After a week of normal usage, `oktk --stats` might show:
📊 Token Savings
────────────────
Commands filtered: 1,247
Tokens saved: 456,789 (78%)
💰 At $0.01/1K tokens = $4.57 saved
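The report's arithmetic can be reproduced directly. The before/after totals below are chosen to match the example figures, and $0.01/1K is just the example rate, not any particular model's price:

```python
def savings_report(filtered: int, tokens_before: int, tokens_after: int,
                   usd_per_1k: float = 0.01) -> str:
    """Render a summary like `oktk --stats` (format is illustrative)."""
    saved = tokens_before - tokens_after
    pct = round(100 * saved / tokens_before)
    usd = saved / 1000 * usd_per_1k
    return (f"Commands filtered: {filtered:,}\n"
            f"Tokens saved: {saved:,} ({pct}%)\n"
            f"At ${usd_per_1k}/1K tokens = ${usd:.2f} saved")

# Inputs chosen to reproduce the example report above.
print(savings_report(1247, 585_627, 128_838))
```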
Already included in OpenClaw workspace, or:
clawhub install oktk
Made with ❤️ in Lithuania 🇱🇹