# ops-framework

A 0-token jobs + monitoring framework for OpenClaw: run long-running read tasks via scripts, checkpoint/resume safely, and send periodic progress reports plus immediate alerts to Telegram. Write jobs are blocked by default and must be explicitly approved and verified.
Install via ClawdBot CLI:
```bash
clawdbot install Zjianru/ops-framework
```

Goal: turn "long-task execution / checkpoint-resume / progress reporting / anomaly alerting" into a reusable, 0-token capability.

This skill consists of two parts:

- `ops-monitor.py`: a purely local script that runs `status`, detects stalled jobs, and sends short Telegram reports
- `ops-jobs.json`: a declarative job config (kind / risk / commands / policy)

It is best treated as a sidecar: run long tasks via scripts rather than having the model continuously watch progress and burn tokens.
Anomalies are escalated as ACTION REQUIRED / ALERT messages when a job stalls or fails.

1) Copy files to your OpenClaw host (suggested layout):
- `~/.openclaw/net/tools/ops-monitor.py`
- `~/.openclaw/net/config/ops-jobs.json`
- `~/.openclaw/net/state/ops-monitor.json` (auto-created)

You can also run the script from any directory as long as `OPENCLAW_HOME` points to your OpenClaw state dir (default `~/.openclaw`).
2) Start from the example config:
`ops-jobs.example.json`

3) Validate:
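To show the general shape of the config, here is a hypothetical job entry. The field names are illustrative, inferred from the kind/risk/policy scheme described below; the authoritative schema is whatever ships in `ops-jobs.example.json`:

```json
{
  "jobs": [
    {
      "id": "nightly-log-scan",
      "kind": "long_running_read",
      "risk": "read_only",
      "commands": {
        "start": "python3 ~/jobs/log_scan.py --resume",
        "status": "python3 ~/jobs/log_scan.py --status",
        "stop": "python3 ~/jobs/log_scan.py --stop"
      },
      "policy": {
        "autoResume": true
      }
    }
  ]
}
```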
```bash
python3 ops-monitor.py validate-config --config-file ~/.openclaw/net/config/ops-jobs.json
python3 ops-monitor.py selftest
```
4) Run one monitoring tick (prints only; does not send):
```bash
python3 ops-monitor.py tick --print-only
```
5) Run periodic ticks via your OS scheduler (launchd/systemd/cron). The script is designed to be called frequently; it decides whether to report based on policy and state.
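For example, with cron, a tick every five minutes (paths assume the suggested layout above):

```crontab
*/5 * * * * python3 $HOME/.openclaw/net/tools/ops-monitor.py tick --config-file $HOME/.openclaw/net/config/ops-jobs.json
```

Because the script itself decides when to report, a short interval here does not mean a Telegram message every five minutes.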
`kind` is one of:

- `long_running_read`
- `one_shot_read`
- `one_shot_write` (never auto-executed by ops-monitor)

`risk` is one of:

- `read_only`
- `write_local`
- `write_external`

Rules (MVP):

- `long_running_read` may auto-resume only when `risk=read_only` and `policy.autoResume=true`.
- `one_shot_read` may run explicitly or via queue (read-only only).
- `one_shot_write` is always blocked from auto-run; it exists as a declarative "approval + verification chain" placeholder.

Your `commands.status` must print JSON to stdout, with at least:
- `running` (boolean)
- `completed` (boolean)

Recommended:

- `pid` (number)
- `stopReason` (string)
- `progress` (object)
- `progressKey` (string) — stable key used for stall detection
- `level` (`ok`|`warn`|`alert`)
- `message` (string)

```bash
# Validate config
python3 ops-monitor.py validate-config --config-file ~/.openclaw/net/config/ops-jobs.json

# Print current statuses (no Telegram)
python3 ops-monitor.py status --config-file ~/.openclaw/net/config/ops-jobs.json

# One monitoring tick
python3 ops-monitor.py tick --config-file ~/.openclaw/net/config/ops-jobs.json

# Explicitly start/stop a long job
python3 ops-monitor.py start <job_id> --config-file ~/.openclaw/net/config/ops-jobs.json
python3 ops-monitor.py stop <job_id> --config-file ~/.openclaw/net/config/ops-jobs.json

# Run one one_shot_read job explicitly
python3 ops-monitor.py run <job_id> --config-file ~/.openclaw/net/config/ops-jobs.json
```
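The status contract above can be satisfied by any executable that prints one JSON object to stdout. A minimal Python sketch (the job name and progress values are illustrative):

```python
import json
import os


def emit_status(done: int, total: int) -> dict:
    """Print a status object matching the ops-monitor contract."""
    status = {
        # Required fields
        "running": done < total,
        "completed": done >= total,
        # Recommended fields
        "pid": os.getpid(),
        "stopReason": "",
        "progress": {"done": done, "total": total},
        # progressKey should change whenever real progress is made,
        # so ops-monitor can flag a stall when it stops changing.
        "progressKey": f"done={done}",
        "level": "ok",
        "message": f"processed {done}/{total} items",
    }
    print(json.dumps(status))
    return status


if __name__ == "__main__":
    emit_status(150, 1000)
```

Keeping `progressKey` tied to actual work done (not wall-clock time) is what makes stall detection meaningful: a job that is alive but not advancing keeps printing the same key.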
OPS_FRAMEWORK.md (generated Mar 1, 2026)
Automates periodic health checks of servers, databases, or network devices by running read-only scripts that report statuses. It detects stalls or failures and sends alerts via Telegram, reducing manual oversight and enabling quick incident response.
Manages long-running data sync jobs between systems or backup verification tasks. The framework supports pause/resume for large transfers, monitors progress for stalls, and sends periodic updates, ensuring data integrity without constant human monitoring.
Executes security scans or vulnerability assessments as read-only jobs, with periodic reporting of findings. It can auto-resume interrupted scans and alert on critical issues, helping security teams maintain continuous oversight with minimal token usage.
Runs inventory scripts to track hardware or software assets across an organization. The framework handles long-running scans, checkpoints progress, and reports updates, streamlining asset audits and compliance reporting.
Facilitates safe write operations, such as configuration changes or deployments, by blocking auto-execution and requiring explicit approval. It chains write jobs with verification read jobs, ensuring changes are validated before completion.
Offer a managed service using this framework to provide automated monitoring and alerting for clients' infrastructure. Charge subscription fees based on the number of monitored endpoints or jobs, leveraging the zero-token design to reduce operational costs.
Provide consulting to help organizations integrate this framework into their existing systems for tasks like data sync or security scans. Revenue comes from one-time setup fees and ongoing support contracts, capitalizing on the framework's flexibility.
Release the core framework as open source under MIT license to build a community. Monetize by offering premium features like advanced analytics, custom integrations, or enterprise support, targeting larger organizations with complex needs.
💬 Integration Tip
Ensure Python 3.10+ is installed on the gateway host and configure Telegram bot tokens in openclaw.json for alerting; start with the example config to avoid errors.