proactive-tasks

Proactive goal and task management system. Use when managing goals, breaking down projects into tasks, tracking progress, or working autonomously on objectives. Enables agents to work proactively during heartbeats, message humans with updates, and make progress without waiting for prompts.
Install via ClawdBot CLI:

clawdbot install ImrKhn03/proactive-tasks

A task management system that transforms reactive assistants into proactive partners who work autonomously on shared goals.
Instead of waiting for your human to tell you what to do, this skill lets you:
When your human mentions a goal or project:
python3 scripts/task_manager.py add-goal "Build voice assistant hardware" \
--priority high \
--context "Replace Alexa with custom solution using local models"
python3 scripts/task_manager.py add-task "Build voice assistant hardware" \
"Research voice-to-text models" \
--priority high
python3 scripts/task_manager.py add-task "Build voice assistant hardware" \
"Compare Raspberry Pi vs other hardware options" \
--depends-on "Research voice-to-text models"
Check what to work on next:
python3 scripts/task_manager.py next-task
This returns the highest-priority task you can work on (no unmet dependencies, not blocked).
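The selection logic can be sketched roughly like this. This is a simplified, hypothetical model of what task_manager.py does internally; the function and field names here are illustrative assumptions, not the tool's actual API:

```python
# Hypothetical sketch: pick the highest-priority workable task, i.e. one
# whose status allows work and whose dependencies are all completed.
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

def next_task(tasks):
    by_title = {t["title"]: t for t in tasks}

    def ready(t):
        if t["status"] not in ("pending", "in_progress"):
            return False
        deps = t.get("depends_on", [])
        return all(by_title[d]["status"] == "completed" for d in deps)

    candidates = [t for t in tasks if ready(t)]
    return min(candidates, key=lambda t: PRIORITY_RANK[t["priority"]], default=None)

tasks = [
    {"title": "Research voice-to-text models", "priority": "high", "status": "completed"},
    {"title": "Compare Raspberry Pi vs other hardware options", "priority": "medium",
     "status": "pending", "depends_on": ["Research voice-to-text models"]},
    {"title": "Order parts", "priority": "high", "status": "pending",
     "depends_on": ["Compare Raspberry Pi vs other hardware options"]},
]
# "Order parts" is high priority but blocked by an incomplete dependency,
# so the hardware comparison is what comes back.
print(next_task(tasks)["title"])
```

Note that a blocked high-priority task never outranks a workable lower-priority one: dependency readiness filters first, priority only breaks ties among workable tasks.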
python3 scripts/task_manager.py complete-task <task-id> \
--notes "Researched Whisper, Coqui, vosk. Whisper.cpp looks best for Pi."
When you complete something important or get blocked:
python3 scripts/task_manager.py mark-needs-input <task-id> \
--reason "Need budget approval for hardware purchase"
Then message your human with the update/question.
Proactive Tasks v1.2.0 includes battle-tested patterns from real agent usage to prevent data loss, survive context truncation, and maintain reliability under autonomous operation.
The Problem: Agents write to memory files, then context gets truncated. Changes vanish.
The Solution: Log critical changes to memory/WAL-YYYY-MM-DD.log BEFORE modifying task data.
How it works:
- Any mark-progress, log-time, or status change creates a WAL entry first

Events logged:
- PROGRESS_CHANGE: Task progress updates (0-100%)
- TIME_LOG: Actual time spent on tasks
- STATUS_CHANGE: Task state transitions (blocked, completed, etc.)
- HEALTH_CHECK: Self-healing operations

Automatically enabled - no configuration needed. WAL files are created in the memory/ directory.
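The pattern boils down to: write the event to the log and force it to disk before touching any task data. A minimal sketch, assuming a plain append-only text log (the function names are hypothetical; the WAL-YYYY-MM-DD.log naming follows the doc):

```python
import datetime
import json
import os

def wal_append(event_type, payload, memory_dir="memory"):
    """Append one event to today's WAL file and force it to disk."""
    os.makedirs(memory_dir, exist_ok=True)
    today = datetime.date.today().isoformat()
    ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    path = os.path.join(memory_dir, f"WAL-{today}.log")
    with open(path, "a") as f:
        f.write(f"- {event_type} ({ts}): {json.dumps(payload)}\n")
        f.flush()
        os.fsync(f.fileno())  # entry is durable before any data changes

def mark_progress(task, percent, memory_dir="memory"):
    # Log first, mutate second: if context is cut mid-operation,
    # the WAL still records what was about to happen.
    wal_append("PROGRESS_CHANGE", {"task": task["id"], "progress": percent}, memory_dir)
    task["progress"] = percent
```

The ordering is the whole point: a crash between the two steps leaves a log entry you can replay, whereas the reverse ordering leaves a silent mutation nothing can reconstruct.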
The Concept: Chat history is a BUFFER, not storage. SESSION-STATE.md is your "RAM" - the ONLY place task details are reliably preserved.
Auto-updated on every task operation:
## Current Task
- **ID:** task_abc123
- **Title:** Research voice models
- **Status:** in_progress
- **Progress:** 75%
- **Time:** 45 min actual / 60 min estimate (25% faster)
## Next Action
Complete research, document findings in notes, mark complete.
Why this matters: After context compaction, you can read SESSION-STATE.md and immediately know:
The Problem: Between 60% and 100% context usage, you're in the "danger zone" - compaction could happen any time.
The Solution: Automatically append all task updates to working-buffer.md.
How it works:
# Every progress update, time log, or status change appends:
- PROGRESS_CHANGE (2026-02-12T10:30:00Z): task_abc123 → 75%
- TIME_LOG (2026-02-12T10:35:00Z): task_abc123 → +15 min
- STATUS_CHANGE (2026-02-12T10:40:00Z): task_abc123 → completed
After compaction: Read working-buffer.md to see exactly what happened during the danger zone.
Manual flush: python3 scripts/task_manager.py flush-buffer to copy buffer contents to daily memory file.
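The buffer lifecycle can be sketched as append-then-flush. This is a hypothetical simplification: the entry format mirrors the examples above, but the daily memory file name and function names are assumptions:

```python
import datetime
import os

BUFFER = "working-buffer.md"

def buffer_append(event, task_id, detail):
    """Append one line per operation, mirroring the WAL entry format."""
    ts = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    with open(BUFFER, "a") as f:
        f.write(f"- {event} ({ts}): {task_id} -> {detail}\n")

def flush_buffer(memory_dir="memory"):
    """Copy buffer contents into a daily memory file, then clear the buffer."""
    if not os.path.exists(BUFFER):
        return
    os.makedirs(memory_dir, exist_ok=True)
    daily = os.path.join(memory_dir, f"{datetime.date.today().isoformat()}.md")
    with open(BUFFER) as src, open(daily, "a") as dst:
        dst.write(src.read())
    os.remove(BUFFER)  # the buffer is staging space, not permanent storage
```

Deleting the buffer after flushing keeps its semantics honest: anything still in it is exactly what has not yet reached durable memory.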
Agents make mistakes. Task data can get corrupted over time. The health-check command detects and auto-fixes common issues:
python3 scripts/task_manager.py health-check
Detects 5 categories of issues:
- Completed tasks missing a completed_at timestamp

Auto-fixes the 4 safe categories; time anomalies are only flagged for human review.
When to run:
These four patterns work together to create a robust system:
User request → WAL log → Update data → Update SESSION-STATE → Append to buffer
     ↓            ↓            ↓               ↓                     ↓
Context cut? → Read WAL → Verify data → Check SESSION-STATE → Review buffer
Result: You never lose work, even during context truncation. The system self-heals and maintains consistency autonomously.
Trigger: Session starts with tag, or you're asked "where were we?" or "continue".
The Problem: Context was truncated. You don't remember what task you were working on.
Recovery Steps (in order):
working-buffer.md - Raw danger zone exchanges
# Check if buffer exists and has recent content
cat working-buffer.md
SESSION-STATE.md - Active task state
# Get current task context
cat SESSION-STATE.md
# See what operations happened
cat memory/WAL-$(date +%Y-%m-%d).log | tail -20
python3 scripts/task_manager.py list-tasks "Goal Title"
Do NOT ask "what were we discussing?" - The buffer and SESSION-STATE literally have the answer.
The Law: "Code exists" ≠ "feature works." Never report task completion without end-to-end verification.
Trigger: About to mark a task completed or say "done":
Examples:
❌ Wrong: "Added health-check command. Task complete!"
✅ Right: "Added health-check. Testing... detected 4 issues, auto-fixed 3. Verified on broken test data. Task complete!"
❌ Wrong: "Implemented SESSION-STATE updates. Done!"
✅ Right: "Implemented SESSION-STATE. Tested with mark-progress, log-time, mark-blocked - all update correctly. Done!"
Why this matters: Agents often report completion based on "I wrote the code" rather than "I verified it works." VBR prevents false completions and builds trust.
The Core Question: Don't ask "what should I do?" Ask "what would genuinely help my human that they haven't thought to ask for?"
During heartbeats, you have the opportunity to make real progress:
The transformation: From waiting for prompts → making steady autonomous progress on shared goals.
DO message your human when:
DON'T spam with:
The goal: Be a proactive partner who makes things happen, not a chatty assistant who needs constant validation.
| State | Meaning |
|-------|---------|
| pending | Ready to work on (all dependencies met) |
| in_progress | Currently working on it |
| blocked | Can't proceed (dependencies not met) |
| needs_input | Waiting for human input/decision |
| completed | Done! |
| cancelled | No longer relevant |
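The table above can be enforced with a transition guard so a task can't, say, jump straight from pending to completed without any work happening. The specific allowed transitions below are an illustrative assumption, not the tool's actual rules:

```python
# Hypothetical state machine for the task states in the table above.
# Which transitions are legal is an assumption for illustration.
ALLOWED = {
    "pending":     {"in_progress", "blocked", "cancelled"},
    "in_progress": {"completed", "blocked", "needs_input", "cancelled"},
    "blocked":     {"pending", "in_progress", "cancelled"},
    "needs_input": {"in_progress", "cancelled"},
    "completed":   set(),  # terminal
    "cancelled":   set(),  # terminal
}

def transition(task, new_state):
    if new_state not in ALLOWED[task["status"]]:
        raise ValueError(f"illegal transition {task['status']} -> {new_state}")
    task["status"] = new_state
```

A guard like this pairs well with the health-check pattern: states that can never be reached legally are exactly the corruptions worth detecting.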
Proactive Tasks supports two distinct operational modes:
| Mode | Context | Trigger | Best For | Risk |
|------|---------|---------|----------|------|
| Interactive (systemEvent) | Full main session context | User request, manual prompts | Decision-making, human-facing work | Queues behind the active conversation |
| Autonomous (isolated agentTurn) | No main session context | Heartbeat cron, scheduled background | Velocity reports, cleanup, recurring tasks | May lose context |
Don't use systemEvent for background work. When a cron job fires during your main session, the prompt gets queued and work doesn't happen. Instead:
This ensures background tasks never interrupt your main conversation.
See HEARTBEAT-CONFIG.md for complete autonomous operation patterns, including:
To enable autonomous proactive work, set up a heartbeat system: a recurring prompt that tells you to periodically check for tasks and work on them.
Quick setup: See HEARTBEAT-CONFIG.md for complete setup instructions and patterns.
TL;DR:
- Create a HEARTBEAT.md in your workspace

Your cron job should send this message every 30 minutes:
Heartbeat check: Read HEARTBEAT.md if it exists (workspace context). Follow it strictly. Do not infer or repeat old tasks from prior chats. If nothing needs attention, reply HEARTBEAT_OK.
Add this to your workspace HEARTBEAT.md:
## Proactive Tasks (Every heartbeat)
Check if there's work to do on our goals:
- [ ] Run `python3 skills/proactive-tasks/scripts/task_manager.py next-task`
- [ ] If a task is returned, work on it for up to 10-15 minutes
- [ ] Update task status when done, blocked, or needs input
- [ ] Message your human with meaningful updates (completions, blockers, discoveries)
- [ ] Don't spam - only message for significant milestones or when stuck
**Goal:** Make autonomous progress on our shared objectives without waiting for prompts.
Every 30 minutes:
├─ Heartbeat fires
├─ You read HEARTBEAT.md
├─ Check for next task
├─ If task found → work on it, update status, message human if needed
└─ If nothing → reply "HEARTBEAT_OK" (silent)
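The loop above can be sketched as a single function; heartbeat and work_on are hypothetical names, and in practice the task title would come from the next-task CLI call rather than a parameter:

```python
def heartbeat(next_task_title, work_on):
    """One heartbeat tick: work if there is a task, stay silent otherwise."""
    if next_task_title is None:
        return "HEARTBEAT_OK"           # silent: nothing needs attention
    work_on(next_task_title)            # bounded 10-15 minute work session
    return f"Update: worked on {next_task_title!r}"

# A quiet tick produces the silent acknowledgement, not a message.
print(heartbeat(None, lambda t: None))  # HEARTBEAT_OK
```

Keeping the "nothing to do" branch silent is what makes the pattern spam-free: the human only hears about ticks where real work happened.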
The transformation: You go from reactive (waiting for prompts) to proactive (making steady autonomous progress).
Break goals into tasks that are:
✅ Do message when:
❌ Don't spam with:
If a task turns out to be bigger than expected:
- Break it into smaller tasks and keep the original in_progress

All data is stored in data/tasks.json:
{
"goals": [
{
"id": "goal_001",
"title": "Build voice assistant hardware",
"priority": "high",
"context": "Replace Alexa with custom solution",
"created_at": "2026-02-05T05:25:00Z",
"status": "active"
}
],
"tasks": [
{
"id": "task_001",
"goal_id": "goal_001",
"title": "Research voice-to-text models",
"priority": "high",
"status": "completed",
"created_at": "2026-02-05T05:26:00Z",
"completed_at": "2026-02-05T06:15:00Z",
"notes": "Researched Whisper, Coqui, vosk. Whisper.cpp best for Pi."
}
]
}
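Because the store is plain JSON, summarizing it takes only a few lines. goal_summary is a hypothetical helper, not part of the CLI, and the sample data is a pared-down copy of the schema above:

```python
import json

SAMPLE = """{"goals": [{"id": "goal_001", "title": "Build voice assistant hardware"}],
 "tasks": [{"id": "task_001", "goal_id": "goal_001", "status": "completed"},
           {"id": "task_002", "goal_id": "goal_001", "status": "pending"}]}"""

def goal_summary(data):
    """Per-goal completion counts from the tasks.json structure."""
    summary = {}
    for goal in data["goals"]:
        ts = [t for t in data["tasks"] if t["goal_id"] == goal["id"]]
        done = sum(1 for t in ts if t["status"] == "completed")
        summary[goal["title"]] = f"{done}/{len(ts)} tasks completed"
    return summary

data = json.loads(SAMPLE)   # in practice: json.load(open("data/tasks.json"))
print(goal_summary(data))   # {'Build voice assistant hardware': '1/2 tasks completed'}
```

This is also a handy sanity check after recovery: if the summary disagrees with SESSION-STATE.md, run health-check before doing more work.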
See CLI_REFERENCE.md for complete command documentation.
Before proposing new features, evaluate them using our VFM/ADL scoring frameworks to ensure stability and value:
Score across four dimensions:
Threshold: Must score ≥ 60 points to proceed.
Priority ordering: Stability > Explainability > Reusability > Scalability > Novelty
Forbidden Evolution:
The Golden Rule: "Does this let future-me solve more problems with less cost?" If no, skip it.
Day 1:
Human: "Let's build a custom voice assistant to replace Alexa"
Agent: *Creates goal, breaks into initial research tasks*
During heartbeat:
$ python3 scripts/task_manager.py next-task
→ task_001: Research voice-to-text models (priority: high)
# Agent works on it, completes research
$ python3 scripts/task_manager.py complete-task task_001 --notes "..."
Agent messages human:
"Hey! I finished researching voice models. Whisper.cpp looks perfect for Raspberry Pi - runs locally, good accuracy, low latency. Want me to compare hardware options next?"
Day 2:
Human: "Yeah, compare Pi 5 vs alternatives"
Agent: *Adds task, works on it during next heartbeat*
This cycle continues - the agent makes steady autonomous progress while keeping the human in the loop for decisions and updates.
Built by Toki for proactive AI partnership
Generated Mar 1, 2026
An AI agent autonomously manages literature review tasks for a PhD student. It breaks down the research project into subtasks like finding relevant papers, summarizing key findings, and tracking citation counts, working during heartbeats and sending weekly progress updates.
A marketing team uses the skill to manage blog post creation from ideation to publication. The agent tracks goals like 'Q3 content calendar', breaks them into tasks (outline, draft, edit, publish), and proactively messages team members when approvals are needed or deadlines approach.
In an agile development environment, the agent manages sprint backlogs by converting user stories into technical tasks with dependencies. It autonomously tracks progress during development cycles, logs time spent on features, and flags blockers to the engineering lead via proactive messages.
For organizing a corporate conference, the agent breaks down the event into logistical tasks like venue booking, speaker coordination, and marketing. It works autonomously on research tasks during heartbeats, uses WAL logging to prevent data loss, and messages planners with vendor updates or budget questions.
An individual uses the skill to manage personal goals like learning a new language or fitness tracking. The agent creates tasks with priorities and dependencies, works on flashcards or schedule adjustments during heartbeats, and sends motivational updates or reminders to the user.
💬 Integration Tip
Integrate with existing calendar or communication tools (e.g., Slack, email) to automate update messages and sync deadlines, ensuring the agent's proactive updates align with user workflows without manual intervention.