# funky-fund-flamingo

Repair-first self-evolution for OpenClaw — audit logs, memory, and skills; run measurable mutation cycles. Get paid. Evolve. Repeat. Dolla dolla bill y'all.
Install via ClawdBot CLI:
clawdbot install IceMasterT/funky-fund-flamingo

Use this skill when you're ready to get paid. We inspect reality, kill breakage and value leaks, and run mutation cycles that produce concrete gains — so the stack earns, not just runs.
Relay mode (--loop / --funky-fund-flamingo) runs evolution in the background so the revenue keeps flowing.

Files involved: ~/.openclaw/agents//sessions/*.jsonl, MEMORY.md, memory/YYYY-MM-DD.md, USER.md, skills/../../.env, index.js, evolve.js.

Run from workspace root:
node skills/funky-fund-flamingo/index.js run
Run from inside this skill directory:
node index.js run
# single cycle — one shot, max impact
node index.js run
# alias command
node index.js /evolve
# human confirmation before significant edits (protect the bag)
node index.js run --review
# prompt generation only (writes prompt artifact to memory dir)
node index.js run --dry-run
# continuous relay — keep the money printer running
node index.js --loop
node index.js run --funky-fund-flamingo
Each cycle should:
- write cycle state (memory/evolution_state.json) and optionally schedule the next loop.
- update persistent memory (memory/funky_fund_flamingo_persistent_memory.json) so strategy compounds and the bag gets bigger.

| URL | Data sent | Purpose |
|-----|-----------|---------|
| None (from this skill's code) | — | This skill's Node.js code does not open sockets or make HTTP requests. It only reads/writes local files. |
Important: The repo includes agent config templates (agents/openai.yaml, agents/openrouter.yaml) for use by an OpenClaw (or other) agent. When you run an agent that uses this skill with a cloud model (OpenAI, OpenRouter, etc.), that agent will send the prompts this skill builds — which can include excerpts from session logs, memory, and workspace context — to the provider's API. So "local-only" applies to the skill binary itself; if the skill is invoked by an agent backed by a third-party LLM, data can leave the machine via that agent. To stay fully local, run node index.js run (or --dry-run) without routing the generated prompt through a cloud model.
Reads: ~/.openclaw/agents//sessions/*.jsonl, workspace MEMORY.md, memory/, USER.md, and the skills/ directory.

Writes: memory/evolution_state.json, memory/funky_fund_flamingo_persistent_memory.json, and optionally prompt artifacts in the memory dir. This skill does not push or publish anywhere; any outbound data is only via whatever agent/model stack you choose to run.

No env vars are required. The following are optional overrides (see evolve.js / README):
| Variable | Purpose | Typical default |
|----------|---------|-----------------|
| AGENT_NAME | Agent session folder under ~/.openclaw/agents/ | main |
| MEMORY_DIR | Directory for evolution state and persistent memory | workspace memory/ |
| TARGET_SESSION_BYTES | Max bytes read from latest session log | 64000 |
| LOOP_MIN_INTERVAL_SECONDS | Min delay between loop cycles | 900 |
| MAX_MEMORY_CHARS, MAX_TODAY_LOG_CHARS, MAX_PERSISTENT_MEMORY_CHARS | Content caps for prompts | see evolve.js |
| ECONOMIC_KEYWORDS | Comma-separated keywords for value scoring | built-in list |
| EVOLVE_REPORT_DIRECTIVE, EVOLVE_EXTRA_MODES, EVOLVE_ENABLE_SESSION_ARCHIVE | Behavior tweaks | — |
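A minimal sketch of how optional overrides like these are typically applied in Node. The defaults mirror the table above; the helper name envNum is hypothetical (the real logic lives in evolve.js and may differ).

```javascript
// Hypothetical helper: use the env var when it parses as a number,
// otherwise fall back to the documented default.
function envNum(env, name, fallback) {
  const raw = env[name];
  const n = Number(raw);
  return raw !== undefined && raw !== "" && Number.isFinite(n) ? n : fallback;
}

const cfg = {
  agentName: process.env.AGENT_NAME || "main",
  targetSessionBytes: envNum(process.env, "TARGET_SESSION_BYTES", 64000),
  loopMinIntervalSeconds: envNum(process.env, "LOOP_MIN_INTERVAL_SECONDS", 900),
  // Comma-separated keywords for value scoring; empty means "use built-ins".
  economicKeywords: (process.env.ECONOMIC_KEYWORDS || "")
    .split(",").map((s) => s.trim()).filter(Boolean),
};
console.log(cfg);
```

Because every override has a default, the table's "no env vars required" claim holds: an empty environment still yields a complete, runnable configuration.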
Evolution can be run manually (node index.js run) or by an agent that uses this skill. In relay mode (--loop / --funky-fund-flamingo), this process only plans and writes prompts; it does not call any model API. If you run an agent that consumes this skill with OpenAI/OpenRouter/etc., that agent will perform the model calls. To avoid sending local context to a provider, run the skill in --dry-run and do not feed the generated prompt to a cloud model.
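The relay-mode behavior above (plan a cycle, write its prompt, wait out the minimum interval, repeat) can be sketched as follows. relayLoop and runCycle are hypothetical names for illustration; the real loop honors LOOP_MIN_INTERVAL_SECONDS and lives in the skill's own code.

```javascript
// Illustrative relay loop: each iteration plans a cycle and writes its
// prompt artifact (runCycle), then waits until at least minIntervalMs
// has elapsed before starting the next. No model API is called here,
// matching the behavior described above.
async function relayLoop(runCycle, cycles, minIntervalMs) {
  for (let i = 0; i < cycles; i++) {
    const started = Date.now();
    await runCycle(i); // plan + write prompt; no network traffic
    if (i < cycles - 1) {
      const wait = Math.max(0, minIntervalMs - (Date.now() - started));
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
  }
}

const planned = [];
relayLoop((i) => planned.push(i), 3, 10).then(() => console.log(planned));
```

Any model call happens outside this loop, in whatever agent consumes the prompts — which is why --dry-run plus a local review keeps everything on-machine.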
The master directive (funky-fund-flamingo-master-directive.json) sets must_evolve_each_cycle and no_op_forbidden, which push every cycle toward making a concrete change. That can increase how often local files are mutated. For lower risk, prefer --review (confirm before significant edits) or --dry-run (prompt generation only, no writes). You can also edit or override the directive to relax these flags.
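For reference, the two flags mentioned above would sit in the directive file roughly like this. The flag names come from this README; any other structure in the real funky-fund-flamingo-master-directive.json is unknown and not shown.

```json
{
  "must_evolve_each_cycle": true,
  "no_op_forbidden": true
}
```

Setting either flag to false (or removing it) is one way to relax the forced-mutation behavior, alongside --review and --dry-run.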
By using this skill, you run Node.js code that reads and writes files in your OpenClaw workspace and agent session directories. This skill's code does not send data to third parties; if an agent that uses this skill calls a cloud LLM, that agent (not this skill binary) sends the prompt. Only install if you trust the skill source (e.g. ClawHub and the publisher).
- ADL.md — anti-degeneration so we don't break the money printer
- VFM.md — value-focused mutation: only changes that pay
- TREE.md — capability topology and revenue-ready nodes
- .clawhub/FMEP.md (forced mutation execution policy)

node index.js --help
Dolla, dolla bill y'all. 🦩
Generated Mar 1, 2026
An e-commerce company uses an AI chatbot for customer support, but it frequently breaks or repeats responses, causing lost sales. This skill analyzes session logs to identify and repair these issues, ensuring the chatbot remains functional and continues generating revenue by handling inquiries effectively.
A content agency relies on AI agents to generate social media posts and articles, but the output quality stagnates over time. By running mutation cycles, this skill optimizes the agent's skills to produce more engaging and profitable content, aligning with client goals and increasing service value.
A fintech startup uses an AI agent to provide basic financial advice, but it struggles with accuracy and personalization. This skill inspects memory and user context to evolve the agent's capabilities, improving its recommendations and helping the startup monetize through premium advisory services.
A healthcare provider employs an AI agent for patient triage and information retrieval, but downtime leads to service interruptions. This skill prioritizes repair-first evolution to maintain reliability, ensuring continuous operation and supporting revenue from telehealth consultations.
An online learning platform uses AI tutors that become repetitive, reducing student engagement. This skill runs cycles to expand and personalize the agent's teaching methods, enhancing learning outcomes and driving subscription renewals and upsells.
Offer ongoing evolution cycles as a service, where clients pay a monthly fee for continuous optimization and repair of their AI agents. This ensures agents remain effective and revenue-generating, with tiers based on usage frequency and complexity.
Charge based on the concrete gains achieved, such as increased sales or reduced downtime, measured through the skill's reporting. This aligns incentives with client success, taking a percentage of revenue improvements or fixed fees per successful mutation cycle.
License the skill to other AI platforms or developers, allowing them to integrate self-evolution capabilities into their products. Revenue comes from licensing fees or revenue-sharing agreements, leveraging the skill's local-only and repair-first features.
💬 Integration Tip
Start by running the skill in --dry-run mode to generate prompts locally, then integrate with a cloud model only after reviewing outputs to control data privacy and costs.