Generate detailed images from text prompts using Pollinations.ai models with optional configuration, model selection, and advanced settings.
251 AI agent skills for LLMs & Model APIs. Part of the 🤖 AI & Agents category.
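The Pollinations.ai image entry above boils down to building a GET URL from a prompt plus optional settings. A minimal sketch, assuming the public `https://image.pollinations.ai/prompt/<prompt>` endpoint; the query parameters shown (`model`, `width`, `height`, `seed`) are illustrative and should be checked against the current API docs:

```python
from urllib.parse import quote, urlencode

# Assumed base endpoint; verify against current Pollinations.ai documentation.
BASE = "https://image.pollinations.ai/prompt/"

def build_image_url(prompt, model="flux", width=1024, height=768, seed=None):
    """Build an image-generation URL; parameter names are assumptions."""
    params = {"model": model, "width": width, "height": height}
    if seed is not None:
        params["seed"] = seed  # fixed seed for reproducible output
    return BASE + quote(prompt) + "?" + urlencode(params)

url = build_image_url("a lighthouse at dawn, oil painting", seed=42)
print(url)
```

Fetching the resulting URL with any HTTP client would then return the generated image bytes.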
Intelligent LLM proxy that routes requests to appropriate models based on complexity. Save money by using cheaper models for simple tasks. Tested with Anthropic, OpenAI, Gemini, Kimi/Moonshot, and Ollama.
You are DeepSeek-R1-Agent, an effective content creator.
Route model requests based on configured models, costs and task complexity. Use for routing general/low-complexity requests to the cheapest available model and for higher-complexity requests to stronger models.
Manages free AI models from OpenRouter for OpenClaw. Automatically ranks models by quality, configures fallbacks for rate-limit handling, and updates openclaw.json. Use when the user mentions free AI, OpenRouter, model switching, rate limits, or wants to reduce AI costs.
Manage local Ollama models autonomously with health monitoring, automatic fallback, self-healing, and offline operation without internet dependency.
Switch AI models without switching tabs using the HokiPoki CLI. Hop between Claude, Codex, and Gemini when one gets stuck. Use when the user wants to request help from a different AI model, hop to another AI, get a second opinion from another model, switch models, share AI subscriptions with teammates, or manage HokiPoki provider/listener mode. Triggers on: 'use codex/gemini for this', 'hop to another model', 'ask another AI', 'get a second opinion', 'switch models', 'hokipoki', 'listen for requests'.
Monitor Minimax Coding Plan usage to stay within API limits. Fetches current usage stats and provides status alerts.
Configure, run, and troubleshoot the OpenRouter hardware-aware classifier router (wizard setup, local model, routing, and dashboard).
Smart LLM router - save 67% on inference costs. Routes every request to the cheapest capable model across 41 models from OpenAI, Anthropic, Google, DeepSeek,...
Master prompt engineering for AI models: LLMs, image generators, video models. Techniques: chain-of-thought, few-shot, system prompts, negative prompts. Mode...
Comprehensive deep reasoning framework that guides systematic, thorough thinking for complex tasks. Automatically applies for multi-step problems, ambiguous...
Pollinations.ai API for AI generation and analysis - text, images, videos, audio, vision, and transcription. Use when user requests AI-powered content (text...
openrouter-transcribe: Transcribe audio files via OpenRouter using audio-capable models (Gemini, GPT-4o-audio, etc.).
Local-first, event-driven RAG for commercial real estate audit & investigation case folders. Index a case directory named like "项目问题编号__标题" (project issue number__title), with stage subfolders such as 01_policy_basis/02_process/04_settlement_payment, and query it with citations (file:// links + PDF
openclaw-aisa-llm-gateway: Unified LLM Gateway - One API for 70+ AI models. Route to GPT, Claude, Gemini, Qwen, Deepseek, Grok and more with a single API key.
Routes LLM requests to the cheapest capable model across 8 providers (Anthropic, Google, OpenAI, DeepSeek, xAI, Moonshot, Mistral, Ollama) and 25+ models. Scores prompts on 8 dimensions in under 1ms, supports three routing modes (eco, standard, gigachad), and logs all decisions for cost tracking.
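The entry above describes scoring a prompt on several dimensions and routing to the cheapest model whose capability clears that score. A toy sketch of that idea, assuming a made-up price table, lexical scoring signals, and mode multipliers (the real router's 8 dimensions and model catalog will differ):

```python
# Hypothetical catalog: (name, $ per 1M input tokens, capability 0-10).
MODELS = [
    ("ollama/llama3.2", 0.0, 3),
    ("deepseek-chat", 0.14, 6),
    ("claude-sonnet", 3.0, 9),
]

def score(prompt: str) -> float:
    """Toy 0-10 complexity score from cheap lexical signals."""
    signals = [
        len(prompt) > 400,                                        # long prompt
        "step by step" in prompt.lower(),                         # multi-step reasoning
        any(k in prompt.lower() for k in ("prove", "refactor", "debug")),
        prompt.count("?") > 1,                                    # multiple questions
    ]
    return 10 * sum(signals) / len(signals)

def route(prompt: str, mode: str = "standard") -> str:
    """Pick the cheapest model whose capability meets the scaled score."""
    need = score(prompt) * {"eco": 0.7, "standard": 1.0, "gigachad": 1.3}[mode]
    capable = [m for m in MODELS if m[2] >= need]
    if not capable:                      # nothing clears the bar: use the strongest
        capable = [max(MODELS, key=lambda m: m[2])]
    return min(capable, key=lambda m: m[1])[0]

print(route("What is 2+2?"))             # trivial prompt -> free local model
```

The "gigachad" mode simply inflates the required capability, biasing routing toward stronger (pricier) models for the same prompt.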
Control a Reachy Mini robot (by Pollen Robotics / Hugging Face) via its REST API and SSH. Use for any request involving the Reachy Mini robot: moving the head, body, or antennas; playing emotions or dances; capturing camera snapshots; adjusting volume; managing apps; checking robot status; or any physical robot interaction. The robot has a 6-DoF head, 360° body rotation, two animated antennas, a wide-angle camera (with non-disruptive WebRTC snapshot), 4-mic array, and speaker.
Defensive interceptor for prompt injection and basic PII masking.
Real-time AI API usage tracking and cost monitoring for OpenClaw. Track spending across OpenAI, Claude, Gemini, Kimi, DeepSeek, and Grok with live dashboard....
Provides a decision-grade equity valuation playbook and report standard (multiples, DCF, quality assessment, scenarios, margin of safety); used when users re...
Build and run Gemini 2.5 Computer Use browser-control agents with Playwright. Use when a user wants to automate web browser tasks via the Gemini Computer Use model, needs an agent loop (screenshot → function_call → action → function_response), or asks to integrate safety confirmation for risky UI actions.
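The agent loop named in the entry above (screenshot, function_call, action, function_response, with a safety gate on risky actions) has a simple shape. A sketch with stand-ins for the model and browser objects; the real Gemini and Playwright calls are not shown here, and the `RISKY` action names are invented for illustration:

```python
# Hypothetical set of action names that require user confirmation.
RISKY = {"click_buy_button", "submit_form"}

def run_agent(model, browser, max_turns=10, confirm=input):
    """Generic screenshot -> function_call -> action -> function_response loop."""
    history = []
    for _ in range(max_turns):
        shot = browser.screenshot()               # 1. capture current page state
        call = model.next_action(shot, history)   # 2. model proposes a function_call
        if call is None:                          #    model signals task complete
            break
        if call["name"] in RISKY:                 # 3. safety confirmation gate
            if confirm(f"Allow {call['name']}? [y/N] ").lower() != "y":
                history.append({"call": call, "result": "denied by user"})
                continue
        result = browser.execute(call)            # 4. perform the UI action
        history.append({"call": call, "result": result})  # 5. function_response
    return history
```

In a real integration, `browser.screenshot`/`browser.execute` would wrap Playwright page methods and `model.next_action` would call the Gemini Computer Use API; the loop structure stays the same.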
Query the OpenAI developer documentation via the OpenAI Docs MCP server using CLI (curl/jq). Use whenever a task involves the OpenAI API (Responses, Chat Completions, Realtime, etc.), OpenAI SDKs, ChatGPT Apps SDK, Codex, MCP integrations, endpoint schemas, parameters, limits, or migrations and you need up-to-date official guidance.
Multi-model peer review layer using local LLMs via Ollama to catch errors in cloud model output. Fan out critiques to 2-3 local models, aggregate flags, synthesize consensus. Use when: validating trade analyses, reviewing agent output quality, testing local model accuracy, checking any high-stakes Claude output before publishing or acting on it. Don't use when: simple fact-checking (just search the web), tasks that don't benefit from multi-model consensus, time-critical decisions where 60s latency is unacceptable, reviewing trivial or low-stakes content.
Negative examples:
- "Check if this date is correct" → No. Just web search it.
- "Review my grocery list" → No. Not worth multi-model inference.
- "I need this answer in 5 seconds" → No. Peer review adds 30-60s latency.
Edge cases:
- Short text (<50 words) → Models may not find meaningful issues. Consider skipping.
- Highly technical domain → Local models may lack domain knowledge. Weight flags lower.
- Creative writing → Factual review doesn't apply well. Use only for logical consistency.
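The fan-out/aggregate pattern above can be sketched against Ollama's local `/api/generate` endpoint. The reviewer model names, prompt wording, and flag format below are assumptions; the consensus step here matches flags by exact string, whereas a real implementation would need fuzzy or semantic matching since different models phrase the same issue differently:

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Example local reviewer models; substitute whatever is pulled in Ollama.
REVIEWERS = ["llama3.1:8b", "qwen2.5:7b", "mistral:7b"]

def critique(model: str, text: str) -> list[str]:
    """Ask one local Ollama model for issue flags, one per line ('NONE' if clean)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": model,
            "prompt": f"List factual or logical problems, one per line, or NONE:\n\n{text}",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        answer = json.load(resp)["response"]
    return [line.strip("- ").strip() for line in answer.splitlines()
            if line.strip() and line.strip().upper() != "NONE"]

def consensus(all_flags: list[list[str]], quorum: int = 2) -> list[str]:
    """Keep flags raised verbatim by at least `quorum` reviewers."""
    counts: dict[str, int] = {}
    for flags in all_flags:
        for f in set(flags):                 # count each reviewer at most once
            counts[f] = counts.get(f, 0) + 1
    return [f for f, n in counts.items() if n >= quorum]

def peer_review(text: str) -> list[str]:
    """Fan out to all reviewers in parallel, then synthesize consensus flags."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda m: critique(m, text), REVIEWERS))
    return consensus(results)
```

Running the reviewers in parallel keeps total latency near the slowest single model rather than the sum, which matters given the 30-60s budget the entry cites.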