freeride-opencode

Configure and optimize OpenCode Zen free models with smart fallbacks for subtasks, heartbeat, and cron jobs. Use when setting up cost-effective AI model routing with automatic failover between free models.
Install via ClawdBot CLI:
```shell
clawdbot install Heldinhow/freeride-opencode
```

Configure OpenCode Zen free models with intelligent fallbacks to optimize costs while maintaining reliability.
⚠️ Important: To use this skill, you need two API keys:
1. OpenCode Zen API key - For OpenCode free models (MiniMax M2.1, Kimi K2.5, GLM 4.7, GPT 5 Nano)
2. OpenRouter API key - For OpenRouter free models (Trinity Large and other OpenRouter providers)
Configure both keys in your OpenCode/Zen settings before applying these configurations.
Apply optimal free model configuration with provider diversification:
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "opencode/minimax-m2.1-free",
        "fallbacks": [
          "openrouter/arcee-ai/trinity-large-preview:free",
          "opencode/kimi-k2.5-free"
        ]
      },
      "heartbeat": {
        "model": "opencode/glm-4.7-free"
      },
      "subagents": {
        "model": "opencode/kimi-k2.5-free"
      }
    }
  }
}
```
This skill uses models from two different providers, so you need both API keys configured:
Required for:
- opencode/minimax-m2.1-free
- opencode/kimi-k2.5-free
- opencode/glm-4.7-free
- opencode/gpt-5-nano

Where to get: Sign up at OpenCode Zen and generate an API key.
Required for:
- openrouter/arcee-ai/trinity-large-preview:free

Where to get: Sign up at OpenRouter.ai and generate an API key.
Add both keys to your OpenCode configuration:
```json
{
  "providers": {
    "opencode": {
      "api_key": "YOUR_OPENCODE_ZEN_API_KEY"
    },
    "openrouter": {
      "api_key": "YOUR_OPENROUTER_API_KEY"
    }
  }
}
```
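Before applying the routing configuration, it can help to confirm that both keys are actually filled in. A minimal sketch (the config path and JSON layout mirror the snippet above; adjust to where your settings actually live):

```python
import json

def check_provider_keys(config_path: str) -> list[str]:
    """Return the providers whose api_key is missing or still a placeholder."""
    with open(config_path) as f:
        config = json.load(f)
    missing = []
    for provider in ("opencode", "openrouter"):
        key = config.get("providers", {}).get(provider, {}).get("api_key", "")
        # Placeholders like "YOUR_OPENROUTER_API_KEY" count as missing
        if not key or key.startswith("YOUR_"):
            missing.append(provider)
    return missing
```

An empty result means both providers are ready; anything listed will cause authentication errors once a fallback routes to that provider.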
See models.md for detailed model comparisons, capabilities, and provider information.
| Task Type | Recommended Model | Rationale |
|-----------|------------------|-----------|
| Primary/General | MiniMax M2.1 Free | Best free model capability |
| Fallback 1 | Trinity Large Free | Different provider (OpenRouter) for rate limit resilience |
| Fallback 2 | Kimi K2.5 Free | Balanced general-purpose model |
| Heartbeat | GLM 4.7 Free | Multilingual, cost-effective for frequent checks |
| Subtasks/Subagents | Kimi K2.5 Free | Balanced capability for secondary tasks |

| Model | ID | Best For |
|-------|-----|----------|
| MiniMax M2.1 Free | opencode/minimax-m2.1-free | Complex reasoning, coding (Primary) |
| Trinity Large Free | openrouter/arcee-ai/trinity-large-preview:free | High-quality OpenRouter option (Fallback 1) |
| Kimi K2.5 Free | opencode/kimi-k2.5-free | Balanced general purpose (Fallback 2) |
This version implements provider diversification to maximize resilience against rate limits and service disruptions:
```jsonc
"fallbacks": [
  "openrouter/arcee-ai/trinity-large-preview:free",  // Different provider (OpenRouter)
  "opencode/kimi-k2.5-free"                          // Same provider as primary (OpenCode)
]
```
Why provider diversification matters: free-tier rate limits and outages typically apply per provider, so when OpenCode Zen throttles the primary model, the OpenRouter fallback is usually still available (and vice versa). Keeping at least one fallback on a different provider prevents a single provider issue from exhausting the whole chain.

Fallback triggers typically include rate limiting (HTTP 429), authentication errors (401/403), request timeouts, and temporary model unavailability.
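The failover behavior can be sketched as a simple loop: try the primary, and on a retryable error move to the next model in the list. This is an illustrative sketch, not the actual OpenCode/ClawdBot implementation; `RateLimitError` and the `call_model` signature are hypothetical.

```python
class RateLimitError(Exception):
    """Hypothetical error for HTTP 429 responses from a provider."""

def complete_with_fallbacks(call_model, primary, fallbacks, prompt):
    """Try the primary model, then each fallback in order.

    `call_model(model_id, prompt)` stands in for a real provider client;
    a RateLimitError advances the loop to the next model in the chain.
    """
    last_error = None
    for model_id in [primary, *fallbacks]:
        try:
            return model_id, call_model(model_id, prompt)
        except RateLimitError as exc:
            last_error = exc  # provider throttled this model; try the next one
    raise RuntimeError("all models in the fallback chain failed") from last_error
```

With the configuration above, a 429 from OpenCode Zen on the primary would route the request to the Trinity Large fallback on OpenRouter, whose rate limits are independent.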
For heartbeats, you can also drop to the cheapest available model:

```json
"heartbeat": {
  "every": "30m",
  "model": "opencode/gpt-5-nano"
}
```
Use the cheapest model for frequent, lightweight checks.
```json
"subagents": {
  "model": "opencode/kimi-k2.5-free"
}
```
Good balance for secondary tasks that need reasonable capability.
```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "opencode/minimax-m2.1-free",
        "fallbacks": [
          "openrouter/arcee-ai/trinity-large-preview:free",
          "opencode/kimi-k2.5-free"
        ]
      },
      "models": {
        "opencode/minimax-m2.1-free": { "alias": "MiniMax M2.1" },
        "opencode/kimi-k2.5-free": { "alias": "Kimi K2.5" },
        "openrouter/arcee-ai/trinity-large-preview:free": { "alias": "Trinity Large" }
      },
      "heartbeat": {
        "every": "30m",
        "model": "opencode/glm-4.7-free"
      },
      "subagents": {
        "model": "opencode/kimi-k2.5-free"
      }
    }
  }
}
```
Use OpenClaw CLI:
```shell
openclaw config.patch --raw '{
  "agents": {
    "defaults": {
      "model": {
        "primary": "opencode/minimax-m2.1-free",
        "fallbacks": ["openrouter/arcee-ai/trinity-large-preview:free", "opencode/kimi-k2.5-free"]
      },
      "heartbeat": { "model": "opencode/glm-4.7-free" },
      "subagents": { "model": "opencode/kimi-k2.5-free" }
    }
  }
}'
```
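Assuming `config.patch` deep-merges the supplied JSON into the existing configuration rather than replacing it wholesale (an assumption; check the OpenClaw documentation), the merge semantics look roughly like:

```python
def deep_merge(base: dict, patch: dict) -> dict:
    """Recursively merge `patch` into a copy of `base`.

    Nested dicts are merged key by key; any non-dict value in `patch`
    (including lists such as `fallbacks`) replaces the base value outright.
    """
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```

Under these semantics, patching `agents.defaults.heartbeat` updates only the heartbeat model and leaves sibling keys like `agents.defaults.model` untouched.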
Troubleshooting:

- Authentication errors (401/403)? Check that both the OpenCode Zen and OpenRouter API keys are set in your provider configuration.
- Rate limits still occurring? Add fallbacks from a different provider; limits are typically enforced per provider.
- Responses too slow? Route the task to a lighter model such as GLM 4.7 Free or GPT 5 Nano.
- Model not available? Verify the model ID follows the expected pattern: opencode/model-id-free or openrouter/provider/model:free.
- OpenRouter models not working? Confirm the OpenRouter API key is configured and the model ID ends in :free.
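Model ID typos are a common cause of "model not available" errors. A quick validator for the two ID shapes used in this document (the patterns are inferred from the examples here, not from an official schema; note that opencode/gpt-5-nano lacks the -free suffix, so the OpenCode check is kept loose):

```python
import re

# e.g. opencode/minimax-m2.1-free, opencode/gpt-5-nano
OPENCODE_ID = re.compile(r"^opencode/[a-z0-9][a-z0-9.-]*$")
# e.g. openrouter/arcee-ai/trinity-large-preview:free
OPENROUTER_FREE = re.compile(r"^openrouter/[a-z0-9-]+/[a-z0-9.-]+:free$")

def is_valid_model_id(model_id: str) -> bool:
    """Check a model ID against the two provider-prefix shapes above."""
    return bool(OPENCODE_ID.match(model_id) or OPENROUTER_FREE.match(model_id))
```

Running every ID from your config through this check before applying it catches missing provider prefixes and a dropped :free suffix on OpenRouter IDs.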
Complete reference of all free models with capabilities, providers, performance comparisons, and error handling.
Ready-to-use configuration templates for different use cases (minimal, complete, cost-optimized, performance-optimized).
Practical examples showing how to use this skill in real scenarios.
Generated Mar 1, 2026
Early-stage startups can use this skill to set up cost-effective AI model routing for building and testing MVPs without incurring high API costs. The fallback strategy ensures reliability during rapid iteration cycles, allowing developers to focus on product features rather than managing model availability.
Online learning platforms can integrate this skill to provide AI-powered tutoring and content generation using free models, minimizing operational expenses. The provider diversification helps maintain service uptime during peak usage times, ensuring students have consistent access to learning aids.
Small businesses can deploy AI chatbots for handling basic customer inquiries and support tickets, leveraging free models to reduce costs. The fallback mechanism ensures the chatbot remains functional even if one provider experiences issues, improving customer satisfaction.
Bloggers and content creators can use this skill to generate articles, summaries, and social media posts with AI assistance, optimizing for cost by routing through free models. The per-task configurations allow efficient use of models for different content types, such as using cheaper models for heartbeat checks.
Non-profit organizations can apply this skill to analyze donor data and generate reports using AI models, keeping expenses low while benefiting from automated insights. The fallback strategy provides resilience against rate limits, ensuring critical analyses are completed on time.
Offer a basic tier using free models from this skill to attract users, then upsell premium features or higher-tier models for advanced capabilities. This reduces initial infrastructure costs while building a user base.
Provide consulting services to help clients set up and optimize this skill for their specific use cases, such as configuring fallbacks and model selections. Charge for expertise in maximizing cost savings and reliability.
Integrate this skill into a white-label platform that businesses can rebrand for their own AI needs, such as customer support or content generation. Monetize through licensing fees or revenue sharing.
💬 Integration Tip
Ensure both OpenCode Zen and OpenRouter API keys are configured in your settings before applying the configuration to avoid authentication errors during fallback transitions.