model-setup — safely manage OpenClaw model configurations. Use it to add, test, and configure new models in models.json, including API key validation, model accessibility testing, tool-calling capability detection, setting the default model, and assigning models to specific agents. Every operation automatically backs up the configuration file for safety.
Install via ClawdBot CLI:
clawdbot install YKaiXu/model-setup
Ask the user for the following information:

Required parameters:
- Provider ID (e.g. openai, anthropic, or a custom ID)
- Base URL (e.g. https://api.openai.com/v1)
- API key (format: key_id:secret)
- Model ID (e.g. gpt-4, claude-3-opus)
- Model display name (e.g. GPT-4 (OpenAI))

Optional parameters:
- API type (default: openai-completions)
- Reasoning support (default: false)
- Input modalities (default: ["text"])
- Streaming support (default: false)

Test the model configuration with scripts/test_model.py:
python3 scripts/test_model.py '<provider_config_json>' '<model_id>' [--test-tool-calling] [--test-streaming]
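The provider config is passed as a single JSON argument, so it is easiest to build it with `json.dumps` rather than hand-quoting it in the shell. A minimal sketch (the base URL and key below are placeholders, not real credentials):

```python
import json

# Placeholder values; substitute your real base URL and API key.
provider_config = {
    "baseUrl": "https://api.openai.com/v1",
    "apiKey": "sk-xxx:yyy",
    "api": "openai-completions",
}

# Assemble the argument vector exactly as the synopsis above expects.
cmd = [
    "python3", "scripts/test_model.py",
    json.dumps(provider_config),   # '<provider_config_json>'
    "gpt-4",                       # '<model_id>'
    "--test-tool-calling",
]
print(cmd)
```

Passing the serialized dict as one argv element avoids shell-quoting mistakes around the embedded quotes and colons in the API key.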
The test covers:
- Model accessibility with the given API key
- Tool calling (with --test-tool-calling)
- Streaming (with --test-streaming)

If a test fails, report the cause of the error to the user and let them correct the configuration.
Ask the user whether tool calling should be tested. If so, send a test request that includes a tool call and verify that the model handles it correctly.
Show the user a complete summary of the configuration and ask them to confirm the addition.
Ask the user whether to set this model as the default model.
Ask the user whether this model should be configured for a specific agent. If so, ask for the agent path (e.g. /home/yupeng/.openclaw/agents/main).
Add the model configuration with scripts/add_model.py:
python3 scripts/add_model.py '<config_path>' '<provider_id>' '<provider_config_json>' '<model_config_json>' [--default] [--agent <agent_path>]
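A sketch of how a timestamped backup can be produced before touching the config file. The function name is hypothetical (the real add_model.py's internals may differ), but the suffix matches the documented .json.backup.YYYYMMDD_HHMMSS convention:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_config(config_path: str) -> Path:
    """Copy a config file to a sibling timestamped backup and return its path.

    models.json -> models.json.backup.YYYYMMDD_HHMMSS
    """
    src = Path(config_path)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = src.with_name(src.name + ".backup." + stamp)
    shutil.copy2(src, dest)  # copy2 preserves timestamps and permissions
    return dest

# Usage (only if the file exists):
# backup_config("/home/yupeng/.openclaw/agents/main/agent/models.json")
```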
The script automatically:
- Backs up the existing configuration file (as .json.backup.YYYYMMDD_HHMMSS)

Check the result of the operation in:
- /home/yupeng/.openclaw/agents/main/agent/models.json
- /home/yupeng/.openclaw/agents/main/agent/config.json
- /home/yupeng/.openclaw/agents//agent/config.json

List all configured models with scripts/list_models.py:
# JSON output
python3 scripts/list_models.py
# Formatted text output
python3 scripts/list_models.py --format
# Specify a config file path
python3 scripts/list_models.py /path/to/models.json --format
The output includes entries like:
{
"provider_id": "openai",
"provider_config": {
"baseUrl": "https://api.openai.com/v1",
"apiKey": "sk-xxx:yyy",
"api": "openai-completions"
},
"model_config": {
"id": "gpt-4",
"name": "GPT-4 (OpenAI)",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0.03,
"output": 0.06,
"cacheRead": 0.001,
"cacheWrite": 0.004
},
"contextWindow": 128000,
"maxTokens": 4096,
"api": "openai-completions"
}
}
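The `cost` fields can be used for cost tracking. Assuming they are USD per 1,000 tokens (a common pricing convention; the unit is not stated in the config itself), a per-request estimate could be sketched as:

```python
def estimate_cost(cost: dict, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost, assuming `cost` values are USD per 1K tokens."""
    return (input_tokens / 1000) * cost["input"] + (output_tokens / 1000) * cost["output"]

# The gpt-4 entry above: 0.03 in / 0.06 out per 1K tokens.
gpt4_cost = {"input": 0.03, "output": 0.06}
print(estimate_cost(gpt4_cost, 2000, 500))  # ≈ $0.09
```

Extending this to `cacheRead`/`cacheWrite` would follow the same pattern when those token counts are available.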
{
"provider_id": "anthropic",
"provider_config": {
"baseUrl": "https://api.anthropic.com/v1",
"apiKey": "sk-ant-xxx",
"api": "anthropic-completions"
},
"model_config": {
"id": "claude-3-opus-20240229",
"name": "Claude 3 Opus (Anthropic)",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0.015,
"output": 0.075
},
"contextWindow": 200000,
"maxTokens": 4096,
"api": "anthropic-completions"
}
}
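Before handing a model entry to add_model.py, a quick sanity check can catch typos early. The required-field set below is an assumption inferred from the two examples above, not a documented schema:

```python
# Fields present in every model_config example shown above (assumed required).
REQUIRED_MODEL_FIELDS = {"id", "name", "api"}

def missing_fields(model_config: dict) -> set:
    """Return the assumed-required fields absent from a model_config entry."""
    return REQUIRED_MODEL_FIELDS - model_config.keys()

print(missing_fields({"id": "gpt-4"}))  # reports which fields are missing
```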
Common errors and solutions:
Generated Mar 1, 2026
A development team needs to add and test new AI models like GPT-4 or Claude 3 into their OpenClaw-based applications. This skill automates the configuration process, ensuring API keys are validated and models are accessible before deployment, reducing setup errors and downtime.
A large enterprise deploys custom AI agents for internal tools, requiring secure management of multiple model configurations across different departments. This skill helps set up models with backups and validation, ensuring reliability and compliance with security protocols.
Researchers at an academic institution test various AI models for experiments, needing to quickly switch between providers and verify tool-calling capabilities. This skill streamlines model testing and configuration, enabling efficient experimentation without manual file edits.
A startup scaling its AI-powered product needs to integrate new models as user demand grows. This skill facilitates adding models with cost tracking and default settings, helping optimize performance and manage expenses during expansion.
A consultancy firm builds tailored AI agents for clients, requiring model configurations that support specific tools and streaming features. This skill ensures each agent is correctly configured with validated models, enhancing client deliverables and reducing support calls.
Offer a platform where users pay monthly for access to multiple AI models managed via this skill. Revenue comes from tiered subscriptions based on model usage and features, with automated setup reducing operational costs.
Provide managed services to businesses for configuring and maintaining their AI models. Charge a fee for setup, testing, and ongoing support, leveraging this skill to ensure reliable and secure model deployments.
License this skill as part of an enterprise AI toolkit to large organizations. Generate revenue through one-time licenses or annual fees, with value in reducing IT overhead and enhancing model management security.
💬 Integration Tip
Ensure scripts like test_model.py and add_model.py are executable and paths are correctly set in the environment to avoid permission errors during model validation.
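A small pre-flight check along these lines can surface path and permission problems before any model validation runs (the helper is a sketch, not part of the skill itself):

```python
import os
import shutil

def check_environment(scripts):
    """Return a list of problems that would break the model-setup workflow."""
    problems = []
    if shutil.which("python3") is None:
        problems.append("python3 not found on PATH")
    for script in scripts:
        if not os.path.isfile(script):
            problems.append(f"{script} is missing")
        elif not os.access(script, os.R_OK):
            problems.append(f"{script} is not readable")
    return problems

# Check the two helper scripts before starting.
issues = check_environment(["scripts/test_model.py", "scripts/add_model.py"])
print(issues or "environment OK")
```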
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Clau...
Helps users discover and install agent skills when they ask questions like "how do I do X", "find a skill for X", "is there a skill that can...", or express interest in extending capabilities. This skill should be used when the user is looking for functionality that might exist as an installable skill.
Search and analyze your own session logs (older/parent conversations) using jq.
Typed knowledge graph for structured agent memory and composable skills. Use when creating/querying entities (Person, Project, Task, Event, Document), linking related objects, enforcing constraints, planning multi-step actions as graph transformations, or when skills need to share state. Trigger on "remember", "what do I know about", "link X to Y", "show dependencies", entity CRUD, or cross-skill data access.
Ultimate AI agent memory system for Cursor, Claude, ChatGPT & Copilot. WAL protocol + vector search + git-notes + cloud backup. Never lose context again. Vibe-coding ready.
Headless browser automation CLI optimized for AI agents with accessibility tree snapshots and ref-based element selection