external-ai-integration

Leverage external AI models (ChatGPT, Claude, Hugging Face, etc.) as tools via browser automation (Chrome Relay) and an optional Hugging Face API. Use when you...
Install via ClawdBot CLI:

clawdbot install konscious0beast/external-ai-integration

This skill provides patterns for using external AI models as tools that the assistant can call on-demand. It extends existing browser-automation and API-integration skills, enabling the assistant to:
Use Chrome Relay to automate interactions with ChatGPT, Claude, Gemini, or any other web‑based LLM that requires a browser interface.
Prerequisites:

- A Chrome profile with the target sites (chatgpt.com, claude.ai) already logged in (session cookies present).
- The Browser Automation playbook (memory/patterns/playbooks.md – "Browser Automation (Chrome Relay)").

Steps:

- Attach Chrome Relay to the logged-in profile (profile="chrome").
- Snapshot the page (refs="aria" for stable references).

Example workflow:
# This is a conceptual example; actual implementation uses browser tool calls.
import time

def ask_chatgpt(prompt):
    # 1. Ensure Chrome Relay is attached
    browser(action="open", profile="chrome", targetUrl="https://chatgpt.com")
    # 2. Snapshot to get references
    snap = browser(action="snapshot", refs="aria")
    # 3. Find input field (aria role="textbox") and send button
    input_ref = snap.find_element(role="textbox", name="Message")
    send_ref = snap.find_element(role="button", name="Send")
    # 4. Type prompt and click send
    browser(action="act", request={"kind": "type", "ref": input_ref, "text": prompt})
    browser(action="act", request={"kind": "click", "ref": send_ref})
    # 5. Wait for response (simplified)
    time.sleep(10)
    # 6. Snapshot again, extract response from last message bubble
    snap2 = browser(action="snapshot", refs="aria")
    response_element = snap2.find_last_message()
    return response_element.text
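The fixed `time.sleep(10)` in step 5 is fragile: slow generations get truncated and fast ones waste time. A generic polling helper is more robust — pass it a check that re-snapshots the page and returns the last message once it is complete. This is a sketch; `poll_until` is a hypothetical name, not part of the browser tool.

```python
import time

def poll_until(check, timeout=60.0, interval=2.0):
    """Call check() repeatedly until it returns a truthy value.

    Returns the first truthy result, or raises TimeoutError if the
    deadline passes. check() would typically re-snapshot the page and
    return the last message bubble's text once it stops changing.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")
```

Step 5 then becomes `response = poll_until(lambda: extract_last_message())` instead of a blind sleep.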
Key considerations:
For models hosted on Hugging Face Spaces or the Inference API, you can call them directly via HTTP requests.
Prerequisites:
- A target model ID (e.g. "gpt2", "google/flan-t5-large", "microsoft/DialoGPT-medium").

Steps:

- Retrieve the API token (e.g. via 1Password or ~/.huggingface/token).
- Call the Inference API with curl or exec with the requests Python module.

Example script (using curl):
#!/bin/bash
set -e
MODEL="google/flan-t5-large"
PROMPT="Translate English to German: How are you?"
API_TOKEN=$(op read "op://Personal/HuggingFace/api_token")
curl -s "https://api-inference.huggingface.co/models/$MODEL" \
-H "Authorization: Bearer $API_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"inputs\": \"$PROMPT\"}" | jq -r '.[0].generated_text'
Example Python function (using requests):
import requests
import os

def hf_inference(model, inputs, parameters=None):
    api_token = os.getenv("HF_TOKEN")  # or retrieve via 1Password
    url = f"https://api-inference.huggingface.co/models/{model}"
    headers = {"Authorization": f"Bearer {api_token}"}
    payload = {"inputs": inputs}
    if parameters:
        payload.update(parameters)
    resp = requests.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()
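Cold models on the Inference API return HTTP 503 until they finish loading. The function above can be hardened with the wait_for_model option plus a small retry loop. This is a sketch; `hf_inference_with_retry` is a hypothetical name, and the backoff values are illustrative.

```python
import time
import requests

def hf_inference_with_retry(model, inputs, api_token, retries=3, backoff=5.0):
    """Call the HF Inference API, waiting for cold models and retrying on 503."""
    url = f"https://api-inference.huggingface.co/models/{model}"
    headers = {"Authorization": f"Bearer {api_token}"}
    payload = {"inputs": inputs, "options": {"wait_for_model": True}}
    for attempt in range(retries):
        resp = requests.post(url, headers=headers, json=payload, timeout=120)
        if resp.status_code == 503:  # model still loading despite wait_for_model
            time.sleep(backoff * (attempt + 1))
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"{model} unavailable after {retries} attempts")
```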
Key considerations:

- Cold models return 503 until loaded; pass {"options":{"wait_for_model":true}} in parameters to block until the model is ready.

Instead of spawning a sub-agent, the assistant calls external AI within its own reasoning flow.
Pattern:
Example decision logic:
def external_ai_assist(task_type, prompt):
    if task_type == "code_review":
        # Use Claude via browser automation
        return ask_claude(prompt)
    elif task_type == "translation":
        # Use Hugging Face translation model
        return hf_inference("Helsinki-NLP/opus-mt-en-de", prompt)
    elif task_type == "creative_writing":
        # Use ChatGPT via browser automation
        return ask_chatgpt(prompt)
    else:
        raise ValueError(f"No external AI configured for {task_type}")
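The if/elif chain above can also be written as a registry, so new routes are added as data rather than code edits. A sketch — the handler lambdas below are stand-ins for the real ask_claude / hf_inference calls:

```python
def external_ai_assist(task_type, prompt, registry):
    """Route a task to the configured external-AI handler."""
    try:
        handler = registry[task_type]
    except KeyError:
        raise ValueError(f"No external AI configured for {task_type}") from None
    return handler(prompt)

# Hypothetical registry; each value is any callable taking a prompt string.
ROUTES = {
    "code_review": lambda p: f"claude:{p}",   # stand-in for ask_claude(p)
    "translation": lambda p: f"hf:{p}",       # stand-in for hf_inference(...)
}
```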
External models may require different prompting styles than the assistant's native model.
- Some models expect explicit task prefixes (e.g. "Translate English to German: ..." for T5).

Example prompt for code review:
You are an expert software engineer reviewing the following code snippet. Please:
1. Identify potential bugs or security issues.
2. Suggest performance improvements.
3. Comment on code style and readability.
4. Output your review as a JSON with keys "bugs", "performance", "style".
Code:

def calculate_average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)
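Even though the prompt asks for JSON with keys "bugs", "performance", and "style", models often wrap the object in prose or markdown fences, so the reply needs defensive parsing. A minimal sketch, assuming a single JSON object in the reply; `parse_review` is a hypothetical helper:

```python
import json
import re

def parse_review(raw):
    """Extract the JSON review object from a model reply.

    Grabs the first-to-last brace span (tolerates surrounding prose or
    markdown fences) and validates the keys requested by the prompt.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model reply")
    review = json.loads(match.group(0))
    missing = {"bugs", "performance", "style"} - review.keys()
    if missing:
        raise ValueError(f"review missing keys: {missing}")
    return review
```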
External services can fail; plan for graceful degradation.
- Log failures to memory/YYYY-MM-DD.md with tag external-ai-failure for later analysis.

Example fallback structure:
try:
    response = ask_chatgpt(prompt)
except (BrowserError, TimeoutError) as e:
    log_failure("ChatGPT", e)
    # Fallback to Hugging Face. Note: a failure raised here would NOT be
    # caught by a sibling except clause on the same try, so nest it.
    try:
        response = hf_inference("google/flan-t5-xxl", prompt)
    except Exception as e2:
        log_failure("All external AI", e2)
        response = None
except Exception as e:
    log_failure("All external AI", e)
    response = None

if response:
    integrate(response)
else:
    # Continue with assistant's own reasoning
    pass
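The log_failure helper used above is not defined in this skill; a minimal sketch, assuming failures append to the daily memory file (memory/YYYY-MM-DD.md) with the external-ai-failure tag described earlier:

```python
import datetime
import pathlib

def log_failure(service, error, memory_dir="memory"):
    """Append an external-ai-failure entry to today's memory file.

    Hypothetical helper: one bullet per failure, tagged for later analysis.
    Returns the path written so callers can surface it if needed.
    """
    today = datetime.date.today().isoformat()
    path = pathlib.Path(memory_dir) / f"{today}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(f"- [external-ai-failure] {service}: {error}\n")
    return path
```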
Scenario: The assistant is asked to review a complex React component. It uses Claude (via Chrome Relay) for a detailed second opinion.
Steps:
- Call ask_claude(prompt) using browser automation.

Scenario: User provides a paragraph in English and asks for a German translation. Assistant calls a Hugging Face translation model.
Steps:
- Prefix the text with "Translate English to German: ".
- Call hf_inference("Helsinki-NLP/opus-mt-en-de", prompt).

Scenario: User needs ideas for a blog post title. Assistant uses ChatGPT to generate 10 options.
Steps:
Scenario: User asks for a strategic analysis of a business decision. Assistant uses its own reasoning, then asks ChatGPT for potential blind spots.
Steps:
Related resources:

- docs/browser-automation.md – Chrome Relay setup and commands.
- skills/huggingface/SKILL.md – Hugging Face API usage.
- skills/1password/SKILL.md – retrieving secrets.
- memory/patterns/playbooks.md – Browser Automation playbook.
- scripts/external_ai_integration.py (this skill's core implementation).
- playbooks/external-ai-integration-playbook.md (orchestration playbook).

When a task would benefit from external AI reasoning, read this skill to decide which model to use and how to call it. Store successful patterns in memory/patterns/tools.md. Update pending.md if external AI fails repeatedly and needs manual configuration.
This skill increases autonomy by expanding the assistant's toolset with external AI models, allowing it to tackle a wider range of tasks without spawning sub-agents while maintaining control over the workflow.
Generated Mar 1, 2026
Law firms can use this skill to automate the review of contracts by leveraging external AI models like Claude for detailed clause analysis. The assistant can extract key terms, compare with standard templates, and generate summaries, improving efficiency and reducing manual oversight.
E-commerce platforms can integrate ChatGPT via browser automation to handle complex customer inquiries, such as troubleshooting product issues or providing personalized recommendations. This augments the assistant's ability to offer real-time, human-like responses without spawning separate agents.
Software development teams can use this skill to call external models like ChatGPT for automated code reviews and bug detection. The assistant can submit code snippets, receive feedback on best practices, and integrate suggestions into the development workflow, speeding up the review process.
Media companies can leverage Hugging Face API models for real-time translation of articles or social media posts. The assistant can process text inputs, call specialized translation models, and output localized content, enabling faster global content distribution.
Investment firms can use this skill to automate the summarization of lengthy financial reports using external AI models like Claude. The assistant can extract key insights, trends, and risks, providing concise summaries for analysts to make informed decisions quickly.
Offer this skill as part of a subscription-based AI assistant platform for businesses, charging monthly fees based on usage tiers. Revenue is generated from enterprises needing scalable external AI integration without in-house development.
Provide custom integration services to companies looking to deploy this skill for specific use cases, such as legal or customer support. Revenue comes from one-time project fees and ongoing maintenance contracts.
Create a marketplace where developers can access pre-configured external AI model integrations via APIs, with revenue generated from pay-per-use transactions or commission on model calls. This model targets tech startups and indie developers.
💬 Integration Tip
Ensure Chrome Relay is properly configured and logged into target LLM sites before automation to avoid session issues, and store API tokens securely using tools like 1Password for Hugging Face integrations.