tinyfish-web-agent
Use the TinyFish web agent to scrape websites, extract structured data, and automate browser actions using natural language. Use it when you need to extract data from websites, handle bot-protected sites, or automate web tasks.
Install via ClawdBot CLI:
clawdbot install simantak-dabhade/tinyfish-web-agent
Requires: TINYFISH_API_KEY environment variable
Before making any API call, always run this first to verify the key is available:
[ -n "$TINYFISH_API_KEY" ] && echo "TINYFISH_API_KEY is set" || echo "TINYFISH_API_KEY is NOT set"
If the key is not set, you MUST stop and ask the user to add their API key. Do NOT fall back to other tools or approaches — the task requires TinyFish.
Tell the user:
You need a TinyFish API key. Get one at:
>
Then set it so the agent can use it:
Option 1 — Environment variable (works everywhere):

    export TINYFISH_API_KEY="your-key-here"

Option 2 — Claude Code settings (Claude Code only):
Add to ~/.claude/settings.local.json:

    {
      "env": {
        "TINYFISH_API_KEY": "your-key-here"
      }
    }
Do NOT proceed until the key is confirmed available.
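The pre-flight check and the hard stop above can be combined into one small guard (a sketch; require_tinyfish_key is a hypothetical helper name, not part of the skill):

```shell
# Guard: succeed only when the key is present; never fall back to other tools.
require_tinyfish_key() {
  if [ -z "$TINYFISH_API_KEY" ]; then
    echo "TINYFISH_API_KEY is NOT set" >&2
    return 1
  fi
  echo "TINYFISH_API_KEY is set"
}

# Usage before any API call:
#   require_tinyfish_key || exit 1
```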
Extract data from a page. Specify the JSON structure you want:
curl -N -s -X POST "https://agent.tinyfish.ai/v1/automation/run-sse" \
-H "X-API-Key: $TINYFISH_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com",
"goal": "Extract product info as JSON: {\"name\": str, \"price\": str, \"in_stock\": bool}"
}'
Extract lists of data with explicit structure:
curl -N -s -X POST "https://agent.tinyfish.ai/v1/automation/run-sse" \
-H "X-API-Key: $TINYFISH_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com/products",
"goal": "Extract all products as JSON array: [{\"name\": str, \"price\": str, \"url\": str}]"
}'
For bot-protected sites, add "browser_profile": "stealth" to the request body:
curl -N -s -X POST "https://agent.tinyfish.ai/v1/automation/run-sse" \
-H "X-API-Key: $TINYFISH_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://protected-site.com",
"goal": "Extract product data as JSON: {\"name\": str, \"price\": str, \"description\": str}",
"browser_profile": "stealth"
}'
Route through a specific country by adding "proxy_config" to the body:
curl -N -s -X POST "https://agent.tinyfish.ai/v1/automation/run-sse" \
-H "X-API-Key: $TINYFISH_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://geo-restricted-site.com",
"goal": "Extract pricing data as JSON: {\"item\": str, \"price\": str, \"currency\": str}",
"browser_profile": "stealth",
"proxy_config": {"enabled": true, "country_code": "US"}
}'
The SSE stream returns data: {...} lines. The final result is the event where type == "COMPLETE" and status == "COMPLETED" — the extracted data is in the resultJson field. Claude reads the raw SSE output directly; no script-side parsing is needed.
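If a script does need to post-process the stream, the COMPLETE event can be isolated with sed and jq. This is a sketch, assuming jq is installed, each event arrives as a single "data: {...}" line, and resultJson holds the extracted object directly; the simulated stream below stands in for real curl output:

```shell
# Simulated SSE output in place of the curl stream above.
sse='data: {"type":"PROGRESS","status":"RUNNING"}
data: {"type":"COMPLETE","status":"COMPLETED","resultJson":{"name":"Widget","price":"$9.99"}}'

# Strip the "data: " prefix, then keep only the final COMPLETE event.
result=$(printf '%s\n' "$sse" \
  | sed -n 's/^data: //p' \
  | jq -c 'select(.type == "COMPLETE" and .status == "COMPLETED") | .resultJson')
echo "$result"
```

To use this on live output, replace the simulated variable with the curl command piped into the same sed | jq stage.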
When extracting from multiple independent sources, make separate parallel curl calls instead of combining into one prompt:
Good - Parallel calls:
# Compare pizza prices - run these simultaneously
curl -N -s -X POST "https://agent.tinyfish.ai/v1/automation/run-sse" \
-H "X-API-Key: $TINYFISH_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://pizzahut.com",
"goal": "Extract pizza prices as JSON: [{\"name\": str, \"price\": str}]"
}'
curl -N -s -X POST "https://agent.tinyfish.ai/v1/automation/run-sse" \
-H "X-API-Key: $TINYFISH_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://dominos.com",
"goal": "Extract pizza prices as JSON: [{\"name\": str, \"price\": str}]"
}'
Bad - Single combined call:
# Don't do this - less reliable and slower
curl -N -s -X POST "https://agent.tinyfish.ai/v1/automation/run-sse" \
-H "X-API-Key: $TINYFISH_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://pizzahut.com",
"goal": "Extract prices from Pizza Hut and also go to Dominos..."
}'
Each independent extraction task should be its own API call. This is faster (parallel execution) and more reliable.
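The parallel pattern above can be wrapped in a small helper so each call stays one line (a sketch; tinyfish_extract is a hypothetical name, and jq -n is used so quotes inside the goal are escaped correctly):

```shell
# Build the request body with jq so special characters in the goal are
# escaped safely, then stream one extraction.
tinyfish_extract() {
  body=$(jq -n --arg url "$1" --arg goal "$2" '{url: $url, goal: $goal}')
  curl -N -s -X POST "https://agent.tinyfish.ai/v1/automation/run-sse" \
    -H "X-API-Key: $TINYFISH_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$body"
}

# Launch the independent extractions at once, then wait for both:
#   tinyfish_extract "https://pizzahut.com" 'Extract pizza prices as JSON: [{"name": str, "price": str}]' > pizzahut.out &
#   tinyfish_extract "https://dominos.com" 'Extract pizza prices as JSON: [{"name": str, "price": str}]' > dominos.out &
#   wait
```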
Generated Mar 1, 2026
Retailers and brands use TinyFish to track competitor pricing across multiple e-commerce sites. By automating data extraction, they can adjust their own prices in real-time to stay competitive and maximize margins.
Startups leverage TinyFish to gather market intelligence, such as product features and customer reviews from various websites. This helps in identifying market gaps and refining their product strategies without manual data collection.
Real estate agencies use TinyFish to scrape property listings from multiple portals, extracting details like price, location, and amenities. This automates the compilation of comprehensive market reports for clients.
Financial analysts employ TinyFish to extract stock prices, economic indicators, or news articles from bot-protected financial websites. This supports automated reporting and investment decision-making with up-to-date data.
Researchers use TinyFish to gather data from scholarly articles, government databases, or social media for analysis. It handles complex sites with stealth mode, ensuring reliable data for studies and publications.
Offer TinyFish as a cloud-based service with tiered pricing based on API usage, such as number of requests or data volume. This model provides recurring revenue and scales with customer growth, targeting businesses needing regular web data.
Sell custom licenses to large corporations for high-volume data extraction, including dedicated support and integration services. This model focuses on long-term contracts and premium features like advanced stealth and proxy configurations.
Provide a free tier with limited API calls to attract individual developers and small teams, then monetize through paid credits for additional usage. This encourages adoption and upsells as users scale their data needs.
💬 Integration Tip
Always verify the TINYFISH_API_KEY environment variable before use and structure API calls with explicit JSON formats for reliable data extraction.