# tavily-search-pro

Tavily AI search platform with 5 modes: Search (web/news/finance), Extract (URL content), Crawl (website crawling), Map (sitemap discovery), and Research (deep research with citations). Use for: web search with LLM answers, content extraction, site crawling, and deep research.
## Installation

Install via ClawdBot CLI:

clawdbot install Shaharsha/tavily-search-pro

Install dependencies (pip):

bash skills/tavily/install.sh
## Configuration

Requires the TAVILY_API_KEY environment variable.

| Env Variable | Default | Description |
|---|---|---|
| TAVILY_API_KEY | (none) | Required. Tavily API key |
Set in OpenClaw config:
{
"env": {
"TAVILY_API_KEY": "tvly-..."
}
}
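Before invoking any command, it can help to fail fast if the key is missing. A minimal sketch (the `check_tavily_key` helper is illustrative, not part of the skill; the `tvly-` prefix matches the config example above):

```python
def check_tavily_key(env) -> bool:
    """Illustrative check that a plausible Tavily key is configured.

    Pass any mapping, e.g. os.environ. The real script reads
    TAVILY_API_KEY itself; this just gives a clearer early error.
    """
    key = env.get("TAVILY_API_KEY", "")
    return key.startswith("tvly-") and len(key) > len("tvly-")
```

For example, `check_tavily_key(os.environ)` returning False means the variable is unset or malformed.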
## Usage

python3 skills/tavily/lib/tavily_search.py <command> "query" [options]
### search

General-purpose web search with optional LLM-synthesized answer.
python3 lib/tavily_search.py search "query" [options]
Examples:
# Basic search
python3 lib/tavily_search.py search "latest AI news"
# With LLM answer
python3 lib/tavily_search.py search "what is quantum computing" --answer
# Advanced depth (better results, 2 credits)
python3 lib/tavily_search.py search "climate change solutions" --depth advanced
# Time-filtered
python3 lib/tavily_search.py search "OpenAI announcements" --time week
# Domain filtering
python3 lib/tavily_search.py search "machine learning" --include-domains arxiv.org,nature.com
# Country boost
python3 lib/tavily_search.py search "tech startups" --country US
# With raw content and images
python3 lib/tavily_search.py search "solar energy" --raw --images -n 10
# JSON output
python3 lib/tavily_search.py search "bitcoin price" --json
Output format (text):
Answer: <LLM-synthesized answer if --answer>
Results:
1. Result Title
https://example.com/article
Content snippet from the page...
2. Another Result
https://example.com/other
Another snippet...
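With --json, the output can be post-processed programmatically. A sketch of rendering it back into the text layout above, assuming the payload carries an optional "answer" plus a "results" list of title/url/content entries (verify the field names against your actual --json output):

```python
def format_results(payload: dict) -> str:
    """Render assumed --json search output in the text layout shown above."""
    lines = []
    if payload.get("answer"):
        lines.append(f"Answer: {payload['answer']}")
    lines.append("Results:")
    for i, r in enumerate(payload.get("results", []), start=1):
        lines.append(f"{i}. {r.get('title', '')}")
        lines.append(f"   {r.get('url', '')}")
        lines.append(f"   {r.get('content', '')}")
    return "\n".join(lines)
```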
### news

Search optimized for news articles. Sets topic=news.
python3 lib/tavily_search.py news "query" [options]
Examples:
python3 lib/tavily_search.py news "AI regulation"
python3 lib/tavily_search.py news "Israel tech" --time day --answer
python3 lib/tavily_search.py news "stock market" --time week -n 10
### finance

Search optimized for financial data and news. Sets topic=finance.
python3 lib/tavily_search.py finance "query" [options]
Examples:
python3 lib/tavily_search.py finance "NVIDIA stock analysis"
python3 lib/tavily_search.py finance "cryptocurrency market trends" --time month
python3 lib/tavily_search.py finance "S&P 500 forecast 2026" --answer
### extract

Extract readable content from one or more URLs.
python3 lib/tavily_search.py extract URL [URL...] [options]
Parameters:
- urls: One or more URLs to extract (positional args)
- --depth basic|advanced: Extraction depth
- --format markdown|text: Output format (default: markdown)
- --query "text": Rerank extracted chunks by relevance to query

Examples:
# Extract single URL
python3 lib/tavily_search.py extract "https://example.com/article"
# Extract multiple URLs
python3 lib/tavily_search.py extract "https://url1.com" "https://url2.com"
# Advanced extraction with relevance reranking
python3 lib/tavily_search.py extract "https://arxiv.org/paper" --depth advanced --query "transformer architecture"
# Text format output
python3 lib/tavily_search.py extract "https://example.com" --format text
Output format:
URL: https://example.com/article
─────────────────────────────────
<Extracted content in markdown/text>
URL: https://another.com/page
─────────────────────────────────
<Extracted content>
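Extraction can also be driven from another script. A hedged sketch of a wrapper that assembles the argv for the extract subcommand (the `build_extract_cmd` helper is illustrative; the script path is taken from the Usage line above and may differ in your layout):

```python
import subprocess

SCRIPT = "skills/tavily/lib/tavily_search.py"  # path from the Usage line above

def build_extract_cmd(urls, depth="basic", fmt="markdown", query=None):
    """Assemble argv for the extract subcommand (illustrative wrapper)."""
    cmd = ["python3", SCRIPT, "extract", *urls, "--depth", depth, "--format", fmt]
    if query:
        cmd += ["--query", query]
    return cmd

# To actually run it (requires the skill installed and TAVILY_API_KEY set):
# out = subprocess.run(build_extract_cmd(["https://example.com"]),
#                      capture_output=True, text=True).stdout
```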
### crawl

Crawl a website starting from a root URL, following links.
python3 lib/tavily_search.py crawl URL [options]
Parameters:
- url: Root URL to start crawling
- --depth basic|advanced: Crawl depth
- --max-depth N: Maximum link depth to follow (default: 2)
- --max-breadth N: Maximum pages per depth level (default: 10)
- --limit N: Maximum total pages (default: 10)
- --instructions "text": Natural language crawl instructions
- --select-paths p1,p2: Only crawl these path patterns
- --exclude-paths p1,p2: Skip these path patterns
- --format markdown|text: Output format

Examples:
# Basic crawl
python3 lib/tavily_search.py crawl "https://docs.example.com"
# Focused crawl with instructions
python3 lib/tavily_search.py crawl "https://docs.python.org" --instructions "Find all asyncio documentation" --limit 20
# Crawl specific paths only
python3 lib/tavily_search.py crawl "https://example.com" --select-paths "/blog,/docs" --max-depth 3
Output format:
Crawled 5 pages from https://docs.example.com
Page 1: https://docs.example.com/intro
─────────────────────────────────
<Content>
Page 2: https://docs.example.com/guide
─────────────────────────────────
<Content>
### map

Discover all URLs on a website (sitemap).
python3 lib/tavily_search.py map URL [options]
Parameters:
- url: Root URL to map
- --max-depth N: Depth to follow (default: 2)
- --max-breadth N: Breadth per level (default: 20)
- --limit N: Maximum URLs (default: 50)

Examples:
# Map a site
python3 lib/tavily_search.py map "https://example.com"
# Deep map
python3 lib/tavily_search.py map "https://docs.python.org" --max-depth 3 --limit 100
Output format:
Sitemap for https://example.com (42 URLs found):
1. https://example.com/
2. https://example.com/about
3. https://example.com/blog
...
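The map output is easy to consume downstream, e.g. to feed discovered URLs into extract. A small sketch that parses the numbered text format shown above (the `parse_sitemap` helper is illustrative; with --json you would read the structured output instead):

```python
def parse_sitemap(output: str) -> list:
    """Pull URLs out of the map command's text output shown above.

    Assumes numbered lines like "1. https://example.com/".
    """
    urls = []
    for line in output.splitlines():
        parts = line.strip().split(". ", 1)
        if len(parts) == 2 and parts[0].isdigit() and parts[1].startswith("http"):
            urls.append(parts[1])
    return urls
```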
### research

Comprehensive AI-powered research on a topic with citations.
python3 lib/tavily_search.py research "query" [options]
Parameters:
- query: Research question
- --model mini|pro|auto: Research model (default: auto)
  - mini: Faster, cheaper
  - pro: More thorough
  - auto: Let Tavily decide
- --json: JSON output (supports structured output schema)

Examples:
# Basic research
python3 lib/tavily_search.py research "Impact of AI on healthcare in 2026"
# Pro model for thorough research
python3 lib/tavily_search.py research "Comparison of quantum computing approaches" --model pro
# JSON output
python3 lib/tavily_search.py research "Electric vehicle market analysis" --json
Output format:
Research: Impact of AI on healthcare in 2026
<Comprehensive research report with citations>
Sources:
[1] https://source1.com
[2] https://source2.com
...
## Options Reference

| Option | Applies To | Description | Default |
|---|---|---|---|
| --depth basic\|advanced | search, news, finance, extract | Search/extraction depth | basic |
| --time day\|week\|month\|year | search, news, finance | Time range filter | none |
| -n NUM | search, news, finance | Max results (0-20) | 5 |
| --answer | search, news, finance | Include LLM answer | off |
| --raw | search, news, finance | Include raw page content | off |
| --images | search, news, finance | Include image URLs | off |
| --include-domains d1,d2 | search, news, finance | Only these domains | none |
| --exclude-domains d1,d2 | search, news, finance | Exclude these domains | none |
| --country XX | search, news, finance | Boost country results | none |
| --json | all | Structured JSON output | off |
| --format markdown\|text | extract, crawl | Content format | markdown |
| --query "text" | extract | Relevance reranking query | none |
| --model mini\|pro\|auto | research | Research model | auto |
| --max-depth N | crawl, map | Max link depth | 2 |
| --max-breadth N | crawl, map | Max pages per level | 10/20 |
| --limit N | crawl, map | Max total pages/URLs | 10/50 |
| --instructions "text" | crawl | Natural language instructions | none |
| --select-paths p1,p2 | crawl | Include path patterns | none |
| --exclude-paths p1,p2 | crawl | Exclude path patterns | none |
## API Credits

| API | Basic | Advanced |
|---|---|---|
| Search | 1 credit | 2 credits |
| Extract | 1 credit/URL | 2 credits/URL |
| Crawl | 1 credit/page | 2 credits/page |
| Map | 1 credit | 1 credit |
| Research | Varies by model | - |
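The table makes per-job cost easy to estimate before running a large crawl or batch extract. A small sketch (the `estimate_credits` helper is illustrative; Research is excluded because its cost varies by model):

```python
# Credits per unit, keyed by (mode, depth), taken from the table above.
CREDITS = {
    ("search", "basic"): 1, ("search", "advanced"): 2,
    ("extract", "basic"): 1, ("extract", "advanced"): 2,  # per URL
    ("crawl", "basic"): 1, ("crawl", "advanced"): 2,      # per page
    ("map", "basic"): 1, ("map", "advanced"): 1,
}

def estimate_credits(mode: str, depth: str = "basic", units: int = 1) -> int:
    """Estimate credits for one call; units = URLs (extract) or pages (crawl)."""
    return CREDITS[(mode, depth)] * units
```

For example, an advanced crawl capped at --limit 20 costs at most `estimate_credits("crawl", "advanced", 20)` = 40 credits.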
bash skills/tavily/install.sh
Generated Mar 1, 2026
## Use Cases

Startups can use the search and research modes to gather competitive intelligence, analyze industry trends, and identify market gaps. The ability to filter by domains and time ensures up-to-date, relevant data for strategic planning.
Media companies leverage the news and finance search modes to monitor breaking stories and financial developments. The extract and crawl features help aggregate content from multiple sources for curated news feeds or investigative reporting.
Researchers and academics use the search mode with domain filtering (e.g., arxiv.org) to find recent papers and the extract mode to pull content from URLs. The research mode with citations supports deep analysis for literature reviews or grant proposals.
Digital marketing agencies employ the crawl and map modes to analyze website structures, identify broken links, and discover sitemaps for SEO optimization. The extract mode helps assess content quality across pages.
Investors and financial analysts utilize the finance search mode to track stock trends, company news, and market forecasts. Time-filtered searches and LLM answers provide synthesized insights for investment decisions.
## Monetization Ideas

Offer the skill as a white-labeled API service for businesses needing web search and content extraction. Charge based on usage tiers (e.g., credits per search) and provide custom integrations for clients in media or research.
Provide consulting services to help organizations implement the skill for specific use cases like market research or SEO auditing. Develop tailored scripts or workflows and charge project-based or retainer fees.
Build a platform that uses the skill to aggregate and curate content from the web, such as news digests or research summaries. Monetize through advertising, premium subscriptions, or licensing the aggregated data to third parties.
💬 Integration Tip
Store TAVILY_API_KEY securely in environment variables, and test each command with sample queries to verify output formats before full deployment.