serper

Google search via Serper API with full page content extraction. Fast API lookup + concurrent page scraping (3s timeout). One well-crafted query returns rich results — avoid multiple calls. Two modes, explicit locale control. API key via .env.
Install via ClawdBot CLI:
clawdbot install nesdeq/serper

Google search via Serper API. Fetches results AND reads the actual web pages to extract clean full-text content via trafilatura. Not just snippets — full article text.
Each invocation gives you 5 results (default mode) or up to 6 results (current mode), each with full page content. This is already a lot of information.
Craft ONE good search query. That is almost always enough.
Each call returns multiple results with full page text — you get broad coverage from a single query. Do not run multiple searches to "explore" a topic. One well-chosen query with the right mode covers it.
At most two calls if the user's request genuinely spans two distinct topics (e.g. "compare X vs Y" where X and Y need separate searches, or one default + one current call for different aspects). Never more than two.
IMPORTANT: This skill already fetches and extracts full page content. Do NOT use web_fetch, WebFetch, or any other URL-fetching tool on the URLs returned by this skill. The content is already included in the output.
There are exactly two modes. Pick the right one based on the query:
default — General search (all-time)
current — News and recent info

| Query signals | Mode |
|---------------|------|
| "how does X work", "what is X", "explain X" | default |
| Product research, comparisons, tutorials | default |
| Technical documentation, guides | default |
| Historical topics, evergreen content | default |
| "news", "latest", "today", "this week", "recent" | current |
| "what happened", "breaking", "announced", "released" | current |
| Current events, politics, sports scores, stock prices | current |
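The signal table above can be sketched as a small heuristic. This is an illustrative helper, not part of the skill itself; the keyword list is an assumption drawn from the table's example phrases.

```python
# Heuristic mode picker mirroring the signal table above.
# The keyword list is an illustrative assumption, not part of the skill.
CURRENT_SIGNALS = (
    "news", "latest", "today", "this week", "recent",
    "what happened", "breaking", "announced", "released",
)

def pick_mode(query: str) -> str:
    """Return 'current' for time-sensitive queries, else 'default'."""
    q = query.lower()
    return "current" if any(s in q for s in CURRENT_SIGNALS) else "default"
```

When no signal matches, defaulting to `default` mode is the safe choice, since evergreen search covers most research queries.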
Default is global — no country filter, English results. This ONLY works for English queries.
You MUST ALWAYS set --gl and --hl when the query is in a non-English language, the user writes in a non-English language, or the search targets a specific country:
If the user writes in German, you MUST pass --gl de --hl de. No exceptions.
| Scenario | Flags |
|----------|-------|
| English query, no country target | (omit --gl and --hl) |
| German query OR user writes in German OR targeting DE/AT/CH | --gl de --hl de |
| French query OR user writes in French OR targeting France | --gl fr --hl fr |
| Any other non-English language/country | --gl XX --hl XX (ISO codes) |
Rule of thumb: If the query string contains non-English words, set --gl and --hl to match that language.
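The scenario table above can be condensed into a small flag-mapping helper. This is a sketch, assuming language detection happens elsewhere and yields an ISO 639-1 code; the `LOCALE_FLAGS` dict only spells out the two cases the table names explicitly.

```python
# Translate a detected ISO 639-1 language code into --gl/--hl flags,
# following the scenario table above. Language detection itself is
# assumed to happen elsewhere.
LOCALE_FLAGS = {
    "de": ["--gl", "de", "--hl", "de"],  # German, or DE/AT/CH targeting
    "fr": ["--gl", "fr", "--hl", "fr"],  # French, or France targeting
}

def locale_flags(lang: str) -> list[str]:
    """English gets no flags; other languages get matching ISO codes."""
    if lang == "en":
        return []
    return LOCALE_FLAGS.get(lang, ["--gl", lang, "--hl", lang])
```

The fallback branch implements the "Any other non-English language/country" row: the same ISO code is passed to both flags.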
```sh
python3 scripts/search.py -q "QUERY" [--mode MODE] [--gl COUNTRY] [--hl LANG]
```

Examples:

```sh
# English, general research
python3 scripts/search.py -q "how does HTTPS work"

# English, time-sensitive
python3 scripts/search.py -q "OpenAI latest announcements" --mode current

# German query — set locale + current mode for news/prices
python3 scripts/search.py -q "aktuelle Preise iPhone" --mode current --gl de --hl de

# German news
python3 scripts/search.py -q "Nachrichten aus Berlin" --mode current --gl de --hl de

# French product research
python3 scripts/search.py -q "meilleur smartphone 2026" --gl fr --hl fr
```
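If you drive the script from another program rather than a shell, the invocation above can be wrapped with `subprocess`. This is a sketch: it assumes `scripts/search.py` is resolvable from the current working directory, and the wrapper names (`build_cmd`, `serper_search`) are hypothetical.

```python
import subprocess

def build_cmd(query, mode="default", gl=None, hl=None):
    """Assemble the CLI invocation shown above.

    Assumes scripts/search.py is reachable from the current working
    directory; adjust the path for your install location.
    """
    cmd = ["python3", "scripts/search.py", "-q", query, "--mode", mode]
    if gl:
        cmd += ["--gl", gl]
    if hl:
        cmd += ["--hl", hl]
    return cmd

def serper_search(**kwargs):
    """Run the search and return raw stdout (a streamed JSON array)."""
    out = subprocess.run(build_cmd(**kwargs), capture_output=True,
                         text=True, check=True)
    return out.stdout
```

Keeping command construction separate from execution makes the flag logic easy to test without an API key.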
The output is a streamed JSON array — elements print one at a time as each page is scraped:

```json
[{"query": "...", "mode": "default", "locale": {"gl": "world", "hl": "en"}, "results": [{"title": "...", "url": "...", "source": "web"}, ...]}
,{"title": "...", "url": "...", "source": "web", "content": "Full extracted page text..."}
,{"title": "...", "url": "...", "source": "news", "date": "2 hours ago", "content": "Full article text..."}
]
```
The first element is search metadata. Each following element contains a result with full extracted content.
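Because elements stream one per line with a leading `[` or `,`, the output can be consumed incrementally rather than waiting for the full array. A minimal sketch, assuming each element arrives on its own line exactly as in the sample above:

```python
import json

def parse_stream(lines):
    """Parse the streamed JSON array one element per line:
    '[{...}' first, ',{...}' after, ']' last.
    Assumes one element per line, as the sample output implies."""
    elements = []
    for line in lines:
        line = line.strip()
        if line in ("", "]"):
            continue  # skip blank lines and the closing bracket
        elements.append(json.loads(line.lstrip("[,")))
    return elements
```

Feeding the process's stdout line by line into a loop like this lets you start summarizing the first scraped page while later pages are still loading.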
Result fields:
title — page title
url — source URL
source — "web", "news", or "knowledge_graph"
content — full extracted page text (falls back to search snippet if extraction fails)
date — present when available (news results always, web results sometimes)

| Flag | Description |
|------|-------------|
| -q, --query | Search query (required) |
| -m, --mode | default (all-time, 5 results) or current (past week + news, 3 each) |
| --gl | Country code (e.g. de, us, fr, at, ch) |
| --hl | Language code (e.g. en, de, fr) |
Generated Mar 1, 2026
Companies launching new products can use serper to gather full-text reviews, competitor analyses, and pricing information from global sources. The concurrent scraping ensures fast data collection, while locale control allows targeting specific regions for localized insights.
Financial firms leverage serper in current mode to track breaking news, earnings reports, and market developments with full article extraction. This enables quick analysis of events impacting stocks or investments, avoiding delays from manual web searches.
Software teams use serper in default mode to fetch and extract complete tutorials, API docs, and troubleshooting guides from multiple sources. The full-content output eliminates the need for additional URL fetching, streamlining research for coding projects.
Media organizations employ serper with locale flags to gather news articles and reports in non-English languages, such as German or French. This supports creating region-specific content without language barriers, using the current mode for timely updates.
Researchers utilize serper to collect full-text academic papers, studies, and historical data from the web for literature reviews. The default mode provides broad coverage, while disciplined querying avoids redundant searches across topics.
Offer serper as a cloud-based API service where businesses pay monthly fees for enriched search results with full-text extraction. Revenue comes from tiered plans based on query volume and advanced features like custom timeouts or additional locales.
Sell licenses to large corporations for integrating serper into their internal research platforms, such as CRM or analytics systems. Revenue is generated through one-time setup fees and annual maintenance contracts for support and updates.
Operate a consulting agency that uses serper to provide clients with customized reports on competitors, trends, or news. Revenue streams include project-based fees for delivering analyzed data and ongoing retainer agreements for continuous monitoring.
💬 Integration Tip
Ensure API keys are securely stored in .env files and test queries with locale flags to avoid errors in non-English searches.
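A minimal .env loader can be sketched as below. This is an illustration, not the skill's own loader; the variable name `SERPER_API_KEY` is an assumption, so check the skill's docs for the exact key it expects.

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=VALUE lines, '#' comments ignored.

    The variable name SERPER_API_KEY used with this loader is an
    assumption; check the skill's own docs for the exact key name.
    Existing environment variables are not overwritten.
    """
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

Using `setdefault` means a key exported in the shell takes precedence over the file, which is the usual dotenv convention.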
Related skills:

- Summarize URLs or files with the summarize CLI (web, PDFs, images, audio, YouTube).
- AI-optimized web search via Tavily API. Returns concise, relevant results for AI agents.
- DuckDuckGo search: use when users need to search the web for information, find current content, look up news articles, search for images, or find videos. Returns results in clean, formatted output (text, markdown, or JSON). Use for research, fact-checking, finding recent information, or gathering web resources.
- Web search and content extraction via Brave Search API. Use for searching documentation, facts, or any web content. Lightweight, no browser required.
- Search indexed Discord community discussions via Answer Overflow. Find solutions to coding problems, library issues, and community Q&A that only exist in Discord conversations.
- Multi search engine integration with 17 engines (8 CN + 9 Global). Supports advanced search operators, time filters, site search, privacy engines, and WolframAlpha knowledge queries. No API keys required.