firecrawler: Web scraping and crawling with Firecrawl API. Fetch webpage content as markdown, take screenshots, extract structured data, search the web, and crawl documentation sites. Use when the user needs to scrape a URL, get current web info, capture a screenshot, extract specific data from pages, or crawl docs for a framework/library.
Install via ClawdBot CLI:
clawdbot install capt-marbles/firecrawler

Scrape, search, and crawl the web using Firecrawl.
export FIRECRAWL_API_KEY=fc-your-key-here
pip3 install firecrawl
All commands use the bundled fc.py script in this skill's directory.
Fetch any URL and convert to clean markdown. Handles JavaScript-rendered content.
python3 fc.py markdown "https://example.com"
python3 fc.py markdown "https://example.com" --main-only # skip nav/footer
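Under the hood, a markdown fetch boils down to a single API request. A minimal sketch of the request body fc.py might send, assuming Firecrawl's REST scrape endpoint and its `formats`/`onlyMainContent` field names (verify against the current Firecrawl API docs):

```python
import json

# Assumed endpoint; check the Firecrawl docs for the current version.
FIRECRAWL_SCRAPE_URL = "https://api.firecrawl.dev/v1/scrape"

def build_scrape_payload(url: str, main_only: bool = False) -> dict:
    """Build the JSON body for a markdown scrape request."""
    payload = {"url": url, "formats": ["markdown"]}
    if main_only:
        # --main-only maps to dropping nav/footer boilerplate
        payload["onlyMainContent"] = True
    return payload

print(json.dumps(build_scrape_payload("https://example.com", main_only=True)))
```

The actual call would POST this payload with an `Authorization: Bearer $FIRECRAWL_API_KEY` header; fc.py handles that for you.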
Capture a full-page screenshot of any URL.
python3 fc.py screenshot "https://example.com" -o screenshot.png
Pull specific fields from a page using a JSON schema.
Schema example (schema.json):
{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "price": { "type": "number" },
    "features": { "type": "array", "items": { "type": "string" } }
  }
}
python3 fc.py extract "https://example.com/product" --schema schema.json
python3 fc.py extract "https://example.com/product" --schema schema.json --prompt "Extract the main product details"
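If you generate schemas programmatically, a quick sanity check before spending credits can catch typos. A sketch using the example schema above, with a deliberately rough type check (not full JSON Schema validation; the `rough_check` helper is hypothetical, not part of fc.py):

```python
import json

schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "price": {"type": "number"},
        "features": {"type": "array", "items": {"type": "string"}},
    },
}

with open("schema.json", "w") as f:
    json.dump(schema, f, indent=2)

# Map JSON Schema type names to Python types for a shallow check.
TYPE_MAP = {"string": str, "number": (int, float), "array": list}

def rough_check(data: dict, schema: dict) -> bool:
    """Shallow check that present fields match their declared types."""
    props = schema.get("properties", {})
    return all(
        isinstance(data[k], TYPE_MAP[v["type"]])
        for k, v in props.items() if k in data
    )

sample = {"title": "Widget", "price": 19.99, "features": ["small", "blue"]}
print(rough_check(sample, schema))  # True
```

For real validation, a library like jsonschema is the better tool; this is just a smoke test.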
Search the web and get content from results (may require paid tier).
python3 fc.py search "Python 3.13 new features" --limit 5
Crawl an entire documentation site. Great for learning new frameworks.
python3 fc.py crawl "https://docs.example.com" --limit 30
python3 fc.py crawl "https://docs.example.com" --limit 50 --output ./docs
Note: Each page costs 1 credit. Set reasonable limits.
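When crawling into --output, each page typically lands as its own markdown file. A hypothetical sketch of URL-to-filename mapping (fc.py's actual naming scheme may differ):

```python
import re
from urllib.parse import urlparse

def page_filename(url: str) -> str:
    """Turn a crawled page URL into a flat markdown filename."""
    path = urlparse(url).path.strip("/") or "index"
    slug = re.sub(r"[^a-zA-Z0-9]+", "-", path).strip("-").lower()
    return f"{slug}.md"

print(page_filename("https://docs.example.com/guide/Getting-Started"))
# guide-getting-started.md
```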
Discover all URLs on a website before deciding what to scrape.
python3 fc.py map "https://example.com" --limit 100
python3 fc.py map "https://example.com" --search "api"
Free tier includes 500 credits. 1 credit = 1 page/screenshot/search query.
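Mapping first lets you budget credits before crawling. A sketch of the --search-style filter applied client-side to a mapped URL list, with the 1-credit-per-page math (the `filter_urls` helper is illustrative, not part of fc.py):

```python
def filter_urls(urls: list[str], term: str) -> list[str]:
    """Keep only URLs containing the search term (case-insensitive)."""
    return [u for u in urls if term.lower() in u.lower()]

mapped = [
    "https://example.com/",
    "https://example.com/api/auth",
    "https://example.com/blog/post-1",
    "https://example.com/API/webhooks",
]
hits = filter_urls(mapped, "api")
print(hits)              # the two API-related URLs
print(500 - len(hits))   # free-tier credits left after scraping them: 498
```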
Generated Mar 1, 2026
E-commerce businesses can use Firecrawl to scrape competitor product pages, extracting titles, prices, and features to monitor pricing strategies and product offerings. This helps in adjusting their own listings and identifying market gaps.
News aggregators can fetch articles from various websites as markdown, enabling quick summarization and content curation. This supports creating digestible news feeds or monitoring trends across sources.
Software companies can crawl documentation sites of frameworks or libraries to build internal knowledge bases or training materials. This aids in onboarding developers and providing up-to-date support resources.
Real estate agencies can extract structured data from property listings, such as prices, locations, and amenities, to analyze market trends and inform investment decisions. Screenshots can also capture visual details of listings.
Digital marketing agencies can map URLs and extract content from client websites to audit SEO performance, identify broken links, and ensure content consistency. This helps in optimizing search engine rankings.
Offer a free tier with 500 credits to attract individual users, then charge for higher limits or advanced features like web search and bulk crawling. Revenue comes from subscription plans based on credit usage.
Integrate Firecrawl into a larger platform or tool, such as a data analytics suite, and resell access to clients with added value like custom schemas or support. Revenue is generated through markups on API calls.
Provide specialized services for businesses needing web scraping, such as setting up extraction schemas, crawling documentation, or conducting competitive analysis. Revenue comes from project-based or hourly consulting fees.
💬 Integration Tip
Ensure the FIRECRAWL_API_KEY is securely set as an environment variable to avoid hardcoding keys in scripts, and use the bundled fc.py script for easy command-line execution.