# fast-browser-use 1.0.5

Rust-based Chrome automation for ultra-fast, token-efficient DOM extraction, session management, screenshots, infinite-scroll harvesting, and sitemap analysis.
Install via the ClawdBot CLI:

```shell
clawdbot install Makforce/fast-browser-use-1-0-5
```

fast-browser-use is a Rust-based browser automation engine: a lightweight binary that drives Chrome directly over the Chrome DevTools Protocol (CDP). It is optimized for token-efficient DOM extraction, robust session management, and speed.
Simulate mouse jitter and random delays to scrape protected sites:

```shell
fast-browser-use navigate --url "https://protected-site.com" \
  --human-emulation \
  --wait-for-selector "#content"
```
Capture the entire DOM state and computed styles for perfect reconstruction later:

```shell
fast-browser-use snapshot --include-styles --output state.json
```
Log in manually once, then reuse the saved session for headless automation.

```shell
# Step 1: open a non-headless window for manual login
fast-browser-use login --url "https://github.com/login" --save-session ./auth.json

# Step 2: reuse the saved session later
fast-browser-use navigate --url "https://github.com/dashboard" --load-session ./auth.json
```
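The two steps above can be combined into a small wrapper that reuses the saved session only while it is reasonably fresh. This is a sketch, not part of the skill itself: the `fast-browser-use` subcommands and the `./auth.json` path are taken from the examples above, and the eight-hour freshness window is an assumption.

```python
import os
import subprocess
import time


def session_is_fresh(path: str, max_age_s: int = 8 * 3600) -> bool:
    """True if the saved session file exists and is newer than max_age_s."""
    try:
        age = time.time() - os.path.getmtime(path)
    except OSError:
        # Missing or unreadable session file: treat as stale.
        return False
    return age < max_age_s


def browse(url: str, session_path: str = "./auth.json") -> None:
    """Reuse the saved session when fresh; otherwise fall back to manual login."""
    if session_is_fresh(session_path):
        cmd = ["fast-browser-use", "navigate", "--url", url,
               "--load-session", session_path]
    else:
        cmd = ["fast-browser-use", "login", "--url", url,
               "--save-session", session_path]
    subprocess.run(cmd, check=True)


# browse("https://github.com/dashboard")  # reuses ./auth.json if recent
```

The freshness check avoids silently navigating with an expired cookie jar; tune `max_age_s` to the target site's session lifetime.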
Extract fresh data from infinite-scroll pages: perfect for harvesting the latest posts, news, or social feeds.
```shell
# Harvest headlines from Hacker News (scrolls 3x, waits 800 ms between scrolls)
fast-browser-use harvest \
  --url "https://news.ycombinator.com" \
  --selector ".titleline a" \
  --scrolls 3 \
  --delay 800 \
  --output headlines.json
```
Real output (59 unique items in ~6 seconds):

```json
[
  "Genode OS is a tool kit for building highly secure special-purpose OS",
  "Mobile carriers can get your GPS location",
  "Students using \"humanizer\" programs to beat accusations of cheating with AI",
  "Finland to end \"uncontrolled human experiment\" with ban on youth social media",
  ...
]
```
Works on any infinite scroll page: Reddit, Twitter, LinkedIn feeds, search results, etc.
Capture any page as a PNG:

```shell
fast-browser-use screenshot \
  --url "https://example.com" \
  --output page.png \
  --full-page   # Optional: capture entire scrollable page
```
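To screenshot a batch of URLs, the command above can be built per URL in a short script. A sketch, assuming only the flags shown above; the hostname-based output naming is my own convention, not part of the skill:

```python
import subprocess
from urllib.parse import urlparse


def screenshot_cmd(url: str, full_page: bool = True) -> list[str]:
    """Build one fast-browser-use screenshot command; name the PNG after the host."""
    host = urlparse(url).netloc.replace(":", "_") or "page"
    cmd = ["fast-browser-use", "screenshot", "--url", url,
           "--output", f"{host}.png"]
    if full_page:
        cmd.append("--full-page")
    return cmd


def capture_all(urls: list[str]) -> None:
    for url in urls:
        subprocess.run(screenshot_cmd(url), check=True)


# capture_all(["https://example.com", "https://news.ycombinator.com"])
```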
Discover how a site is organized by parsing its sitemaps and analyzing page structure.

```shell
# Basic sitemap discovery (checks robots.txt + common sitemap URLs)
fast-browser-use sitemap --url "https://example.com"

# Full analysis with page structure (headings, nav, sections)
fast-browser-use sitemap \
  --url "https://example.com" \
  --analyze-structure \
  --max-pages 10 \
  --max-sitemaps 5 \
  --output site-structure.json
```
Options:

- `--analyze-structure`: also extract page structure (headings, nav, sections, meta)
- `--max-pages N`: limit structure analysis to N pages (default: 5)
- `--max-sitemaps N`: limit sitemap parsing to N sitemaps (default: 10, useful for large sites)

Example output:
```json
{
  "base_url": "https://example.com",
  "robots_txt": "User-agent: *\nSitemap: https://example.com/sitemap.xml",
  "sitemaps": ["https://example.com/sitemap.xml"],
  "pages": [
    "https://example.com/about",
    "https://example.com/products",
    "https://example.com/contact"
  ],
  "page_structures": [
    {
      "url": "https://example.com",
      "title": "Example - Home",
      "headings": [
        {"level": 1, "text": "Welcome to Example"},
        {"level": 2, "text": "Our Services"}
      ],
      "nav_links": [
        {"text": "About", "href": "/about"},
        {"text": "Products", "href": "/products"}
      ],
      "sections": [
        {"tag": "main", "id": "content", "role": "main"},
        {"tag": "footer", "id": "footer", "role": null}
      ],
      "main_content": {"tag": "main", "id": "content", "word_count": 450},
      "meta": {
        "description": "Example company homepage",
        "canonical": "https://example.com/"
      }
    }
  ]
}
```
Use this to understand site architecture before scraping, map navigation flows, or audit SEO structure.
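One way to chain discovery into scraping is to read `site-structure.json` (shape as in the example output above) and select which pages to harvest. A sketch; the depth filter is my own heuristic, not a feature of the skill:

```python
import json


def pages_from_structure(structure: dict) -> list[str]:
    """Return the discovered page URLs from a sitemap analysis result."""
    return list(structure.get("pages", []))


def shallow_pages(structure: dict, max_depth: int = 1) -> list[str]:
    """Keep only pages within max_depth path segments of the site root."""
    base = structure.get("base_url", "").rstrip("/")
    out = []
    for url in pages_from_structure(structure):
        path = url[len(base):].strip("/") if url.startswith(base) else url
        if path and len(path.split("/")) <= max_depth:
            out.append(url)
    return out


# with open("site-structure.json") as f:
#     for url in shallow_pages(json.load(f)):
#         print(url)  # candidate pages to feed into `fast-browser-use harvest`
```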
| Feature | Fast Browser Use (Rust) | Puppeteer (Node) | Selenium (Java) |
| :--- | :--- | :--- | :--- |
| Startup Time | < 50ms | ~800ms | ~2500ms |
| Memory Footprint | 15 MB | 100 MB+ | 200 MB+ |
| DOM Extract | Zero-Copy | JSON Serialize | Slow Bridge |
This skill is specialized for complex web interactions that require maintaining state (like being logged in), handling dynamic JavaScript content, or managing multiple pages simultaneously. It offers higher performance and control compared to standard fetch-based tools.
Generated Mar 1, 2026
Marketing teams can use the sitemap analyzer and infinite scroll harvester to monitor competitor websites for new product launches, pricing changes, and content updates. This enables real-time market analysis without manual browsing, saving hours of research.
Social media managers can automate the extraction of trending posts from platforms like Twitter or Reddit using the infinite scroll harvester. This helps in curating content, tracking brand mentions, and analyzing engagement trends efficiently.
Retailers can utilize the human emulation feature to bypass bot detection on e-commerce sites and scrape pricing data. This allows for dynamic pricing strategies and inventory management by gathering competitor prices without triggering blocks.
SEO specialists can employ the sitemap analyzer to audit website structures, extract meta tags, and analyze page hierarchies. This aids in optimizing site navigation, improving search engine rankings, and identifying broken links.
QA engineers can use the snapshot and screenshot capabilities to capture DOM states and visual regressions during testing. This ensures consistent user experiences across updates by comparing page structures and styles automatically.
Offer a subscription-based service that uses fast-browser-use to aggregate data from multiple websites, such as news headlines or product listings. Clients pay monthly for access to curated datasets, enabling them to make data-driven decisions without technical overhead.
Provide consulting services to businesses looking to automate web scraping or browser interactions. Develop custom scripts using fast-browser-use for specific use cases like login automation or site monitoring, charging per project or hourly rates.
Integrate fast-browser-use into existing software platforms, such as marketing tools or analytics dashboards, and resell it as an add-on feature. This generates revenue through licensing fees or increased platform subscriptions by enhancing functionality.
Integration Tip

Ensure `CHROME_PATH` is set correctly in the environment, and use the human-emulation feature on sites with bot detection to avoid blocks during automation.