browser-cash

Spin up unblocked browser sessions via Browser.cash for web automation. These sessions bypass common anti-bot protections (Cloudflare, DataDome, etc.), making them ideal for scraping, testing, and automation tasks that would otherwise get blocked.

Install via ClawdBot CLI:

clawdbot install alexander-spring/browser-cash

When to use: any browser automation task (scraping, form filling, testing, screenshots). Browser.cash sessions appear as real browsers and handle bot detection automatically.
API Key is stored in clawdbot config at skills.entries.browser-cash.apiKey.
If not configured, prompt the user:
Get your API key from https://dash.browser.cash and run:
clawdbot config set skills.entries.browser-cash.apiKey "your_key_here"
Reading the key:
BROWSER_CASH_KEY=$(clawdbot config get skills.entries.browser-cash.apiKey)
Before first use, check and install Playwright if needed:
if [ ! -d ~/clawd/node_modules/playwright ]; then
cd ~/clawd && npm install playwright puppeteer-core
fi
All API requests authenticate with a Bearer token:
curl -X POST "https://api.browser.cash/v1/..." \
-H "Authorization: Bearer $BROWSER_CASH_KEY" \
-H "Content-Type: application/json"
Basic session:
curl -X POST "https://api.browser.cash/v1/browser/session" \
-H "Authorization: Bearer $BROWSER_CASH_KEY" \
-H "Content-Type: application/json" \
-d '{}'
Response:
{
"sessionId": "abc123...",
"status": "active",
"servedBy": "node-id",
"createdAt": "2025-01-20T01:51:25.000Z",
"stoppedAt": null,
"cdpUrl": "wss://gcp-usc1-1.browser.cash/v1/consumer/abc123.../devtools/browser/uuid"
}
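The fields a script actually needs from this response are `sessionId` (to stop the session later) and `cdpUrl` (to connect a browser library). A minimal Node.js sketch pulling them out; `parseSession` is an illustrative helper, not part of any SDK:

```javascript
// Illustrative helper: extract the fields a script needs from the
// session-creation response shown above.
function parseSession(json) {
  const s = JSON.parse(json);
  if (!s.sessionId || !s.cdpUrl) {
    throw new Error('unexpected session response: ' + json);
  }
  return { sessionId: s.sessionId, status: s.status, cdpUrl: s.cdpUrl };
}

// Example with the documented response shape:
const sample = JSON.stringify({
  sessionId: 'abc123',
  status: 'active',
  servedBy: 'node-id',
  createdAt: '2025-01-20T01:51:25.000Z',
  stoppedAt: null,
  cdpUrl: 'wss://gcp-usc1-1.browser.cash/v1/consumer/abc123/devtools/browser/uuid',
});
console.log(parseSession(sample).sessionId); // prints "abc123"
```

Failing fast on a missing `cdpUrl` avoids handing an `undefined` endpoint to Playwright later.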
With options:
curl -X POST "https://api.browser.cash/v1/browser/session" \
-H "Authorization: Bearer $BROWSER_CASH_KEY" \
-H "Content-Type: application/json" \
-d '{
"country": "US",
"windowSize": "1920x1080",
"profile": {
"name": "my-profile",
"persist": true
}
}'
| Option | Type | Description |
|--------|------|-------------|
| country | string | 2-letter ISO code (e.g., "US", "DE", "GB") |
| windowSize | string | Browser dimensions, e.g., "1920x1080" |
| proxyUrl | string | SOCKS5 proxy URL (optional) |
| profile.name | string | Named browser profile for session persistence |
| profile.persist | boolean | Save cookies/storage after session ends |
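The constraints in the table can be checked client-side before POSTing. `buildSessionOptions` below is a hypothetical convenience helper (not part of the Browser.cash API); the validation rules follow the table above, not a published schema:

```javascript
// Hypothetical helper (not part of the Browser.cash API): validate options
// from the table above before POSTing them to /v1/browser/session.
function buildSessionOptions({ country, windowSize, proxyUrl, profile } = {}) {
  const body = {};
  if (country !== undefined) {
    if (!/^[A-Z]{2}$/.test(country)) {
      throw new Error('country must be a 2-letter ISO code, e.g. "US"');
    }
    body.country = country;
  }
  if (windowSize !== undefined) {
    if (!/^\d+x\d+$/.test(windowSize)) {
      throw new Error('windowSize must look like "1920x1080"');
    }
    body.windowSize = windowSize;
  }
  if (proxyUrl !== undefined) body.proxyUrl = proxyUrl; // SOCKS5 URL, passed through as-is
  if (profile !== undefined) {
    body.profile = { name: profile.name, persist: Boolean(profile.persist) };
  }
  return JSON.stringify(body);
}
```

The returned string can be used directly as the `-d` payload in the curl call above.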
Browser.cash returns a WebSocket CDP URL (wss://...). Use one of these approaches:
Important: Before running Playwright/Puppeteer scripts, ensure dependencies are installed:
[ -d ~/clawd/node_modules/playwright ] || (cd ~/clawd && npm install playwright puppeteer-core)
Use Playwright or Puppeteer in an exec block to connect directly to the CDP URL:
# 1. Create session
BROWSER_CASH_KEY=$(clawdbot config get skills.entries.browser-cash.apiKey)
SESSION=$(curl -s -X POST "https://api.browser.cash/v1/browser/session" \
-H "Authorization: Bearer $BROWSER_CASH_KEY" \
-H "Content-Type: application/json" \
-d '{"country": "US", "windowSize": "1920x1080"}')
SESSION_ID=$(echo "$SESSION" | jq -r '.sessionId')
CDP_URL=$(echo "$SESSION" | jq -r '.cdpUrl')
# 2. Use via Node.js exec (Playwright)
node -e "
const { chromium } = require('playwright');
(async () => {
const browser = await chromium.connectOverCDP('$CDP_URL');
const context = browser.contexts()[0];
const page = context.pages()[0] || await context.newPage();
await page.goto('https://example.com');
console.log('Title:', await page.title());
await browser.close();
})();
"
# 3. Stop session when done
curl -X DELETE "https://api.browser.cash/v1/browser/session?sessionId=$SESSION_ID" \
-H "Authorization: Bearer $BROWSER_CASH_KEY"
For simple tasks, you can send raw CDP commands (e.g. Page.navigate, Runtime.evaluate) over the WebSocket URL with any WebSocket-capable client; plain curl generally cannot speak the CDP WebSocket protocol. See the Chrome DevTools Protocol docs for available methods.
Clawdbot's native browser tool expects HTTP control server URLs, not raw WebSocket CDP. The gateway config.patch approach works when Clawdbot's browser control server proxies the connection. For direct Browser.cash CDP, use the exec approach above.
Check a session's status:
curl "https://api.browser.cash/v1/browser/session?sessionId=YOUR_SESSION_ID" \
-H "Authorization: Bearer $BROWSER_CASH_KEY"
Statuses: starting, active, completed, error
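Since a fresh session may briefly report `starting`, a script can poll until it is `active`. A sketch of that retry loop; `fetchStatus` is an injected stand-in for the GET call above, so the logic is visible without network access:

```javascript
// Sketch: poll the session-status endpoint (injected as fetchStatus)
// until the session is "active", giving up after a bounded number of tries.
async function waitForActive(sessionId, fetchStatus, { retries = 10, delayMs = 1000 } = {}) {
  for (let i = 0; i < retries; i++) {
    const { status } = await fetchStatus(sessionId);
    if (status === 'active') return true;          // ready to connect
    if (status === 'error') {
      throw new Error('session ' + sessionId + ' failed to start');
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false; // still "starting" after all retries
}
```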
Stop a session:
curl -X DELETE "https://api.browser.cash/v1/browser/session?sessionId=YOUR_SESSION_ID" \
-H "Authorization: Bearer $BROWSER_CASH_KEY"
List sessions (paginated):
curl "https://api.browser.cash/v1/browser/sessions?page=1&pageSize=20" \
-H "Authorization: Bearer $BROWSER_CASH_KEY"
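To enumerate every session, walk the pages until a short page comes back. `fetchPage` below is an injected stand-in for the paginated call above, assumed to return an array of session objects:

```javascript
// Sketch: collect all sessions by walking the paginated list endpoint.
// fetchPage(page, pageSize) is injected (e.g. a thin wrapper around fetch
// with the Bearer header) and assumed to return an array per page.
async function listAllSessions(fetchPage, pageSize = 20) {
  const all = [];
  for (let page = 1; ; page++) {
    const batch = await fetchPage(page, pageSize);
    all.push(...batch);
    if (batch.length < pageSize) break; // short page means we hit the end
  }
  return all;
}
```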
Profiles persist cookies, localStorage, and session data across sessions—useful for staying logged in or maintaining state.
List profiles:
curl "https://api.browser.cash/v1/browser/profiles" \
-H "Authorization: Bearer $BROWSER_CASH_KEY"
Delete profile:
curl -X DELETE "https://api.browser.cash/v1/browser/profile?profileName=my-profile" \
-H "Authorization: Bearer $BROWSER_CASH_KEY"
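Before reusing a named profile, a script can confirm it still exists. `listProfiles` is an injected wrapper around the GET /v1/browser/profiles call above; that each entry carries a `name` field is an assumption mirroring the `profileName` parameter:

```javascript
// Sketch: check that a named profile still exists before reusing it.
// listProfiles is an injected wrapper around GET /v1/browser/profiles;
// the "name" field on each entry is an assumption, not a documented schema.
async function hasProfile(listProfiles, name) {
  const profiles = await listProfiles();
  return profiles.some((p) => p.name === name);
}
```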
The cdpUrl is a WebSocket endpoint for Chrome DevTools Protocol. Use it with any CDP-compatible library.
Playwright:
const { chromium } = require('playwright');
const browser = await chromium.connectOverCDP(cdpUrl);
const context = browser.contexts()[0];
const page = context.pages()[0] || await context.newPage();
await page.goto('https://example.com');
Puppeteer:
const puppeteer = require('puppeteer-core');
const browser = await puppeteer.connect({ browserWSEndpoint: cdpUrl });
const pages = await browser.pages();
const page = pages[0] || await browser.newPage();
await page.goto('https://example.com');
Complete workflow:
# 0. Ensure Playwright is installed
[ -d ~/clawd/node_modules/playwright ] || (cd ~/clawd && npm install playwright puppeteer-core)
# 1. Create session
BROWSER_CASH_KEY=$(clawdbot config get skills.entries.browser-cash.apiKey)
SESSION=$(curl -s -X POST "https://api.browser.cash/v1/browser/session" \
-H "Authorization: Bearer $BROWSER_CASH_KEY" \
-H "Content-Type: application/json" \
-d '{"country": "US", "windowSize": "1920x1080"}')
SESSION_ID=$(echo "$SESSION" | jq -r '.sessionId')
CDP_URL=$(echo "$SESSION" | jq -r '.cdpUrl')
# 2. Connect with Playwright/Puppeteer using $CDP_URL...
# 3. Stop session when done
curl -X DELETE "https://api.browser.cash/v1/browser/session?sessionId=$SESSION_ID" \
-H "Authorization: Bearer $BROWSER_CASH_KEY"
When extracting data from pages with lazy-loading or infinite scroll:
// Scroll to load all products
async function scrollToBottom(page) {
let previousHeight = 0;
while (true) {
const currentHeight = await page.evaluate(() => document.body.scrollHeight);
if (currentHeight === previousHeight) break;
previousHeight = currentHeight;
await page.evaluate(() => window.scrollTo(0, document.body.scrollHeight));
await page.waitForTimeout(1500); // Wait for content to load
}
}
// Wait for specific elements
await page.waitForSelector('.product-card', { timeout: 10000 });
// Handle "Load More" buttons
const loadMore = await page.$('button.load-more');
if (loadMore) {
await loadMore.click();
await page.waitForTimeout(2000);
}
Common patterns:
- await page.waitForLoadState('networkidle') to wait until network traffic settles
- page.waitForSelector() before extracting elements
- ~/clawd/ - install npm dependencies there

Generated Feb 28, 2026
Use cases:

Automate scraping of competitor pricing data from e-commerce sites that use anti-bot protections like Cloudflare. Browser.cash sessions bypass these blocks, allowing real-time price tracking without detection for market analysis and dynamic pricing strategies.
Collect public data from social media platforms for sentiment analysis or trend monitoring. The skill handles bot detection, enabling automated login, scrolling, and extraction of posts, comments, and metrics without triggering blocks.
Scrape flight, hotel, and rental car prices from travel websites that employ anti-scraping measures. Sessions mimic real browsers to access dynamic content, helping aggregators and agencies gather competitive data for pricing and availability.
Automate retrieval of stock prices, news, or financial reports from protected websites. Browser.cash ensures sessions appear as legitimate users, bypassing protections to support investment analysis, risk assessment, and data-driven decision-making.
Scrape job listings from career sites with anti-bot defenses to analyze hiring trends, skill demands, and salary ranges. The skill enables automated browsing and data collection for recruitment agencies and workforce planning.
Monetization ideas:

Offer a subscription-based service that uses Browser.cash to provide clean, unblocked data feeds to clients. Revenue comes from monthly or annual fees for access to scraped data, such as pricing intelligence or market trends.
Provide custom automation services to businesses needing web scraping or testing. Charge project-based or retainer fees for developing and maintaining scripts that leverage Browser.cash to bypass protections and deliver reliable results.
Resell Browser.cash access bundled with consulting, training, or technical support. Generate revenue through markups on API usage or value-added services like setup assistance and optimization for specific use cases.
💬 Integration Tip
Ensure Node.js and Playwright are installed in the Clawdbot environment before running scripts, and always stop sessions after use to manage costs and resources efficiently.
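The "always stop sessions" advice can be enforced structurally with a wrapper that releases the session in a `finally` block, so cleanup runs even when the task throws. `createSession` and `stopSession` here are injected stand-ins for the POST and DELETE calls documented above:

```javascript
// Sketch: run a task against a fresh session and guarantee cleanup.
// createSession/stopSession are injected stand-ins for the POST/DELETE
// session calls; the finally block stops the session even on failure.
async function withSession(createSession, stopSession, task) {
  const session = await createSession();
  try {
    return await task(session);
  } finally {
    await stopSession(session.sessionId); // billed time stops here
  }
}
```

Wrapping every job in `withSession` keeps leaked (and billed) sessions from accumulating after crashes.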