browser-use: Automates browser interactions for web testing, form filling, screenshots, and data extraction. Use when the user needs to navigate websites, interact with web pages, or extract data.
Install via ClawdBot CLI:
clawdbot install ShawnPana/browser-use
Browser Use provides cloud browsers and autonomous browser automation via API.
API Key is read from clawdbot config at skills.entries.browser-use.apiKey.
If not configured, tell the user:
To use Browser Use, you need an API key. Get one at https://cloud.browser-use.com (new signups get $10 free credit). Then configure it:
clawdbot config set skills.entries.browser-use.apiKey "bu_your_key_here"
Base URL: https://api.browser-use.com/api/v2
All requests need the header: X-Browser-Use-API-Key: <your-api-key>
Spin up cloud browsers for Clawdbot to control directly. Use profiles to persist logins and cookies.
# With profile (recommended - keeps you logged in)
curl -X POST "https://api.browser-use.com/api/v2/browsers" \
-H "X-Browser-Use-API-Key: $API_KEY" \
-H "Content-Type: application/json" \
-d '{"profileId": "<profile-uuid>", "timeout": 60}'
# Without profile (fresh browser)
curl -X POST "https://api.browser-use.com/api/v2/browsers" \
-H "X-Browser-Use-API-Key: $API_KEY" \
-H "Content-Type: application/json" \
-d '{"timeout": 60}'
Response:
{
"id": "session-uuid",
"cdpUrl": "https://<id>.cdp2.browser-use.com",
"liveUrl": "https://...",
"status": "active"
}
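The cdpUrl field in this response is what the gateway needs. As a minimal sketch (assuming jq is installed), it can be pulled out of a saved response like so:

```shell
# Pull cdpUrl out of a saved session-creation response.
# The sample below mirrors the response shape shown above; in practice,
# pipe the curl output straight into jq.
cat > /tmp/session.json <<'EOF'
{"id": "session-uuid", "cdpUrl": "https://abc123.cdp2.browser-use.com", "liveUrl": "https://example.invalid/live", "status": "active"}
EOF
CDP_URL=$(jq -r '.cdpUrl' /tmp/session.json)
echo "$CDP_URL"
```

The extracted value then goes into the gateway config.patch command below in place of <cdpUrl-from-response>.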
gateway config.patch '{"browser":{"profiles":{"browseruse":{"cdpUrl":"<cdpUrl-from-response>"}}}}'
Now use the browser tool with profile=browseruse to control it.
# List active sessions
curl "https://api.browser-use.com/api/v2/browsers" -H "X-Browser-Use-API-Key: $API_KEY"
# Get session status
curl "https://api.browser-use.com/api/v2/browsers/<session-id>" -H "X-Browser-Use-API-Key: $API_KEY"
# Stop session (unused time is refunded)
curl -X PATCH "https://api.browser-use.com/api/v2/browsers/<session-id>" \
-H "X-Browser-Use-API-Key: $API_KEY" \
-H "Content-Type: application/json" \
-d '{"status": "stopped"}'
Pricing: $0.06/hour (Pay As You Go) or $0.03/hour (Business). Max 4 hours per session. Billed per minute, refunded for unused time.
Profiles persist cookies and login state across browser sessions. Create one, log into your accounts in the browser, and reuse it.
# List profiles
curl "https://api.browser-use.com/api/v2/profiles" -H "X-Browser-Use-API-Key: $API_KEY"
# Create profile
curl -X POST "https://api.browser-use.com/api/v2/profiles" \
-H "X-Browser-Use-API-Key: $API_KEY" \
-H "Content-Type: application/json" \
-d '{"name": "My Profile"}'
# Delete profile
curl -X DELETE "https://api.browser-use.com/api/v2/profiles/<profile-id>" \
-H "X-Browser-Use-API-Key: $API_KEY"
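To wire a profile into a session, capture its id at creation time. A sketch, with one assumption flagged: the create-profile response is assumed to carry an "id" field (the response body is not shown above), so a sample is inlined for illustration.

```shell
# Capture the new profile's id and reuse it when starting a browser.
# Assumption: the create-profile response includes an "id" field;
# the sample below stands in for the real curl output.
cat > /tmp/profile.json <<'EOF'
{"id": "profile-uuid", "name": "My Profile"}
EOF
PROFILE_ID=$(jq -r '.id' /tmp/profile.json)
echo "$PROFILE_ID"

# Then start a session with it (assumes $API_KEY is set):
# curl -X POST "https://api.browser-use.com/api/v2/browsers" \
#   -H "X-Browser-Use-API-Key: $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "{\"profileId\": \"$PROFILE_ID\", \"timeout\": 60}"
```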
Tip: You can also sync cookies from your local Chrome using the Browser Use Chrome extension.
Run autonomous browser tasks - like a subagent that handles browser interactions for you. Give it a prompt and it completes the task.
Always use browser-use-llm - optimized for browser tasks, 3-5x faster than other models.
curl -X POST "https://api.browser-use.com/api/v2/tasks" \
-H "X-Browser-Use-API-Key: $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"task": "Go to amazon.com and find the price of the MacBook Air M3",
"llm": "browser-use-llm"
}'
curl "https://api.browser-use.com/api/v2/tasks/<task-id>" -H "X-Browser-Use-API-Key: $API_KEY"
Response:
{
"status": "finished",
"output": "The MacBook Air M3 is priced at $1,099",
"isSuccess": true,
"cost": "0.02"
}
Status values: pending, running, finished, failed, stopped
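Since a task runs asynchronously, the status endpoint has to be polled until one of the terminal states above is reached. A minimal sketch: the fetch command is parameterized so the real GET /tasks/<task-id> curl call can be dropped in, and jq is assumed to be installed.

```shell
# Poll until the task leaves pending/running. "$1" is any command that
# prints the task JSON; in real use that is the GET /tasks/<task-id>
# curl call shown above.
poll_task() {
  local fetch="$1" status
  while true; do
    status=$("$fetch" | jq -r '.status')
    case "$status" in
      finished|failed|stopped) echo "$status"; return ;;
    esac
    sleep 5
  done
}

# Real usage (assumes $API_KEY and $TASK_ID are set):
# fetch_task() { curl -s "https://api.browser-use.com/api/v2/tasks/$TASK_ID" \
#   -H "X-Browser-Use-API-Key: $API_KEY"; }
# poll_task fetch_task
```

Passing the fetcher as an argument keeps the loop testable without hitting the API.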
| Option | Description |
|--------|-------------|
| task | Your prompt (required) |
| llm | Always use browser-use-llm |
| startUrl | Starting page |
| maxSteps | Max actions (default 100) |
| sessionId | Reuse existing session |
| profileId | Use a profile for auth |
| flashMode | Even faster execution |
| vision | Visual understanding |
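The options above combine into a single JSON body. A sketch of assembling one safely with jq -n (the task text, URL, and option values are illustrative, not required defaults):

```shell
# Assemble a task payload covering several of the options above.
# jq -n builds the JSON without manual quoting; values are illustrative.
PAYLOAD=$(jq -n \
  --arg task "Find the price of the MacBook Air M3" \
  --arg startUrl "https://www.amazon.com" \
  '{task: $task, llm: "browser-use-llm", startUrl: $startUrl,
    maxSteps: 50, flashMode: true}')
echo "$PAYLOAD"

# Then POST it (assumes $API_KEY is set):
# curl -X POST "https://api.browser-use.com/api/v2/tasks" \
#   -H "X-Browser-Use-API-Key: $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```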
See references/api.md for all endpoints including Sessions, Files, Skills, and Skills Marketplace.
Generated Feb 26, 2026
Automate daily price checks on competitor websites like Amazon or Walmart to track product pricing changes. Use profiles to stay logged into vendor accounts and tasks to scrape prices efficiently, enabling dynamic pricing strategies.
Schedule and automate posts, comments, and engagement across platforms like Facebook or LinkedIn using persistent login profiles. Tasks can handle content creation and posting, saving time for marketing teams.
Automate login to banking or investment portals to fetch transaction data and portfolio updates. Profiles ensure secure session persistence, while tasks can compile reports for analysis or compliance.
Collect data from academic journals, libraries, or government websites for research projects. Use tasks to navigate complex sites and extract structured data, with profiles to bypass paywalls or login requirements.
Automate responses to common queries by logging into support portals like Zendesk or Salesforce. Tasks can handle ticket updates and data retrieval, improving response times and efficiency.
Charge users per minute of browser session usage or per task execution, with pricing tiers based on volume. Offer free credits to attract new users and upsell to business plans for higher limits and lower rates.
Provide monthly or annual subscriptions with capped usage, priority support, and advanced features like flash mode or vision capabilities. Target businesses needing consistent automation for operational tasks.
License the technology to enterprises or SaaS platforms for integration into their own products, such as CRM tools or data analytics suites. Offer custom branding and dedicated infrastructure.
💬 Integration Tip
Configure the API key in Clawdbot settings first, then test with simple tasks before scaling to ensure cost control and session management.