# playwright

Browser automation and web scraping with Playwright. Forms, screenshots, data extraction. Works standalone or via MCP. Testing included.

Install via ClawdBot CLI:

```bash
clawdbot install ivangdavila/playwright
```

## When to use

Use this skill when you need to:
| Scenario | Method | Speed |
|----------|--------|-------|
| Static HTML | web_fetch tool | ⚡ Fastest |
| JavaScript-rendered | Playwright direct | 🚀 Fast |
| AI agent automation | MCP server | 🤖 Integrated |
| E2E testing | @playwright/test | ✅ Full framework |
## Detailed guides

| Task | File |
|------|------|
| E2E testing patterns | testing.md |
| CI/CD integration | ci-cd.md |
| Debugging failures | debugging.md |
| Web scraping patterns | scraping.md |
| Selector strategies | selectors.md |
## Best practices

- Avoid `waitForTimeout()`; always wait for specific conditions (element, URL, network)
- Call `browser.close()` to prevent memory leaks
- Prefer `getByRole()`, which survives UI changes better than CSS selectors
- Call `waitFor()` before interacting with elements
- Use `storageState` to save and reuse login sessions

## Quick start

### Fetch page text

```js
const { chromium } = require('playwright');

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
const text = await page.locator('body').textContent();
await browser.close();
```
### Log in to a site

```js
await page.goto('https://example.com/login');
await page.getByLabel('Email').fill('user@example.com');
await page.getByLabel('Password').fill('secret');
await page.getByRole('button', { name: 'Sign in' }).click();
await page.waitForURL('**/dashboard');
```
### Take a full-page screenshot

```js
await page.goto('https://example.com');
await page.screenshot({ path: 'screenshot.png', fullPage: true });
```
### Extract table data

```js
const rows = await page.locator('table tr').all();
const data = [];
for (const row of rows) {
  const cells = await row.locator('td').allTextContents();
  data.push(cells);
}
```
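The scraped rows come back as plain arrays of cell text. A small pure helper (illustrative, not part of Playwright's API) can key them by the header row, which also makes the transformation unit-testable without launching a browser:

```javascript
// Convert scraped rows (first row = headers) into keyed records.
// Pure function: no browser or Playwright dependency.
function rowsToRecords(rows) {
  const [header, ...body] = rows;
  return body.map((cells) =>
    Object.fromEntries(header.map((key, i) => [key, cells[i] ?? '']))
  );
}

// rowsToRecords([['name', 'price'], ['Widget', '$9']])
// → [{ name: 'Widget', price: '$9' }]
```

Missing trailing cells (common with ragged tables) default to an empty string rather than `undefined`.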
## Selector priority

| Priority | Method | Example |
|----------|--------|---------|
| 1 | `getByRole()` | `getByRole('button', { name: 'Submit' })` |
| 2 | `getByLabel()` | `getByLabel('Email')` |
| 3 | `getByPlaceholder()` | `getByPlaceholder('Search...')` |
| 4 | `getByTestId()` | `getByTestId('submit-btn')` |
| 5 | `locator()` | `locator('.class')` (last resort) |
## Common traps

| Trap | Fix |
|------|-----|
| Element not found | Add `await locator.waitFor()` before interacting |
| Flaky clicks | Use `click({ force: true })` or wait for `state: 'visible'` |
| Timeout in CI | Increase the timeout; check that the viewport size matches local |
| Auth lost between tests | Use `storageState` to persist cookies |
| SPA never reaches `networkidle` | Wait for a specific DOM element instead |
| 403 Forbidden | Check whether the site blocks headless; try `headless: false` |
| Blank page after load | Increase the wait time or use `waitUntil: 'networkidle'` |
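For conditions inside the page, `locator.waitFor()` is the right tool. For conditions outside the page (a file appearing on disk, a backend flag flipping), the same "wait for a condition, not a fixed time" advice can be sketched as a generic poller. All names below are illustrative, not Playwright APIs:

```javascript
// Poll check() until it returns truthy, or fail after timeoutMs.
// Generic Node helper; for in-page conditions prefer locator.waitFor().
async function waitForCondition(check, { timeoutMs = 5000, intervalMs = 50 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}
```

Usage: `await waitForCondition(() => fs.existsSync('export.csv'), { timeoutMs: 10000 })`.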
## Session persistence

```js
// Save session after login
await page.context().storageState({ path: 'auth.json' });

// Reuse session in a new context
const context = await browser.newContext({ storageState: 'auth.json' });
```
## MCP server

For AI agents using the Model Context Protocol:

```bash
npm install -g @playwright/mcp
npx @playwright/mcp --headless
```
### Available tools

| Tool | Description |
|------|-------------|
| `browser_navigate` | Navigate to URL |
| `browser_click` | Click element by selector |
| `browser_type` | Type text into input |
| `browser_select_option` | Select dropdown option |
| `browser_get_text` | Get text content |
| `browser_evaluate` | Execute JavaScript |
| `browser_snapshot` | Get accessible page snapshot |
| `browser_close` | Close browser context |
| `browser_choose_file` | Upload file |
| `browser_press` | Press keyboard key |
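How these tools are wired up depends on the MCP client. As a sketch, a client that takes a JSON server registration (the exact schema varies by client, so treat the shape below as an assumption) might launch the server like this:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp", "--headless"]
    }
  }
}
```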
### Server options

```bash
--headless                        # Run without UI
--browser chromium                # chromium|firefox|webkit
--viewport-size 1920x1080         # Browser viewport
--timeout-action 10000            # Action timeout (ms)
--timeout-navigation 30000        # Navigation timeout (ms)
--allowed-hosts example.com,api.example.com
--save-trace                      # Save trace for debugging
--save-video 1280x720             # Record video
```
## Testing framework setup

```bash
npm init playwright@latest

# Or add to an existing project
npm install -D @playwright/test
npx playwright install chromium
```
## Related skills

Install with `clawhub install` if the user confirms:

- `puppeteer` - Alternative browser automation (Chrome-focused)
- `scrape` - General web scraping patterns and strategies
- `web` - Web development fundamentals and HTTP handling

```bash
clawhub star playwright
clawhub sync
```

Generated Mar 1, 2026
## Use cases

Automate daily scraping of competitor websites to track price changes and product availability for dynamic pricing strategies. Use Playwright to handle JavaScript-rendered pages and extract data from tables or lists, ensuring accurate and timely updates.
Fill out and submit contact forms on multiple business directories or social platforms to generate leads. Leverage Playwright's form automation capabilities with role-based selectors to ensure reliability across different website designs.
Conduct automated testing of web applications, including user flows like login, checkout, and data entry, to ensure functionality and visual consistency. Utilize Playwright's testing framework with specific wait conditions to handle dynamic content and reduce flakiness.
Scrape news articles, headlines, and metadata from various sources to populate a content aggregation platform. Use Playwright to navigate JavaScript-heavy sites, take screenshots for verification, and extract structured data efficiently.
Automate repetitive data entry tasks into web-based systems, such as CRM or ERP platforms, by filling forms and submitting data. Implement session persistence with storageState to maintain login credentials and streamline workflows.
Offer a subscription-based service that provides automated web scraping and browser automation tools to businesses. Use Playwright's MCP integration to enable AI agents for scalable, low-code solutions, targeting industries like e-commerce and market research.
Provide consulting services to help companies implement and optimize end-to-end testing for their web applications using Playwright. Focus on reducing manual testing efforts, improving software quality, and integrating with CI/CD pipelines for continuous delivery.
Collect and sell structured data extracted from websites, such as pricing, product details, or news content, to clients in retail, finance, or research sectors. Leverage Playwright's scraping capabilities to handle dynamic sites and ensure data accuracy and timeliness.
💬 Integration Tip
Integrate Playwright with CI/CD tools like GitHub Actions or Jenkins for automated testing, and use MCP server options to customize browser settings and timeouts for reliable performance in production environments.
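As one concrete sketch of that CI integration, a minimal GitHub Actions workflow could look like the following. Action versions, the job name, and the Node version are assumptions to adapt to your project:

```yaml
name: e2e
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Install the browser plus the OS dependencies it needs on the runner
      - run: npx playwright install --with-deps chromium
      - run: npx playwright test
```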