puppeteer
Automate Chrome and Chromium with Puppeteer for scraping, testing, screenshots, and browser workflows.
Install via ClawdBot CLI:
clawdbot install ivangdavila/puppeteer
Requires:
On first use, read setup.md for integration guidelines.
User needs browser automation: web scraping, E2E testing, PDF generation, screenshots, or any headless Chrome task. Agent handles page navigation, element interaction, waiting strategies, and data extraction.
Scripts and outputs in ~/puppeteer/. See memory-template.md for structure.
~/puppeteer/
├── memory.md    # Status + preferences
├── scripts/     # Reusable automation scripts
└── output/      # Screenshots, PDFs, scraped data
| Topic | File |
|-------|------|
| Setup process | setup.md |
| Memory template | memory-template.md |
| Selectors guide | selectors.md |
| Waiting patterns | waiting.md |
Never click or type immediately after navigation. Always wait for the element:
await page.waitForSelector('#button');
await page.click('#button');
Clicking without waiting is the most common cause of "element not found" errors.
Prefer stable selectors in this order:
1. [data-testid="submit"] → test attributes (most stable)
2. #unique-id → IDs
3. form button[type="submit"] → semantic combinations
4. .class-name → classes (least stable, changes often)

Avoid: div > div > div > button → breaks on any DOM change.
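The priority order above can be sketched as a fallback helper that tries each selector in turn. The selector strings and timeout below are illustrative, not real targets:

```javascript
// Fallback sketch: try selectors from most to least stable and return the
// first element found. Selector strings here are examples only.
const SELECTOR_PRIORITY = [
  '[data-testid="submit"]',       // test attribute (most stable)
  '#submit-button',               // unique ID
  'form button[type="submit"]',   // semantic combination
  '.submit',                      // class (least stable)
];

async function findStable(page, selectors = SELECTOR_PRIORITY, timeout = 2000) {
  for (const selector of selectors) {
    try {
      // Wait briefly for each candidate before falling back to the next one.
      return await page.waitForSelector(selector, { timeout });
    } catch {
      // Not found in time — try the next, less stable selector.
    }
  }
  throw new Error(`No selector matched: ${selectors.join(', ')}`);
}
```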
After clicks that navigate, wait for navigation:
await Promise.all([
page.waitForNavigation(),
page.click('a.next-page')
]);
Without this, the script continues before the new page loads.
Always set viewport for consistent rendering:
await page.setViewport({ width: 1280, height: 800 });
Default viewport is 800x600, so many sites render differently or show mobile views.
Dismiss dialogs before they block interaction:
page.on('dialog', async dialog => {
await dialog.dismiss(); // or dialog.accept()
});
Unhandled dialogs freeze the script.
Always wrap in try/finally:
const browser = await puppeteer.launch();
try {
// ... automation code
} finally {
await browser.close();
}
Leaked browser processes consume memory and ports.
Add randomized delays between requests to avoid blocks:
await new Promise(r => setTimeout(r, 1000 + Math.random() * 2000));
Hammering sites triggers CAPTCHAs and IP bans.
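A minimal sequential scraping loop using this pattern. The `urls` and `scrapeOne` arguments are placeholders; a plain Promise-based sleep is used because page.waitForTimeout() was removed in Puppeteer 22:

```javascript
// Sketch: jittered politeness delay plus a sequential scraping loop.
function randomDelayMs(baseMs = 1000, jitterMs = 2000) {
  return baseMs + Math.random() * jitterMs; // 1–3 s by default
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeAll(page, urls, scrapeOne) {
  const results = [];
  for (const url of urls) {
    await page.goto(url, { waitUntil: 'networkidle2' });
    results.push(await scrapeOne(page));   // caller-supplied extraction
    await sleep(randomDelayMs());          // pause between requests
  }
  return results;
}
```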
- page.click() on an invisible element → fails silently; use waitForSelector with visible: true
- page.evaluate() returns undefined → it cannot return DOM nodes, only serializable data
- Sites detect headless mode → use headless: 'new' or set a user agent
- Page replaced before scraping finishes → use page.waitForNavigation() or data is lost
- Elements hidden inside shadow DOM → use page.evaluateHandle() to pierce shadow roots
- Sessions reset on every launch → use userDataDir for session persistence

Data that stays local:
This skill does NOT:
Install with clawhub install if user confirms:
playwright → Cross-browser automation alternative
chrome → Chrome DevTools and debugging
web → General web development

clawhub star puppeteer
clawhub sync

Generated Mar 1, 2026
Automate daily scraping of competitor product prices and availability from e-commerce websites to update pricing strategies. The skill handles navigation, waiting for dynamic content, and extracting data into structured formats for analysis.
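A sketch of the extraction step for this use case, assuming hypothetical .product-card, .product-name, .price, and .out-of-stock selectors:

```javascript
// Sketch: extract product rows with page.$$eval and normalize price strings.
// All selectors are hypothetical — substitute the target site's own markup.
function parsePrice(text) {
  // "$1,299.99" -> 1299.99; returns NaN for unparseable input
  const cleaned = text.replace(/[^0-9.]/g, '');
  return cleaned ? Number(cleaned) : NaN;
}

async function scrapePrices(page) {
  const rows = await page.$$eval('.product-card', (cards) =>
    cards.map((card) => ({
      name: card.querySelector('.product-name')?.textContent.trim() ?? '',
      priceText: card.querySelector('.price')?.textContent.trim() ?? '',
      inStock: !card.querySelector('.out-of-stock'),
    }))
  );
  // Parse on the Node side: only serializable data crosses the evaluate boundary.
  return rows.map((r) => ({ ...r, price: parsePrice(r.priceText) }));
}
```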
Perform end-to-end testing of web applications by simulating user interactions like form submissions and button clicks to ensure functionality. It includes handling popups, waiting for elements, and generating screenshots for bug reports.
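A condensed sketch of such a test, assuming a hypothetical login page with #email, #password, and .dashboard selectors:

```javascript
// Sketch: fill a login form, submit, and capture a screenshot on failure.
// URL, selectors, and credentials are placeholders.
async function testLogin(page) {
  await page.goto('https://example.com/login', { waitUntil: 'networkidle2' });
  await page.waitForSelector('#email');            // wait before typing
  await page.type('#email', 'user@example.com');
  await page.type('#password', 'secret');
  await Promise.all([
    page.waitForNavigation(),                      // submit triggers navigation
    page.click('button[type="submit"]'),
  ]);
  const ok = await page.$('.dashboard') !== null;
  if (!ok) {
    // Capture evidence for the bug report before failing.
    await page.screenshot({ path: 'output/login-failure.png', fullPage: true });
    throw new Error('Login did not reach the dashboard');
  }
}
```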
Scrape headlines and article summaries from multiple news websites on a schedule to compile into a centralized feed. The skill manages navigation, respects rate limits to avoid blocks, and outputs data for further processing.
Convert web pages such as reports or documentation into PDF files for offline use or distribution. It sets viewports for consistent rendering and handles page navigation to capture multi-page content accurately.
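A sketch of the PDF step. The URL and output path are placeholders, and note that page.pdf() only works in headless mode:

```javascript
// Sketch: render a page to an A4 PDF with print backgrounds preserved.
// The output path follows the skill's ~/puppeteer/output/ convention.
async function savePdf(page, url, outPath) {
  await page.setViewport({ width: 1280, height: 800 }); // consistent rendering
  await page.goto(url, { waitUntil: 'networkidle2' });  // wait for dynamic content
  await page.pdf({
    path: outPath,
    format: 'A4',
    printBackground: true,                // keep CSS backgrounds in the output
    margin: { top: '1cm', bottom: '1cm' },
  });
}
```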
Automate the collection of public posts or metrics from social media platforms for market research. The skill uses specific selectors to target elements and handles delays to mimic human behavior and avoid detection.
Offer a subscription-based service that uses Puppeteer to provide clients with automated web scraping and data extraction reports. Revenue is generated through monthly fees based on data volume and frequency of updates.
Provide consulting services to businesses needing custom automation scripts for tasks like testing or scraping, using Puppeteer to develop tailored solutions. Revenue comes from project-based contracts and hourly rates.
Develop and distribute a freemium tool that integrates Puppeteer for browser automation, with basic features free and advanced capabilities like scheduling or analytics behind a paywall. Revenue is generated through premium upgrades and enterprise licenses.
💬 Integration Tip
Ensure Node.js is installed and follow the setup guidelines in setup.md to avoid common errors like element not found or navigation issues.