# notion-clipper-skill

Clip web pages to Notion. Fetches any URL via Chrome CDP, converts the HTML to Markdown and then to Notion blocks, and saves the result to a user-specified Notion database or page. Use when the user wants to save/clip a webpage to Notion, or mentions "clip to notion", "save page to notion", or "网页剪藏到Notion" ("clip webpage to Notion").
Install via ClawdBot CLI:

```shell
clawdbot install EwingYangs/notion-clipper-skill
```
Store your Notion API key:

```shell
mkdir -p ~/.config/notion
echo "ntn_your_key_here" > ~/.config/notion/api_key
```
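Before running the skill, a quick sanity check can confirm the key file exists and carries the `ntn_` prefix Notion uses for integration secrets. `check_notion_key` below is a hypothetical helper, not part of the skill, and the prefix check is only a heuristic:

```shell
# check_notion_key FILE — succeed only if FILE is non-empty and starts
# with the "ntn_" prefix used by Notion integration keys.
# Hypothetical helper, not shipped with the skill.
check_notion_key() {
  [ -s "$1" ] && grep -q '^ntn_' "$1"
}

# Validate the default key location.
check_notion_key "$HOME/.config/notion/api_key" \
  && echo "Notion API key looks OK" \
  || echo "Missing or malformed key at ~/.config/notion/api_key" >&2
```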
Dependencies are auto-installed when the script runs. No manual setup needed.
CRITICAL: Always use the command pattern below. It auto-installs dependencies on first run.
All commands run from `${SKILL_DIR}/scripts/`; always run the lazy install first:

```shell
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && npx -y tsx main.ts <args>)
```

Replace `${SKILL_DIR}` with the actual path (e.g. `/Users/xxx/.claude/skills/notion-clipper-skill`).

IMPORTANT - Use this command pattern for best results:
```shell
# Recommended: Clear proxy env vars and use tsx runtime
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && unset http_proxy https_proxy all_proxy && npx -y tsx main.ts <url> --database-name "Resources")
```
Why this pattern?

- `unset http_proxy https_proxy all_proxy` - avoids ECONNREFUSED errors from proxy conflicts
- `tsx` runtime - a Node.js runtime that handles direct connections properly (Bun has proxy issues)
- `(test -d node_modules || npm install)` - auto-installs dependencies if they are missing

If you encounter network issues, see Troubleshooting below.
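The lazy-install guard can be factored into a small reusable helper. `ensure_deps` below is a hypothetical sketch; the optional second argument exists only to make the guard testable:

```shell
# ensure_deps DIR [INSTALL_CMD] — run INSTALL_CMD (default: "npm install")
# inside DIR only when DIR/node_modules is absent; later calls are no-ops.
ensure_deps() {
  dir=$1
  install_cmd=${2:-"npm install"}
  test -d "$dir/node_modules" || (cd "$dir" && eval "$install_cmd")
}

# Typical use before invoking the clipper:
#   ensure_deps "${SKILL_DIR}/scripts" && npx -y tsx main.ts <args>
```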
```shell
# Clip to a Notion database by NAME (recommended - searches for the database)
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && npx -y tsx main.ts <url> --database-name "Resource")

# Clip to a Notion database by ID
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && npx -y tsx main.ts <url> --database <database_id>)

# Clip to an existing page (appends blocks)
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && npx -y tsx main.ts <url> --page <page_id>)

# List all accessible databases
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && npx -y tsx main.ts --list-databases)

# For pages requiring login (wait mode)
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && npx -y tsx main.ts <url> --database-name "Resource" --wait)
```
| Option | Description |
|--------|-------------|
| `<url>` | URL to clip |
| `--database-name`, `-n` | Target database by name (searches for a match) |
| `--database`, `-d` | Target Notion database by ID |
| `--page`, `-p` | Target Notion page ID (appends blocks) |
| `--list-databases`, `-l` | List all accessible databases and exit |
| `--wait`, `-w` | Wait for user signal before capturing |
| `--timeout`, `-t` | Page load timeout in milliseconds (default: 30000) |
| `--no-bookmark` | Don't include a bookmark block at the top |
| Mode | Behavior | Use When |
|------|----------|----------|
| Auto (default) | Capture on network idle | Public pages, static content |
| Wait (`--wait`) | User signals when ready | Login-required, lazy loading, paywalls |
Wait mode workflow: run with `--wait` → Chrome opens and the script outputs "Press Enter when ready"; press Enter once the page is fully loaded to capture it.

When saving to a database, the script creates a new page with:
When appending to a page, it adds:
For best results, create a Notion database with these properties:
Clip a tweet to the "Resource" database (by name):

```shell
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && unset http_proxy https_proxy all_proxy && npx -y tsx main.ts "https://x.com/dotey/status/123456" -n "Resource")
```

List all databases first:

```shell
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && unset http_proxy https_proxy all_proxy && npx -y tsx main.ts --list-databases)
```

Clip an article requiring login:

```shell
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && unset http_proxy https_proxy all_proxy && npx -y tsx main.ts "https://medium.com/article" -n "Reading" --wait)
```

Append to a reading-notes page:

```shell
(cd "${SKILL_DIR}/scripts" && (test -d node_modules || npm install) && unset http_proxy https_proxy all_proxy && npx -y tsx main.ts "https://blog.example.com/post" -p xyz789)
```
Quick helper (add to your ~/.bashrc or ~/.zshrc; a function is used because an alias cannot forward arguments into the subshell):

```shell
notion-clip() {
  (cd "${SKILL_DIR}/scripts" && unset http_proxy https_proxy all_proxy && npx -y tsx main.ts "$@")
}
# Usage: notion-clip <url> -n "Resources"
```
Runtime: use `tsx` (Bun may route requests through a proxy and return an empty body; use Node). Other dependencies auto-install on first run.
| Variable | Description |
|----------|-------------|
| `NOTION_CLIPPER_CHROME_PATH` | Custom Chrome executable path |
| `NOTION_CLIPPER_CHROME_PROFILE_DIR` | Custom Chrome profile directory |
| `https_proxy` / `HTTPS_PROXY` | Proxy for Notion API (e.g. http://127.0.0.1:7890) |
| `http_proxy` / `HTTP_PROXY` | Same as above |
| `all_proxy` | Optional, e.g. socks5://127.0.0.1:7890 |
Example (proxy on port 7890):

```shell
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
```
| Error | Cause | Solution |
|-------|-------|----------|
| `ECONNREFUSED 208.103.161.1:443` | DNS returns a blocked IP; proxy conflict | 1. Close VPN/proxy software. 2. Run `unset http_proxy https_proxy all_proxy`. 3. Switch networks (e.g., mobile hotspot). |
| Notion API returned empty body (status 200) | Using Bun, which routes through the proxy incorrectly | Run with tsx: `npx -y tsx main.ts ...` (NOT Bun) |
| `fetch failed` or `ECONNREFUSED` | Proxy env vars are set, but Node.js `https` doesn't honor them | Either: 1. Use a network without a proxy (unset the env vars). 2. Ensure the proxy allows HTTPS traffic. |
| CloudFlare 403 | Direct IP access triggers security protection | Use the hostname instead of the IP; ensure a proper Authorization header |
| Mixed: sometimes works, sometimes fails | Unstable network, or DNS returns different IPs | The script retries 6 times with exponential backoff (1s, 2s, 4s, capped at 4s) |

Best Practice: for reliable Notion API access, use a stable network (a mobile hotspot often works better than a corporate VPN).
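The retry policy described above can be sketched in shell. `retry_with_backoff` is a hypothetical stand-in for the script's internal logic (6 attempts, delays doubling from 1s and capped at 4s), not the actual implementation:

```shell
# retry_with_backoff CMD [ARGS...] — run CMD up to 6 times, sleeping
# 1s, 2s, 4s, 4s, 4s between failed attempts (exponential, capped at 4s).
retry_with_backoff() {
  delay=1
  attempt=1
  while [ "$attempt" -le 6 ]; do
    "$@" && return 0
    [ "$attempt" -lt 6 ] && sleep "$delay"
    delay=$((delay * 2))
    [ "$delay" -gt 4 ] && delay=4
    attempt=$((attempt + 1))
  done
  return 1
}

# Example: retry a flaky fetch.
#   retry_with_backoff curl -fsS https://api.notion.com/v1/users/me
```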
| Error | Cause | Solution |
|-------|-------|----------|
| Invalid URL for link | Notion API rejects non-http(s) URLs | The script removes all Markdown links by default to avoid validation errors; content is preserved, only the links are stripped. |
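The link-stripping behavior can be approximated with a one-line filter. `strip_md_links` is a simplified sketch of the idea; the real script handles images and edge cases this regex does not:

```shell
# strip_md_links — rewrite [text](url) as plain "text", preserving the
# visible content while dropping the link target. Simplified sketch only.
strip_md_links() {
  sed -E 's/\[([^]]*)\]\([^)]*\)/\1/g'
}

# Example:
#   echo 'see [docs](javascript:alert(1)) here' | strip_md_links
```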
Note: The script automatically removes these invalid URL types:

- `javascript:`, `data:`, `file:`, `about:` protocols
- App-specific schemes (`weixin:`, `wx://`)
- Relative paths (`/path`, `./path`)
- Anchors (`#anchor`)

Additional tips:

- Custom Chrome location: set the `NOTION_CLIPPER_CHROME_PATH` environment variable
- Slow pages: increase the `--timeout` value or use `--wait` mode
- Dynamic/lazy-loaded pages: use `--wait` mode

These optimizations were implemented to handle unstable networks and invalid URLs.
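A scheme filter matching the rules above can be sketched with a `case` statement. `is_valid_url` is a hypothetical helper mirroring the filtering behavior, not the skill's actual code:

```shell
# is_valid_url URL — accept only absolute http(s) URLs; reject other
# protocols, app-specific schemes, relative paths, and bare anchors.
is_valid_url() {
  case $1 in
    http://*|https://*) return 0 ;;
    *) return 1 ;;
  esac
}
```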