douban-sync-skill

Export Douban (豆瓣) collections (books, movies, music, games) to local CSV files (Obsidian-compatible) and keep them in sync via RSS. Use when the user wants to export their Douban collections.

Install via the Clawdbot CLI:

clawdbot install cosformula/douban-sync-skill
Use the browser tool to scrape all collection pages. Requires the user to be logged into Douban.
browser → douban.com/people/{USER_ID}/{category}?start=0&sort=time&mode=list
Categories and URL paths:
book.douban.com/people/{ID}/collect (读过 read), /do (在读 reading), /wish (想读 want to read)
movie.douban.com/people/{ID}/collect (看过 watched), /do (在看 watching), /wish (想看 want to watch)
music.douban.com/people/{ID}/collect (听过 listened), /do (在听 listening), /wish (想听 want to listen)
www.douban.com/people/{ID}/games?action=collect (玩过 played), ?action=do (在玩 playing), ?action=wish (想玩 want to play)

Each page shows up to 30 items in list mode (some pages may show fewer due to delisted entries). Paginate with ?start=0,30,60... — the script uses the paginator's "next" button to decide whether to continue.
Rate limiting: Wait 2-3 seconds between pages. If blocked, wait 30 seconds and retry.
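The pagination and rate-limiting rules above can be sketched as a small loop. This is a minimal illustration, not the actual scraper: the parsing step is left as a placeholder, and the `class="next"` check for the paginator button is an assumption about Douban's markup.

```javascript
// Hypothetical sketch of the scrape loop; the real scripts may differ.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Build a list-mode page URL (matches the URL pattern shown above).
function pageUrl(base, userId, pathPart, start) {
  return `https://${base}/people/${userId}/${pathPart}?start=${start}&sort=time&mode=list`;
}

async function scrapeCategory(userId, base, pathPart) {
  const items = [];
  for (let start = 0; ; start += 30) {
    const url = pageUrl(base, userId, pathPart, start);
    let res = await fetch(url);
    if (res.status === 403 || res.status === 429) {
      await sleep(30_000); // blocked: wait 30 seconds and retry once
      res = await fetch(url);
    }
    const html = await res.text();
    // ...parse up to 30 list-mode items from `html` into `items`...
    const hasNext = /class="next"/.test(html); // assumed paginator marker
    if (!hasNext) break;
    await sleep(2000 + Math.random() * 1000); // 2-3 s between pages
  }
  return items;
}
```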
Scripts:
scripts/douban-scraper.mjs — HTTP-only, no browser needed (may get rate-limited)
scripts/douban-browser-scraper.mjs — drives a running browser via Puppeteer CDP
scripts/douban-extract.mjs — generates a browser console script for manual extraction

For ongoing sync, run scripts/douban-rss-sync.mjs — no login needed:
node scripts/douban-rss-sync.mjs
Setup: Set environment variables:
DOUBAN_USER (required): Douban user ID
DOUBAN_OUTPUT_DIR (optional): output root directory, default ~/douban-sync

Recommended: add a daily cron job for automatic sync.
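The environment-variable handling above could look like the following sketch. The function name `loadConfig` is hypothetical; only the variable names and the default come from this README, and tilde expansion is left to the caller.

```javascript
// Read the skill's configuration from environment variables.
function loadConfig(env = process.env) {
  const user = env.DOUBAN_USER;
  if (!user) throw new Error("DOUBAN_USER is required");
  // Default output root per the README.
  const outputRoot = env.DOUBAN_OUTPUT_DIR ?? "~/douban-sync";
  return { user, outputRoot };
}
```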
Four CSV files per user in the output directory:
douban-sync/
└── {user_id}/
├── 书.csv
├── 影视.csv
├── 音乐.csv
└── 游戏.csv
CSV columns:
title,url,date,rating,status,comment
"书名","https://book.douban.com/subject/12345/","2026-01-15","★★★★★","读过","短评内容"
status: 读过/在读/想读 (books), 看过/在看/想看 (movies), 听过/在听/想听 (music), 玩过/在玩/想玩 (games)

Both full export and RSS sync deduplicate by Douban URL, so they are safe to run multiple times.
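The CSV row format and URL-keyed deduplication can be sketched as below. This is an illustration of the described behavior, not the scripts' actual code; `toCsvRow` and `mergeByUrl` are hypothetical names.

```javascript
// Quote every field, doubling embedded quotes per the CSV convention.
function toCsvRow(item) {
  const esc = (s) => `"${String(s).replace(/"/g, '""')}"`;
  return [item.title, item.url, item.date, item.rating, item.status, item.comment]
    .map(esc)
    .join(",");
}

// Key rows by Douban subject URL so re-running export/sync never duplicates.
function mergeByUrl(existing, incoming) {
  const byUrl = new Map(existing.map((it) => [it.url, it]));
  for (const it of incoming) byUrl.set(it.url, it); // newer entry wins
  return [...byUrl.values()];
}
```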