# calibre-catalog-read

Read Calibre catalog data via `calibredb` over a Content server, and run a one-book analysis workflow that writes an HTML analysis block back to comments while caching analysis state in SQLite. Use for list/search/id lookups and the AI reading pipeline for a selected book.

Install via ClawdBot CLI:

clawdbot install NEXTAltair/calibre-catalog-read

Use this skill for:

- Read operations (list/search/id)
- One-book analysis pipeline (export -> analyze -> cache -> comments HTML apply)

Requirements:

- `calibredb` available on PATH in the runtime where scripts are executed.
- `ebook-convert` available for text extraction.
- `subagent-spawn-command-builder` installed (for spawn payload generation).

Connection and auth:

- `--with-library` format: `http://HOST:PORT/#LIBRARY_ID` (the Content server address is `HOST:PORT`).
- Credentials are read from `/home/altair/.openclaw/.env` (`CALIBRE_USERNAME=`, `CALIBRE_PASSWORD=`).
- Prefer `--password-env CALIBRE_PASSWORD` (the username auto-loads from env); pass `--username` explicitly only when needed.
- Saved auth file: `~/.config/calibre-catalog-read/auth.json`. Do not use `--save-plain-password` unless explicitly requested.

List books (JSON):
```shell
node skills/calibre-catalog-read/scripts/calibredb_read.mjs list \
  --with-library "http://192.168.11.20:8080/#Calibreライブラリ" \
  --password-env CALIBRE_PASSWORD \
  --limit 50
```
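The list command prints JSON to stdout. A minimal sketch of consuming that output, assuming it is an array of book records — the field names `id`, `title`, and `authors` here are assumptions, not a documented contract of `calibredb_read.mjs`:

```python
import json

def index_by_id(raw: str) -> dict:
    """Key an assumed array of book records by their numeric id."""
    return {book["id"]: book for book in json.loads(raw)}

# Hypothetical sample of what the list command might emit.
sample = '[{"id": 3, "title": "ある明治人の記録", "authors": "石光真人"}]'
books = index_by_id(sample)
print(books[3]["title"])  # ある明治人の記録
```

Keying by id makes the later per-book pipeline calls (`--book-id`) straightforward to drive from a single list call.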
Search books (JSON):
```shell
node skills/calibre-catalog-read/scripts/calibredb_read.mjs search \
  --with-library "http://192.168.11.20:8080/#Calibreライブラリ" \
  --password-env CALIBRE_PASSWORD \
  --query 'series:"中公文庫"'
```
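Calibre search expressions quote multi-word or non-ASCII values as `field:"value"`, as in the `series:"中公文庫"` example above. A small helper for building such a term; the escaping shown is a conservative sketch, not calibre's full quoting grammar:

```python
def field_query(field: str, value: str) -> str:
    """Build a calibre search term like series:"中公文庫"."""
    escaped = value.replace('"', '\\"')  # keep embedded quotes from ending the term
    return f'{field}:"{escaped}"'

print(field_query("series", "中公文庫"))  # series:"中公文庫"
```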
Get one book by id (JSON):
```shell
node skills/calibre-catalog-read/scripts/calibredb_read.mjs id \
  --with-library "http://192.168.11.20:8080/#Calibreライブラリ" \
  --password-env CALIBRE_PASSWORD \
  --book-id 3
```
Run one-book pipeline (analyze + comments HTML apply + cache):
```shell
uv run python skills/calibre-catalog-read/scripts/run_analysis_pipeline.py \
  --with-library "http://192.168.11.20:8080/#Calibreライブラリ" \
  --password-env CALIBRE_PASSWORD \
  --book-id 3 --lang ja
```
Initialize DB schema:
```shell
uv run python skills/calibre-catalog-read/scripts/analysis_db.py init \
  --db skills/calibre-catalog-read/state/calibre_analysis.sqlite
```
Check current hash state:
```shell
uv run python skills/calibre-catalog-read/scripts/analysis_db.py status \
  --db skills/calibre-catalog-read/state/calibre_analysis.sqlite \
  --book-id 3 --format EPUB
```
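The status check compares a stored hash against the current export so unchanged books are not re-analyzed. A minimal sketch of that idea; the `analysis_state` table and column names below are illustrative, not the script's real schema:

```python
import hashlib
import sqlite3

def sha256_of_file(path: str) -> str:
    """Hash an exported book file in chunks (suitable for large EPUBs)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_reanalysis(conn: sqlite3.Connection, book_id: int, fmt: str,
                     current_hash: str) -> bool:
    """True when no cached row exists or the stored hash differs."""
    row = conn.execute(
        "SELECT source_hash FROM analysis_state WHERE book_id = ? AND format = ?",
        (book_id, fmt),
    ).fetchone()
    return row is None or row[0] != current_hash

# Demo against an in-memory DB with the assumed schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE analysis_state (book_id INTEGER, format TEXT, source_hash TEXT)")
conn.execute("INSERT INTO analysis_state VALUES (3, 'EPUB', 'abc')")
print(needs_reanalysis(conn, 3, "EPUB", "abc"))  # False: hash unchanged
print(needs_reanalysis(conn, 3, "EPUB", "def"))  # True: source changed
```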
Use this split to avoid long blocking turns on chat listeners.
Each run is parameterized by `book_id`, plus the subagent settings `model`, `thinking`, and `runTimeoutSeconds`. Before the first subagent run in a session, confirm once:

- `model`
- `thinking` (low/medium/high)
- `runTimeoutSeconds`

Do not ask on every run. Reuse the confirmed settings for subsequent books in the same session unless the user asks to change them.
Book-reading analysis is a heavy task. Use a subagent with a lightweight model for analysis generation, then return the results to the main agent for the cache/apply steps.
Related files:

- references/subagent-analysis.prompt.md
- references/subagent-input.schema.json
- references/subagent-analysis.schema.json
- scripts/prepare_subagent_input.mjs

Rules:

- Run the Python scripts with `uv run python`.
- Use the prompt template (references/subagent-analysis.prompt.md) as the mandatory base; do not send ad-hoc relaxed read instructions.
- When extracted text is too short, the run stops with reason: low_text_requires_confirmation together with prompt_en text.
- Honor the lang input for the analysis output language.
- run_analysis_pipeline.py is a local script and does not call OpenClaw tools by itself.
Subagent execution must be orchestrated by the agent layer using sessions_spawn.
Required runtime sequence:
1. Prepare subagent_input.json plus chunked source_files from the extracted text:

```shell
node skills/calibre-catalog-read/scripts/prepare_subagent_input.mjs \
  --book-id <id> --title "<title>" --lang ja \
  --text-path /tmp/book_<id>.txt --out-dir /tmp/calibre_subagent_<id>
```

2. Use subagent-spawn-command-builder to generate the sessions_spawn payload, then call sessions_spawn with the calibre-read profile and run-specific analysis task text.
3. The subagent reads source_files and returns analysis JSON (schema-conformant).
4. Pass --analysis-json to run_analysis_pipeline.py for the DB/apply steps.

If step 2 is skipped, the pipeline falls back to a local minimal analysis (only for emergency/testing).
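Step 1's chunking can be sketched as a fixed-size split of the extracted text. The 20,000-character chunk size and the one-string-per-chunk layout below are assumptions for illustration, not what prepare_subagent_input.mjs actually does:

```python
def chunk_text(text: str, max_chars: int = 20_000) -> list[str]:
    """Split extracted text into fixed-size chunks for source_files."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)] or [""]

chunks = chunk_text("x" * 45_000)
print(len(chunks))                       # 3
print(len(chunks[0]), len(chunks[-1]))   # 20000 5000
```

Fixed-size chunks keep each source file within a subagent-friendly context budget; a real splitter might additionally break on paragraph boundaries.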
For Discord/chat, always run as two separate turns.
Turn A (kick off):

- Build the payload with subagent-spawn-command-builder (--profile calibre-read plus a run-specific --task).
- Call sessions_spawn using that payload.
- Register the running task (runId) via run_state.mjs upsert.

Turn B trigger: a completion announce/event for that run.

Turn B (apply):

- Run scripts/handle_completion.mjs (get -> apply -> remove, and fail on error).
- If the runId is missing from state, the handler returns stale_or_duplicate and does nothing.

Hard rule:
For one-book-at-a-time operation, keep a single JSON state file:

skills/calibre-catalog-read/state/runs.json

Use runId as the primary key (the subagent execution id).

Lifecycle:

- On start: record runId, book_id, title, status: "running", started_at.
- On completion: look up the runId and run apply.
- On failure: set status: "failed" plus error, and keep the record for retry/debug.

Rules:
Use helper scripts (avoid ad-hoc env var mistakes):
```shell
# Turn A: register running task
node skills/calibre-catalog-read/scripts/run_state.mjs upsert \
  --state skills/calibre-catalog-read/state/runs.json \
  --run-id <RUN_ID> --book-id <BOOK_ID> --title "<TITLE>"

# Turn B: completion handler (preferred)
node skills/calibre-catalog-read/scripts/handle_completion.mjs \
  --state skills/calibre-catalog-read/state/runs.json \
  --run-id <RUN_ID> \
  --analysis-json /tmp/calibre_<BOOK_ID>/analysis.json \
  --with-library "http://HOST:PORT/#LIBRARY_ID" \
  --password-env CALIBRE_PASSWORD --lang ja
```
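Under the hood the state file is plain JSON. A Python sketch of the Turn A upsert and failure marking, assuming runs.json is a dict keyed by runId — the real on-disk format of run_state.mjs may differ:

```python
import json
import tempfile
import time
from pathlib import Path

# Assumed runs.json shape: {"<runId>": {"book_id": ..., "title": ...,
# "status": ..., "started_at": ...}, ...}

def load_state(path: Path) -> dict:
    return json.loads(path.read_text()) if path.exists() else {}

def upsert_run(path: Path, run_id: str, book_id: int, title: str) -> dict:
    """Turn A: register the task as running, keyed by runId."""
    state = load_state(path)
    state[run_id] = {
        "book_id": book_id,
        "title": title,
        "status": "running",
        "started_at": time.time(),
    }
    path.write_text(json.dumps(state, ensure_ascii=False, indent=2))
    return state

def mark_failed(path: Path, run_id: str, error: str) -> dict:
    """Keep failed records for retry/debug; ignore unknown (stale) runIds."""
    state = load_state(path)
    if run_id in state:
        state[run_id].update(status="failed", error=error)
        path.write_text(json.dumps(state, ensure_ascii=False, indent=2))
    return state

p = Path(tempfile.mkdtemp()) / "runs.json"
upsert_run(p, "run-1", 3, "ある明治人の記録")
state = mark_failed(p, "run-1", "timeout")
print(state["run-1"]["status"])  # failed
```

Treating an unknown runId as a no-op mirrors the handler's stale_or_duplicate behavior.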
Generated Mar 1, 2026
Example use cases:

- Researchers catalog and analyze digital book collections, extracting key themes and summaries for literature reviews; the skill automates reading and annotating academic texts, saving time and ensuring comprehensive coverage.
- Publishers analyze book metadata and content for market trends, genre classification, and reader engagement insights, curating and recommending titles based on AI-driven analysis of text extracts.
- Libraries manage and enhance digital catalogs by automatically generating HTML analysis blocks for book comments, with search and retrieval operations backed by a cache of analysis states for efficient updates.
- Book club organizers select and analyze books, providing members with structured summaries and discussion points, streamlining preparation by extracting and caching insights from e-books.
- Companies analyze internal documentation and training materials stored in Calibre, generating searchable metadata and summaries to enhance knowledge discovery and employee training.
Monetization ideas:

- Offer a cloud-based service where libraries pay a monthly fee for catalog management and analysis, with tiered subscriptions based on library size and usage volume.
- License the skill to publishing houses as a tool for content analysis and market research, via one-time licensing fees or annual enterprise contracts.
- Provide a free basic version for personal use with limited features, and charge for advanced capabilities like bulk analysis or premium support.
💬 Integration Tip
Ensure all required binaries like calibredb and ebook-convert are installed and configured with proper environment variables, and test connectivity to the Calibre Content server before deployment.
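The tip above can be automated as a small preflight check. The binary and environment-variable names come from this skill's requirements; the function itself is a sketch, not part of the skill:

```python
import os
import shutil

def preflight(binaries=("calibredb", "ebook-convert"),
              env_vars=("CALIBRE_PASSWORD",)) -> list[str]:
    """Return a list of problems; an empty list means the runtime is ready."""
    problems = []
    for b in binaries:
        if shutil.which(b) is None:  # not found on PATH
            problems.append(f"missing binary: {b}")
    for v in env_vars:
        if not os.environ.get(v):    # unset or empty
            problems.append(f"missing env var: {v}")
    return problems

print(preflight())  # empty list means ready to run the pipeline
```

Connectivity to the Content server (`http://HOST:PORT`) still needs a separate live check, e.g. a throwaway `calibredb ... list --limit 1` call.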