podcast-chaptering-highlights
Create chapters, highlights, and show notes from podcast audio or transcripts. Use when a user wants chapter markers, highlight clips, or show-note drafts without publishing or distribution actions.
Install via the ClawdBot CLI:
clawdbot install codedao12/podcast-chaptering-highlights
Grade: Fair — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Generated Mar 20, 2026
A solo podcaster uploads an audio file to generate chapter markers and highlight clips for their latest episode. They use the output to structure their show notes and create social media snippets to promote the episode, without manual editing.
A company produces internal training podcasts and uses the skill to segment audio into chapters based on different topics. This helps employees easily navigate to relevant sections and allows the team to draft concise summaries for internal documentation.
A university uses transcripts of lecture recordings to create chapters and highlights for online course platforms. This boosts student engagement by breaking long sessions into digestible segments with key points noted.
A podcast host interviews multiple guests and uses the skill to automatically generate chapter timestamps and highlight clips from the transcript. This streamlines post-production by identifying standout moments for promotional use.
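The chapter-marker output described in these use cases can be sketched as a simple transformation from detected chapter boundaries to show-note marker lines. This is a minimal illustration, not the skill's actual implementation; the chapter titles and timestamps below are hypothetical:

```python
def fmt_ts(seconds: int) -> str:
    """Format seconds as H:MM:SS (or MM:SS under an hour) for chapter markers."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}" if h else f"{m:02d}:{s:02d}"

def chapters_to_markers(chapters):
    """Render (start_seconds, title) pairs as show-note chapter lines."""
    return [f"{fmt_ts(start)} {title}" for start, title in chapters]

# Hypothetical boundaries detected from a transcript
chapters = [(0, "Intro"), (95, "Guest background"), (1420, "Main topic")]
print("\n".join(chapters_to_markers(chapters)))
# → 00:00 Intro
#   01:35 Guest background
#   23:40 Main topic
```

Lines in this format drop straight into show notes, and most podcast hosts render them as clickable chapters.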
Offer the skill as part of a subscription-based platform for podcasters, charging a monthly fee for access to automated chaptering and highlight generation. Revenue comes from tiered plans based on usage volume or features like advanced templates.
Provide a free basic version with limited features, such as processing short audio files, and upsell to a paid plan for unlimited usage, priority support, and integration with hosting platforms. Revenue is generated from premium upgrades and add-ons.
License the skill to large media companies or educational institutions for internal use, offering custom integrations, enhanced security, and dedicated support. Revenue is based on annual contracts with fees scaled by user count or processing needs.
💬 Integration Tip
Ensure audio files are in a supported format and provide clear metadata (e.g., guest names) to improve chapter accuracy; see the bundled references for setup guidance.
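A pre-flight check along the lines of this tip can be sketched as follows. The list of supported formats and the metadata fields are assumptions for illustration, not the skill's documented contract:

```python
import os

# Assumed format list for illustration; check the skill's docs for the real one.
SUPPORTED_FORMATS = {".mp3", ".wav", ".m4a", ".flac"}

def validate_episode(path: str, metadata: dict) -> list[str]:
    """Return a list of problems found; an empty list means the upload looks OK."""
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_FORMATS:
        problems.append(f"unsupported audio format: {ext or '(none)'}")
    if not metadata.get("guests"):
        problems.append("no guest names provided; chapter titles may be vague")
    return problems

print(validate_episode("episode42.aiff", {"guests": []}))
```

Running checks like this before upload surfaces format and metadata issues early instead of producing low-quality chapters.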
Scored Apr 19, 2026
Local speech-to-text with the Whisper CLI (no API key).
ElevenLabs text-to-speech with a macOS-style say UX.
Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
Text-to-speech conversion using node-edge-tts npm package for generating audio from text. Supports multiple voices, languages, speed adjustment, pitch control, and subtitle generation. Use when: (1) User requests audio/voice output with the "tts" trigger or keyword. (2) Content needs to be spoken rather than read (multitasking, accessibility, driving, cooking). (3) User wants a specific voice, speed, pitch, or format for TTS output.
Start voice calls via the OpenClaw voice-call plugin.
Local text-to-speech via sherpa-onnx (offline, no cloud).