youtube-data

Access YouTube video data: transcripts, metadata, channel info, search, and playlists. A lightweight alternative to Google's YouTube Data API with no quota limits. Use when the user needs structured data from YouTube videos, channels, or playlists without dealing with Google API setup, OAuth, or daily quotas.
Install via ClawdBot CLI:
clawdbot install therohitdas/youtube-data

YouTube data access via TranscriptAPI.com, a lightweight alternative to Google's YouTube Data API.
If $TRANSCRIPT_API_KEY is not set, help the user create an account (100 free credits, no card):
Step 1: Register. Ask the user for their email.
node ./scripts/tapi-auth.js register --email USER_EMAIL
An OTP is sent to the email. Ask the user: _"Check your email for a 6-digit verification code."_
Step 2: Verify. Once the user provides the OTP:
node ./scripts/tapi-auth.js verify --token TOKEN_FROM_STEP_1 --otp CODE
The API key is saved to ~/.openclaw/openclaw.json; see File Writes below for details.
Manual option: transcriptapi.com/signup → Dashboard → API Keys.
The verify and save-key commands save the API key to ~/.openclaw/openclaw.json (sets skills.entries.transcriptapi.apiKey and enabled: true). Existing file is backed up to ~/.openclaw/openclaw.json.bak before modification.
To use the API key in a terminal or CLI outside the agent, add it to your shell profile manually:
export TRANSCRIPT_API_KEY=YOUR_API_KEY
Full OpenAPI spec: transcriptapi.com/openapi.json; consult it for the latest parameters and schemas.
curl -s "https://transcriptapi.com/api/v2/youtube/transcript?video_url=VIDEO_URL&format=json&include_timestamp=true&send_metadata=true" \
  -H "Authorization: Bearer $TRANSCRIPT_API_KEY"
Response:
{
"video_id": "dQw4w9WgXcQ",
"language": "en",
"transcript": [
{ "text": "We're no strangers to love", "start": 18.0, "duration": 3.5 }
],
"metadata": {
"title": "Rick Astley - Never Gonna Give You Up",
"author_name": "Rick Astley",
"author_url": "https://www.youtube.com/@RickAstley",
"thumbnail_url": "https://i.ytimg.com/vi/dQw4w9WgXcQ/maxresdefault.jpg"
}
}
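The transcript response above is easy to post-process. A minimal sketch, assuming `jq` is installed: flatten the `transcript` segments into plain text (the sample JSON here mirrors the response shown above).

```shell
# Join all transcript segment texts into a single plain-text string.
text=$(jq -r '[.transcript[].text] | join(" ")' <<'EOF'
{ "transcript": [ { "text": "We're no strangers to love", "start": 18.0, "duration": 3.5 } ] }
EOF
)
echo "$text"
```

The same filter works on the real response piped straight from curl.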
curl -s "https://transcriptapi.com/api/v2/youtube/search?q=QUERY&type=video&limit=20" \
-H "Authorization: Bearer $TRANSCRIPT_API_KEY"
Video result fields: videoId, title, channelId, channelTitle, channelHandle, channelVerified, lengthText, viewCountText, publishedTimeText, hasCaptions, thumbnails
Channel result fields (type=channel): channelId, title, handle, url, description, subscriberCount, verified, rssUrl, thumbnails
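A sketch of pulling the key fields out of a search response with `jq`. The field names match the list above; the top-level `results` array name is an assumption, so verify it against the OpenAPI spec.

```shell
# Print one "videoId<TAB>title" line per search result.
summary=$(jq -r '.results[] | "\(.videoId)\t\(.title)"' <<'EOF'
{ "results": [ { "videoId": "abc123", "title": "Example talk" } ] }
EOF
)
echo "$summary"
```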
Channel endpoints accept a channel parameter: an @handle, a channel URL, or a UC... ID. No need to resolve it first.
Resolve handle to ID (free):
curl -s "https://transcriptapi.com/api/v2/youtube/channel/resolve?input=@TED" \
-H "Authorization: Bearer $TRANSCRIPT_API_KEY"
Returns: {"channel_id": "UCsT0YIqwnpJCM-mx7-gSA4Q", "resolved_from": "@TED"}
Latest 15 videos with exact stats (free):
curl -s "https://transcriptapi.com/api/v2/youtube/channel/latest?channel=@TED" \
-H "Authorization: Bearer $TRANSCRIPT_API_KEY"
Returns: channel info, results array with videoId, title, published (ISO), viewCount (exact number), description, thumbnail
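Because `viewCount` is an exact number here (not a "1.2M views" string), you can aggregate it directly. A sketch, assuming a `results` envelope as described above:

```shell
# Sum exact view counts across the latest videos.
total=$(jq '[.results[].viewCount] | add' <<'EOF'
{ "results": [ { "videoId": "a", "viewCount": 1000 }, { "videoId": "b", "viewCount": 250 } ] }
EOF
)
echo "$total"
```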
All channel videos (paginated, 1 credit/page):
curl -s "https://transcriptapi.com/api/v2/youtube/channel/videos?channel=@NASA" \
-H "Authorization: Bearer $TRANSCRIPT_API_KEY"
Returns 100 videos per page + continuation_token for pagination.
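The pagination loop can be sketched as follows. `fetch_page` is a hypothetical stand-in: in practice it would be the curl call to `/channel/videos`, with `&continuation_token=...` appended when a token is present. The `has_more` / `continuation_token` handling is the point of the sketch.

```shell
# Stand-in for the real curl call; returns two canned pages for illustration.
fetch_page() {
  if [ -z "$1" ]; then
    echo '{"results":[{"videoId":"v1"}],"continuation_token":"tok1","has_more":true}'
  else
    echo '{"results":[{"videoId":"v2"}],"continuation_token":null,"has_more":false}'
  fi
}

token=""
all_ids=""
while :; do
  page=$(fetch_page "$token")
  ids=$(printf '%s' "$page" | jq -r '.results[].videoId')
  all_ids="$all_ids $ids"
  # Stop when the API says there are no more pages.
  [ "$(printf '%s' "$page" | jq -r '.has_more')" = "true" ] || break
  token=$(printf '%s' "$page" | jq -r '.continuation_token')
done
echo "$all_ids"
```

Each real iteration costs 1 credit, so cap the loop if you only need the first N pages.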
Search within channel (1 credit):
curl -s "https://transcriptapi.com/api/v2/youtube/channel/search?channel=@TED&q=QUERY&limit=30" \
  -H "Authorization: Bearer $TRANSCRIPT_API_KEY"
Accepts a playlist parameter: a YouTube playlist URL or a playlist ID.
curl -s "https://transcriptapi.com/api/v2/youtube/playlist/videos?playlist=PL_ID" \
-H "Authorization: Bearer $TRANSCRIPT_API_KEY"
Returns: results (videos), playlist_info (title, numVideos, ownerName, viewCount), continuation_token, has_more
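A small sketch of reading `playlist_info` with `jq`; the sample JSON below is illustrative, with the field names taken from the list above.

```shell
# Summarize a playlist as "title: N videos".
info=$(jq -r '"\(.playlist_info.title): \(.playlist_info.numVideos) videos"' <<'EOF'
{ "playlist_info": { "title": "Science Talks", "numVideos": 42, "ownerName": "TED" } }
EOF
)
echo "$info"
```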
| Endpoint | Cost | Data returned |
| --------------- | -------- | -------------------------- |
| transcript | 1 | Full transcript + metadata |
| search | 1 | Video/channel details |
| channel/resolve | free | Channel ID mapping |
| channel/latest | free | 15 videos + exact stats |
| channel/videos | 1/page | 100 videos per page |
| channel/search | 1 | Videos matching query |
| playlist/videos | 1/page | 100 videos per page |
| Code | Action |
| ---- | -------------------------------------- |
| 402 | No credits; top up at transcriptapi.com/billing |
| 404 | Not found |
| 408 | Timeout; retry once |
| 422 | Invalid param format |
Free tier: 100 credits, 300 req/min.
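The error table above maps directly onto a small dispatch helper. A sketch: in practice you would feed it the status from `curl -w '%{http_code}'`; `handle_status` itself is a hypothetical name.

```shell
# Map HTTP status codes to the actions in the table above.
handle_status() {
  case "$1" in
    200) echo "ok" ;;
    402) echo "out of credits; top up at transcriptapi.com/billing" ;;
    404) echo "not found" ;;
    408) echo "timeout; retry once" ;;
    422) echo "invalid parameter format" ;;
    *)   echo "unexpected status $1" ;;
  esac
}
msg=$(handle_status 402)
echo "$msg"
```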
Generated Mar 1, 2026
Marketing teams can extract transcripts and metadata from competitor YouTube videos to analyze messaging, keywords, and engagement trends. This helps identify content gaps and optimize video strategies without API quotas.
Educators and e-learning platforms can gather transcripts from educational channels to create study guides, subtitles, or searchable knowledge bases. Free channel lookups enable easy access to structured video data.
News agencies use this skill to search and retrieve video data from channels for real-time reporting or archival purposes. It supports tracking video trends and verifying content across platforms efficiently.
SEO specialists analyze video transcripts to improve search rankings by extracting keywords and metadata. This aids in optimizing video descriptions and tags for better visibility on YouTube and search engines.
Accessibility providers generate accurate captions and transcripts from videos to support hearing-impaired users. The skill offers reliable data extraction for creating compliant subtitles without complex API setups.
Offer 100 free credits to attract users, then charge for additional credits or premium features like higher request limits. This model encourages adoption while monetizing heavy usage through tiered subscription plans.
License the YouTube data access to other businesses, such as analytics platforms or content creators, who integrate it into their own tools. This generates recurring revenue through licensing agreements and custom integrations.
Develop a dashboard that aggregates YouTube data for enterprises, providing insights on video performance and competitor analysis. Revenue comes from enterprise subscriptions and advanced reporting features.
💬 Integration Tip
Set up the TRANSCRIPT_API_KEY environment variable and use the provided scripts for easy authentication; refer to the OpenAPI spec for detailed endpoint parameters.