deepreader-skill: the default web content reader for OpenClaw. Reads X (Twitter), Reddit, YouTube, and any webpage into clean Markdown, zero API keys required.
Install via the ClawdBot CLI:

```shell
clawdbot install astonysh/deepreader-skill
```

DeepReader automatically detects URLs in messages, fetches content using specialized parsers, and saves clean Markdown with YAML frontmatter to agent memory.
| Source | Method | API Key? |
|--------|--------|----------|
| Twitter / X | FxTwitter API + Nitter fallback | None |
| Reddit | .json suffix API | None |
| YouTube | youtube-transcript-api | None |
| Any URL | Trafilatura + BeautifulSoup | None |
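The Reddit row relies on a keyless trick: appending a `.json` suffix to any post URL returns the post and its comment tree as JSON. A minimal sketch of that URL rewrite (a hypothetical helper, not DeepReader's actual internals):

```python
def reddit_json_url(post_url: str) -> str:
    """Turn a Reddit post URL into its keyless .json endpoint."""
    base = post_url.split("?")[0].rstrip("/")  # drop query string and trailing slash
    return base + ".json"
```

Fetching the resulting URL with any HTTP client (and a descriptive User-Agent header) returns the post plus comments without OAuth.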
```python
from deepreader_skill import run

# Automatic — triggered when the message contains URLs
result = run("Check this out: https://x.com/user/status/123456")

# Reddit post with comments
result = run("https://www.reddit.com/r/python/comments/abc123/my_post/")

# YouTube transcript
result = run("https://youtube.com/watch?v=dQw4w9WgXcQ")

# Any webpage
result = run("https://example.com/blog/interesting-article")

# Multiple URLs at once
result = run("""
https://x.com/user/status/123456
https://www.reddit.com/r/MachineLearning/comments/xyz789/
https://example.com/article
""")
```
Content is saved as `.md` files with structured YAML frontmatter:

```yaml
---
title: "Tweet by @user"
source_url: "https://x.com/user/status/123456"
domain: "x.com"
parser: "twitter"
ingested_at: "2026-02-16T12:00:00Z"
content_hash: "sha256:..."
word_count: 350
---
```
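A frontmatter block like the one above could be assembled as follows. `make_frontmatter` is a hypothetical sketch mirroring the field set shown, not the skill's actual writer:

```python
import hashlib
from datetime import datetime, timezone

def make_frontmatter(title: str, source_url: str, domain: str,
                     parser: str, body: str) -> str:
    """Build a YAML frontmatter block matching the fields shown above."""
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return "\n".join([
        "---",
        f'title: "{title}"',
        f'source_url: "{source_url}"',
        f'domain: "{domain}"',
        f'parser: "{parser}"',
        f'ingested_at: "{stamp}"',
        f'content_hash: "sha256:{digest}"',   # dedupe key over the body text
        f"word_count: {len(body.split())}",   # whitespace-split word count
        "---",
    ])
```

The content hash makes re-ingesting the same page detectable, and the UTC timestamp matches the `Z`-suffixed ISO format in the example.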
| Variable | Default | Description |
|----------|---------|-------------|
| `DEEPREEDER_MEMORY_PATH` | `../../memory/inbox/` | Where ingested content is saved |
| `DEEPREEDER_LOG_LEVEL` | `INFO` | Logging verbosity |
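These settings can be resolved with standard environment lookups. The defaults below copy the table; the loader itself is a sketch, since the skill's real configuration code isn't shown here:

```python
import os

# Defaults copied from the configuration table (sketch, not the skill's code).
DEFAULTS = {
    "DEEPREEDER_MEMORY_PATH": "../../memory/inbox/",  # where ingested .md files land
    "DEEPREEDER_LOG_LEVEL": "INFO",                   # logging verbosity
}

def load_config(env=os.environ) -> dict:
    """Resolve each setting from the environment, falling back to the defaults."""
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}
```

Passing an explicit mapping instead of `os.environ` keeps the loader easy to test.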
```
URL detected → is Twitter/X? → FxTwitter API → Nitter fallback
             → is Reddit?    → .json suffix API
             → is YouTube?   → youtube-transcript-api
             → otherwise     → Trafilatura (generic)
```
Triggers automatically when any message contains `https://` or `http://`.
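Putting the trigger and the routing chain together, a minimal sketch (hypothetical names; the host checks follow the decision chain above, not DeepReader's actual dispatcher):

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")  # the https:// / http:// trigger

def route(url: str) -> str:
    """Pick a parser name from the URL's host."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in ("x.com", "twitter.com"):
        return "twitter"   # FxTwitter API, Nitter fallback
    if host.endswith("reddit.com"):
        return "reddit"    # .json suffix API
    if host in ("youtube.com", "youtu.be"):
        return "youtube"   # youtube-transcript-api
    return "generic"       # Trafilatura + BeautifulSoup

def detect_and_route(message: str):
    """Find every URL in a message and pair it with its parser."""
    return [(u, route(u)) for u in URL_RE.findall(message)]
```

This also shows why multiple URLs in one message work: each match is routed independently.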
Generated Mar 1, 2026
Marketing teams can use DeepReader to automatically ingest and analyze social media mentions from X (Twitter) and Reddit discussions about their brand. The clean Markdown output with metadata allows for easy sentiment analysis, trend tracking, and competitive intelligence without API key costs.
Researchers and students can batch-process academic articles, documentation pages, and YouTube lecture transcripts into structured Markdown. The automatic URL detection and parsing help compile literature reviews and reference materials efficiently from diverse web sources.
Media companies and content creators can use DeepReader to pull articles, Reddit discussions, and social media posts into a unified Markdown format. This enables automated newsletter creation, content summarization, and multi-source reporting without manual copy-pasting.
Support teams can ingest relevant documentation, forum posts (Reddit), and tutorial videos (YouTube transcripts) to build comprehensive internal knowledge bases. The YAML frontmatter helps organize content by source and date for easy reference during customer interactions.
Business analysts can monitor competitors' social media announcements (X), Reddit community discussions, and blog updates through automated ingestion. The zero-API-key requirement makes it cost-effective for continuous monitoring across multiple sources.
Offer DeepReader as a cloud-based service with enhanced features like API access, team collaboration tools, and advanced analytics dashboards. Charge monthly subscriptions based on usage volume or number of users, targeting businesses needing automated content ingestion.
License DeepReader's core technology to enterprises for embedding into their existing platforms (CMS, CRM, analytics tools). Provide custom integration support, SLA guarantees, and white-labeling options for large organizations with specific workflow needs.
Offer the basic URL-to-Markdown functionality free to individual users, then monetize through premium features such as higher batch-processing limits, historical data access, custom parser development, and priority support. This builds a user base while upselling power users.
💬 Integration Tip
Trigger DeepReader automatically by including URLs in agent messages, and configure the memory path to organize ingested content directly into your agent's knowledge base for immediate recall.
Related skills:

- Fetch and read transcripts from YouTube videos. Use when you need to summarize a video, answer questions about its content, or extract information from it.
- Fetch and summarize YouTube video transcripts. Use when asked to summarize, transcribe, or extract content from YouTube videos. Handles transcript fetching via residential IP proxy to bypass YouTube's cloud IP blocks.
- Browse, search, post, and moderate Reddit. Read-only works without auth; posting/moderation requires OAuth setup.
- Interact with Twitter/X — read tweets, search, post, like, retweet, and manage your timeline.
- LinkedIn automation via browser relay or cookies for messaging, profile viewing, and network actions.
- Search YouTube videos, get channel info, fetch video details and transcripts using the YouTube Data API v3 via MCP server or yt-dlp fallback.