xintX Intelligence CLI — search, analyze, and engage on X/Twitter from the terminal. Use when: (1) user says "x research", "search x for", "search twitter for",...
Install via ClawdBot CLI:
clawdbot install 0xNyk/xint

General-purpose agentic research over X/Twitter. Decompose any research question into targeted searches, iteratively refine, follow threads, deep-dive linked content, and synthesize into a sourced briefing.
For X API details (endpoints, operators, response format): read references/x-api.md.
This skill requires sensitive credentials. Follow these guidelines:
- Store credentials in `.env`
- The `data/` directory holds cache, exports, snapshots, and OAuth tokens
- `watch` and `stream` can send data to webhook endpoints
- Webhook URLs must be `https://` (`http://` is accepted only for localhost/loopback)
- Restrict webhook destinations with `XINT_WEBHOOK_ALLOWED_HOSTS=hooks.example.com,*.internal.example`
- SSE transport (`mcp --sse`) is opt-in and disabled by default
- Webhooks (`--webhook`) are opt-in and disabled by default
- Avoid `curl | bash` when possible
- `bun run xint.ts mcp` starts a local MCP server exposing xint commands as tools; it stays local unless `--sse` is explicitly enabled, and access can be scoped with `--policy read_only|engagement|moderation` and budget guardrails

All commands run from the project directory:
# Set your environment variables
export X_BEARER_TOKEN="your-token"
bun run xint.ts search "<query>" [options]
Options:
- `--sort likes|impressions|retweets|recent` — sort order (default: likes)
- `--since 1h|3h|12h|1d|7d` — time filter (default: last 7 days). Also accepts minutes (`30m`) or ISO timestamps.
- `--min-likes N` — filter by minimum likes
- `--min-impressions N` — filter by minimum impressions
- `--pages N` — pages to fetch, 1-5 (default: 1, 100 tweets/page)
- `--limit N` — max results to display (default: 15)
- `--quick` — quick mode: 1 page, max 10 results, auto noise filter, 1hr cache, cost summary
- `--from` — shorthand for `from:username` in query
- `--quality` — filter low-engagement tweets (>=10 likes, post-hoc)
- `--no-replies` — exclude replies
- `--sentiment` — AI-powered per-tweet sentiment analysis (via Grok). Shows positive/negative/neutral/mixed with scores.
- `--save` — save results to `data/exports/`
- `--json` — raw JSON output
- `--jsonl` — one JSON object per line (optimized for Unix pipes: `| jq`, `| tee`)
- `--csv` — CSV output for spreadsheet analysis
- `--markdown` — markdown output for research docs

Auto-adds `-is:retweet` unless the query already includes it. All searches display estimated API cost.
Examples:
bun run xint.ts search "AI agents" --sort likes --limit 10
bun run xint.ts search "from:elonmusk" --sort recent
bun run xint.ts search "(opus 4.6 OR claude) trading" --pages 2 --save
bun run xint.ts search "$BTC (revenue OR fees)" --min-likes 5
bun run xint.ts search "AI agents" --quick
bun run xint.ts search "AI agents" --quality --quick
bun run xint.ts search "solana memecoins" --sentiment --limit 20
bun run xint.ts search "startup funding" --csv > funding.csv
bun run xint.ts search "AI" --jsonl | jq 'select(.metrics.likes > 100)'
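The `--jsonl | jq` pipe above assumes each output line is a JSON object with a `metrics.likes` field; this sketch runs the same `jq` filter on synthetic lines to show the shape of the result (the field names are taken from the example, not verified against the tool's actual output):

```shell
# Filter synthetic JSONL the way the `search --jsonl | jq` example does.
# The metrics.likes key is assumed from the example above.
printf '%s\n' \
  '{"metrics":{"likes":150},"text":"popular tweet"}' \
  '{"metrics":{"likes":5},"text":"quiet tweet"}' \
  | jq -c 'select(.metrics.likes > 100)'
# → {"metrics":{"likes":150},"text":"popular tweet"}
```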
bun run xint.ts profile <username> [--count N] [--replies] [--json]
Fetches recent tweets from a specific user (excludes replies by default).
bun run xint.ts thread <tweet_id> [--pages N]
Fetches full conversation thread by root tweet ID.
bun run xint.ts tweet <tweet_id> [--json]
bun run xint.ts article <url> [--json] [--full] [--ai <text>]
Fetches and extracts full article content from any URL using xAI's web_search tool (Grok reads the page). Returns clean text with title, author, date, and word count. Requires XAI_API_KEY.
Also supports X tweet URLs — automatically extracts the linked article from the tweet and fetches it.
Options:
- `--json` — structured JSON output (title, content, author, published, wordCount, ttr)
- `--full` — return full article text without truncation (default truncates to ~5000 chars)
- `--model` — Grok model (default: grok-4)
- `--ai` — analyze article with Grok AI (passes content to the `analyze` command)

Examples:
# Fetch article from URL
bun run xint.ts article https://example.com/blog/post
# Auto-extract article from X tweet URL and analyze
bun run xint.ts article "https://x.com/user/status/123456789" --ai "Summarize key takeaways"
# Fetch + analyze with AI
bun run xint.ts article https://techcrunch.com/article --ai "What are the main points?"
# Full content without truncation
bun run xint.ts article https://blog.example.com/deep-dive --full
Agent usage: When search results include tweets with article links, use article to read the full content. Search results now include article titles and descriptions from the X API (shown as 📰 lines), so you can decide which articles are worth a full read. Prioritize articles that:
bun run xint.ts bookmarks [options] # List bookmarked tweets
bun run xint.ts bookmark <tweet_id> # Bookmark a tweet
bun run xint.ts unbookmark <tweet_id> # Remove a bookmark
Bookmark list options:
- `--limit N` — max bookmarks to display (default: 20)
- `--since` — filter by recency (1h, 1d, 7d, etc.)
- `--query` — client-side text filter
- `--json` — raw JSON output
- `--markdown` — markdown output
- `--save` — save to `data/exports/`
- `--no-cache` — skip cache

Requires OAuth. Run `auth setup` first.
bun run xint.ts likes [options] # List your liked tweets
bun run xint.ts like <tweet_id> # Like a tweet
bun run xint.ts unlike <tweet_id> # Unlike a tweet
Likes list options: Same as bookmarks (--limit, --since, --query, --json, --no-cache).
Requires OAuth with like.read and like.write scopes.
bun run xint.ts following [username] [--limit N] [--json]
Lists accounts you (or another user) follow. Defaults to the authenticated user.
Requires OAuth with follows.read scope.
bun run xint.ts trends [location] [options]
Fetches trending topics. Tries the official X API trends endpoint first; falls back to search-based hashtag frequency estimation if unavailable.
Options:
- `[location]` — location name or WOEID number (default: worldwide)
- `--limit N` — number of trends to display (default: 20)
- `--json` — raw JSON output
- `--no-cache` — bypass the 15-minute cache
- `--locations` — list all known location names

Examples:
bun run xint.ts trends # Worldwide
bun run xint.ts trends us --limit 10 # US top 10
bun run xint.ts trends japan --json # Japan, JSON output
bun run xint.ts trends --locations # List all locations
bun run xint.ts analyze "<query>" # Ask Grok a question
bun run xint.ts analyze --tweets <file> # Analyze tweets from JSON file
bun run xint.ts search "topic" --json | bun run xint.ts analyze --pipe # Pipe search results
Uses xAI's Grok API (OpenAI-compatible). Requires XAI_API_KEY in env or .env.
Options:
- `--model` — grok-3, grok-3-mini (default), grok-2
- `--tweets` — path to JSON file containing tweets
- `--pipe` — read tweet JSON from stdin

Examples:
bun run xint.ts analyze "What are the top AI agent frameworks right now?"
bun run xint.ts search "AI agents" --json | bun run xint.ts analyze --pipe "Which show product launches?"
bun run xint.ts analyze --model grok-3 "Deep analysis of crypto market sentiment"
For “recent sentiment / what X is saying” without using cookies/GraphQL, use xAI’s hosted x_search tool.
Script:
python3 scripts/xai_x_search_scan.py --help
Store first-party artifacts (reports, logs) in xAI Collections and semantic-search them later.
Script:
python3 scripts/xai_collections.py --help
Env:
- `XAI_API_KEY` (api.x.ai): file upload + search
- `XAI_MANAGEMENT_API_KEY` (management-api.x.ai): collections management + attaching documents

Notes:
- Use `--dry-run` when wiring new cron jobs.

bun run xint.ts watch "<query>" [options]
Polls a search query on an interval, shows only new tweets. Great for monitoring topics during catalysts, tracking mentions, or feeding live data into downstream tools.
Options:
- `--interval` / `-i` — poll interval: 30s, 1m, 5m, 15m (default: 5m)
- `--webhook` — POST new tweets as JSON to this URL (`https://` required for remote hosts)
- `--jsonl` — output as JSONL instead of formatted text (for piping to `tee`, `jq`, etc.)
- `--quiet` — suppress per-poll headers (just show tweets)
- `--limit N` — max tweets to show per poll
- `--sort likes|impressions|retweets|recent` — sort order

Press Ctrl+C to stop — prints session stats (duration, total polls, new tweets found, total cost).
Examples:
bun run xint.ts watch "solana memecoins" --interval 5m
bun run xint.ts watch "@vitalikbuterin" --interval 1m
bun run xint.ts watch "AI agents" -i 30s --webhook https://hooks.example.com/ingest
bun run xint.ts watch "breaking news" --jsonl | tee -a feed.jsonl
Agent usage: Use watch when you need continuous monitoring of a topic. For one-off checks, use search instead. The watch command auto-stops if the daily budget is exceeded.
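As a sketch of "feeding live data into downstream tools", the JSONL feed can be aggregated with standard Unix tools. The `author` field name here is an assumption for illustration — inspect the keys in your actual feed first:

```shell
# Count tweets per author from a watch --jsonl feed (simulated here with printf).
# "author" is a hypothetical field name; check your feed's real keys before relying on it.
printf '%s\n' '{"author":"alice"}' '{"author":"bob"}' '{"author":"alice"}' \
  | jq -r '.author' | sort | uniq -c | sort -rn
```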
bun run xint.ts diff <@username> [options]
Tracks follower/following changes over time using local snapshots. First run creates a baseline; subsequent runs show who followed/unfollowed since last check.
Options:
- `--following` — track who the user follows (instead of their followers)
- `--history` — view all saved snapshots for this user
- `--json` — structured JSON output
- `--pages N` — pages of followers to fetch (default: 5, 1000 per page)

Requires OAuth (`auth setup` first). Snapshots stored in `data/snapshots/`.
Examples:
bun run xint.ts diff @vitalikbuterin # First run: create snapshot
bun run xint.ts diff @vitalikbuterin # Later: show changes
bun run xint.ts diff @0xNyk --following # Track who you follow
bun run xint.ts diff @solana --history # View snapshot history
Agent usage: Use diff to detect notable follower changes for monitored accounts. Combine with watch for comprehensive account monitoring. Run periodically (e.g., daily) to build a history of follower changes.
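The "run periodically" suggestion could be wired up as a cron entry; this is a sketch only — the checkout path, schedule, and account are assumptions, and OAuth must already be configured:

```shell
# Hypothetical crontab entry: daily 09:00 follower diff, appended to a JSONL log.
# Assumes the project lives at ~/xint; adjust the path and account to taste.
0 9 * * * cd ~/xint && bun run xint.ts diff @vitalikbuterin --json >> data/exports/diff-log.jsonl
```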
bun run xint.ts report "<topic>" [options]
Generates comprehensive markdown intelligence reports combining search results, optional sentiment analysis, and AI-powered summary via Grok.
Options:
- `--sentiment` — include per-tweet sentiment analysis
- `--accounts @user1,@user2` — include per-account activity sections
- `--model` — Grok model for AI summary (default: grok-3-mini)
- `--pages N` — search pages to fetch (default: 2)
- `--save` — save report to `data/exports/`

Examples:
bun run xint.ts report "AI agents"
bun run xint.ts report "solana" --sentiment --accounts @aaboronkov,@rajgokal --save
bun run xint.ts report "crypto market" --model grok-3 --sentiment --save
Agent usage: Use report when the user wants a comprehensive briefing on a topic. This is the highest-level command — it runs search, sentiment, and analysis in one pass and produces a structured markdown report. For quick pulse checks, use search --quick instead.
bun run xint.ts costs # Today's costs
bun run xint.ts costs week # Last 7 days
bun run xint.ts costs month # Last 30 days
bun run xint.ts costs all # All time
bun run xint.ts costs budget # Show budget info
bun run xint.ts costs budget set 2.00 # Set daily limit to $2
bun run xint.ts costs reset # Reset today's data
Tracks per-call API costs with daily aggregates and configurable budget limits.
bun run xint.ts watchlist # Show all
bun run xint.ts watchlist add <user> [note] # Add account
bun run xint.ts watchlist remove <user> # Remove account
bun run xint.ts watchlist check # Check recent from all
bun run xint.ts auth setup [--manual] # Set up OAuth 2.0 (PKCE)
bun run xint.ts auth status # Check token status
bun run xint.ts auth refresh # Manually refresh tokens
Required scopes: bookmark.read bookmark.write tweet.read users.read like.read like.write follows.read offline.access
bun run xint.ts cache clear # Clear all cached results
15-minute TTL. Avoids re-fetching identical queries.
When doing deep research (not just a quick search), follow this loop:
Turn the research question into 3-5 keyword queries using X search operators:
- `from:` specific known experts
- `(broken OR bug OR issue OR migration)`
- `(shipped OR love OR fast OR benchmark)`
- `url:github.com` or `url:` specific domains
- `-is:retweet` (auto-added); add `-is:reply` if needed

Run each query via CLI. After each, assess:
- Are there accounts worth querying with `from:` specifically?
- Are there threads worth expanding with the `thread` command?

When a tweet has high engagement or is a thread starter:
bun run xint.ts thread <tweet_id>
Search results now include article titles and descriptions from the X API (shown as 📰 in output). Use these to decide which links are worth a full read, then fetch with xint article:
bun run xint.ts article <url> # terminal display
bun run xint.ts article <url> --json # structured output
bun run xint.ts article <url> --full # no truncation
Prioritize links that:
For complex research, pipe search results into Grok for synthesis:
bun run xint.ts search "topic" --json | bun run xint.ts analyze --pipe "Summarize themes and sentiment"
Group findings by theme, not by query:
### [Theme/Finding Title]
[1-2 sentence summary]
- @username: "[key quote]" (NL, NI) [Tweet](url)
- @username2: "[another perspective]" (NL, NI) [Tweet](url)
Resources shared:
- [Resource title](url) — [what it is]
Use --save flag to save to data/exports/.
All API calls are tracked in data/api-costs.json. The budget system warns when approaching limits but does not block calls (passive).
X API v2 pay-per-use rates:
Default daily budget: $1.00 (adjustable via `costs budget set <amount>`).
- add `-is:reply`, use `--sort likes`, narrow keywords
- add `OR` terms, remove restrictive operators
- `-$ -airdrop -giveaway -whitelist`
- `from:` or `--min-likes 50`
- `has:links`

xint/
├── SKILL.md (this file — agent instructions)
├── xint.ts (CLI entry point)
├── lib/
│ ├── api.ts (X API wrapper: search, thread, profile, tweet)
│ ├── article.ts (full article content fetcher via xAI web_search)
│ ├── bookmarks.ts (bookmark read — OAuth)
│ ├── cache.ts (file-based cache, 15min TTL)
│ ├── costs.ts (API cost tracking & budget)
│ ├── engagement.ts (likes, like/unlike, following, bookmark write — OAuth)
│ ├── followers.ts (follower/following tracking + snapshot diffs)
│ ├── format.ts (terminal, markdown, CSV, JSONL formatters)
│ ├── grok.ts (xAI Grok analysis integration)
│ ├── oauth.ts (OAuth 2.0 PKCE auth + token refresh)
│ ├── report.ts (intelligence report generation)
│ ├── sentiment.ts (AI-powered sentiment analysis via Grok)
│ ├── trends.ts (trending topics — API + search fallback)
│ └── watch.ts (real-time monitoring with polling)
├── data/
│ ├── api-costs.json (cost tracking data)
│ ├── oauth-tokens.json (OAuth tokens — chmod 600)
│ ├── watchlist.json (accounts to monitor)
│ ├── exports/ (saved research)
│ ├── snapshots/ (follower/following snapshots for diff)
│ └── cache/ (auto-managed)
└── references/
└── x-api.md (X API endpoint reference)
Generated Mar 1, 2026
Startups can use this skill to monitor X for discussions about new library releases, API changes, or product launches, helping them stay ahead of industry trends. By searching with filters like --since and --min-likes, they can gather real-time feedback from developers and experts, exporting data as CSV for analysis.
Marketing teams can track what people are saying about their brand or competitors on X, using the --sentiment option for AI-powered analysis to gauge public opinion. This enables real-time monitoring with commands like watch, allowing quick responses to positive or negative discourse.
Traders and analysts can search for topics like cryptocurrency or stock trends on X, using queries with operators to filter by time and engagement. Exporting results as JSONL allows for piped analysis with tools like jq to identify influential tweets and market sentiment.
Researchers can analyze X discourse around cultural events or industry drama, using the skill to collect data with --pages and --quality filters for high-engagement content. Exporting to Markdown facilitates the creation of sourced briefings for studies or reports.
Developer communities can use this skill to find what experts think about specific topics, such as new tools or frameworks, by searching with --from and --no-replies. This helps in curating content for newsletters or forums, with cost tracking to manage API usage.
Offer a subscription-based service that leverages this skill to provide clients with automated X monitoring, sentiment reports, and trend analysis. Revenue can be generated through tiered plans based on search frequency, data exports, and AI analysis features.
Provide consulting services using this skill to conduct in-depth X research for clients, helping them understand public perception and engage with their audience. Revenue comes from project-based fees for reports, real-time alerts, and strategic recommendations.
Collect and anonymize X data using this skill's export capabilities, then resell aggregated insights or integrate the data into third-party platforms. Revenue is generated through licensing fees for datasets or API access to enriched social media information.
💬 Integration Tip
Set up environment variables securely in a .env file and use the --quick option for initial testing to minimize costs and cache results.
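A minimal `.env` sketch using the variable names this document mentions (values are placeholders; `XINT_WEBHOOK_ALLOWED_HOSTS` is only needed if you use webhooks):

```shell
# .env — placeholder values; never commit this file
X_BEARER_TOKEN=your-x-api-bearer-token
XAI_API_KEY=your-xai-api-key
# optional: restrict webhook destinations
XINT_WEBHOOK_ALLOWED_HOSTS=hooks.example.com
```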