last30days-skill

Research a topic from the last 30 days. Also triggered by 'last30'. Sources: Reddit, X, YouTube, web. Become an expert and write copy-paste-ready prompts.

Install via ClawdBot CLI:

clawdbot install johnsonDevops/last30days-skill

Research ANY topic across Reddit, X, YouTube, and the web. Surface what people are actually discussing, recommending, and debating right now.
Before doing anything, parse the user's input.

Common patterns:

- [topic] for [tool] → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- [topic] prompts for [tool] → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- [topic] → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK

IMPORTANT: Do NOT ask about the target tool before research.
Store these variables:

- TOPIC = [extracted topic]
- TARGET_TOOL = [extracted tool, or "unknown" if not specified]
- QUERY_TYPE = [RECOMMENDATIONS | NEWS | PROMPTING | HOW-TO | GENERAL]

DISPLAY your parsing to the user. Before running any tools, output:
I'll research {TOPIC} across Reddit, X, YouTube, and the web to find what's been discussed in the last 30 days.
Parsed intent:
- TOPIC = {TOPIC}
- TARGET_TOOL = {TARGET_TOOL or "unknown"}
- QUERY_TYPE = {QUERY_TYPE}
Research typically takes 2-8 minutes (niche topics take longer). Starting now.
If TARGET_TOOL is known, mention it in the intro: "...to find {QUERY_TYPE}-style content for use in {TARGET_TOOL}."
This text MUST appear before you call any tools. It confirms to the user that you understood their request.
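As a rough illustration, the parsing step above could be sketched with a few keyword heuristics. This is a sketch only: the keyword lists, regex, function name, and tie-breaking order are assumptions for illustration, not part of the skill.

```python
import re

def parse_intent(user_input: str) -> dict:
    """Heuristically split a request into TOPIC / TARGET_TOOL / QUERY_TYPE."""
    text = user_input.strip()

    # TARGET_TOOL: look for a trailing "for <Capitalized Tool>" clause
    tool = "unknown"
    m = re.search(r"\bfor\s+([A-Z][\w .-]*)$", text)
    if m:
        tool = m.group(1).strip()
        text = text[:m.start()].strip()

    # QUERY_TYPE: simple keyword buckets, checked in priority order
    lowered = text.lower()
    if any(k in lowered for k in ("best ", "top ", "should i use")):
        query_type = "RECOMMENDATIONS"
    elif any(k in lowered for k in ("news", "what's happening", "whats happening")):
        query_type = "NEWS"
    elif "prompt" in lowered:
        query_type = "PROMPTING"
    elif "how to" in lowered or "how do i" in lowered:
        query_type = "HOW-TO"
    else:
        query_type = "GENERAL"

    return {"TOPIC": text, "TARGET_TOOL": tool, "QUERY_TYPE": query_type}
```

Applied to the patterns above, "UI design prompts for Midjourney" yields TOPIC "UI design prompts", TARGET_TOOL "Midjourney", QUERY_TYPE PROMPTING, while "iOS design mockups" leaves the tool as "unknown".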
Step 1: Run the research script (FOREGROUND; do NOT background this)
CRITICAL: Run this command in the FOREGROUND with a 5-minute timeout. Do NOT use run_in_background. The full output contains Reddit, X, AND YouTube data that you need to read completely.
```shell
# Find skill root; works in repo checkout, Claude Code, or Codex install
for dir in \
  "." \
  "${CLAUDE_PLUGIN_ROOT:-}" \
  "$HOME/.claude/skills/last30days" \
  "$HOME/.agents/skills/last30days" \
  "$HOME/.codex/skills/last30days"; do
  [ -n "$dir" ] && [ -f "$dir/scripts/last30days.py" ] && SKILL_ROOT="$dir" && break
done
if [ -z "${SKILL_ROOT:-}" ]; then
  echo "ERROR: Could not find scripts/last30days.py" >&2
  exit 1
fi
python3 "${SKILL_ROOT}/scripts/last30days.py" "$ARGUMENTS" --emit=compact
```
Use a timeout of 300000 (5 minutes) on the Bash call. The script typically takes 1-3 minutes.
The script will automatically:
Read the ENTIRE output. It contains THREE data sections in this order: Reddit items, X items, and YouTube items. If you miss the YouTube section, you will produce incomplete stats.
YouTube items in the output look like: {video_id} (score:N) {channel_name} [N views, N likes] followed by a title, URL, and optional transcript snippet. Count them and include them in your synthesis and stats block.
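A minimal sketch of extracting those fields from a YouTube item line follows. The exact spacing and field order are assumed from the example format above, and the sample channel name in the test is hypothetical.

```python
import re
from typing import Optional

# Assumed line shape: "{video_id} (score:N) {channel_name} [N views, N likes]"
YT_LINE = re.compile(
    r"^(?P<video_id>\S+)\s+\(score:(?P<score>\d+)\)\s+"
    r"(?P<channel>.+?)\s+\[(?P<views>\d+) views, (?P<likes>\d+) likes\]$"
)

def parse_yt_line(line: str) -> Optional[dict]:
    """Parse one YouTube item line into its fields; None if it doesn't match."""
    m = YT_LINE.match(line.strip())
    if m is None:
        return None
    fields = m.groupdict()
    for key in ("score", "views", "likes"):
        fields[key] = int(fields[key])
    return fields
```

Counting the non-None results over the output gives the video count for the stats block.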
After the script finishes, do WebSearch to supplement with blogs, tutorials, and news.
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):

- best {TOPIC} recommendations
- {TOPIC} list examples
- most popular {TOPIC}

If NEWS ("what's happening with X", "X news"):

- {TOPIC} news 2026
- {TOPIC} announcement update

If PROMPTING ("X prompts", "prompting for X"):

- {TOPIC} prompts examples 2026
- {TOPIC} techniques tips

If GENERAL (default):

- {TOPIC} 2026
- {TOPIC} discussion

For ALL query types:
Options (passed through from user's command):

- --days=N → Look back N days instead of 30 (e.g., --days=7 for weekly roundup)
- --quick → Faster, fewer sources (8-12 each)
- --deep → Comprehensive (50-70 Reddit, 40-60 X)

After all searches complete, internally synthesize (don't display stats yet):
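As a sketch, the --days/--quick/--deep passthrough options could be handled by a tiny flag parser like this (a hypothetical wrapper for illustration, not the bundled script):

```shell
# Sketch: map the passthrough flags onto defaults before invoking the script
parse_flags() {
  days=30
  mode=default
  for arg in "$@"; do
    case "$arg" in
      --days=*) days="${arg#--days=}" ;;  # strip the "--days=" prefix
      --quick)  mode=quick ;;
      --deep)   mode=deep ;;
    esac
  done
  echo "days=$days mode=$mode"
}
```

So `parse_flags --days=7 --deep` resolves to a 7-day lookback in deep mode, while no flags keeps the 30-day default.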
The Judge Agent must:
Do NOT display stats here - they come at the end, right before the invitation.
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
ANTI-PATTERN TO AVOID: If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
BAD synthesis for "best Claude Code skills":
"Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
"Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
Identify from the ACTUAL RESEARCH OUTPUT:
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned with sources:
📊 Most mentioned:
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle1, @handle2, r/sub, blog.com
[Tool Name] - {n}x mentions
Use Case: [what it does]
Sources: @handle3, r/sub2, Complex
Notable mentions: [other specific things with 1-2 mentions]
CRITICAL for RECOMMENDATIONS:
If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
CITATION RULE: Cite sources sparingly to prove research is real.
CITATION PRIORITY (most to least preferred):
The tool's value is surfacing what PEOPLE are saying, not what journalists wrote.
When both a web article and an X post cover the same fact, cite the X post.
URL FORMATTING: NEVER paste raw URLs in the output.
Use the publication name, not the URL. The user doesn't need links; they need clean, readable text.
BAD: "His album is set for March 20 (per Rolling Stone; Billboard; Complex)."
GOOD: "His album BULLY drops March 20; fans on X are split on the tracklist, per @honest30bgfan_"
GOOD: "Ye's apology got massive traction on r/hiphopheads"
OK (web, only when Reddit/X don't have it): "The Hellwatt Festival runs July 4-18 at RCF Arena, per Billboard"
Lead with people, not publications. Start each topic with what Reddit/X
users are saying/feeling, then add web context only if needed. The user came
here for the conversation, not the press release.
What I learned:
**{Topic 1}**: [1-2 sentences about what people are saying, per @handle or r/sub]
**{Topic 2}**: [1-2 sentences, per @handle or r/sub]
**{Topic 3}**: [1-2 sentences, per @handle or r/sub]
KEY PATTERNS from the research:
1. [Pattern], per @handle
2. [Pattern], per r/sub
3. [Pattern], per @handle
THEN - Stats (right before invitation):
CRITICAL: Calculate actual totals from the research output.
Sum [X likes, Y rt] from each X post and [X pts, Y cmt] from each Reddit thread.

Copy this EXACTLY, replacing only the {placeholders}:
---
✅ All agents reported back!
├─ 🟠 Reddit: {N} threads • {N} upvotes • {N} comments
├─ 🔵 X: {N} posts • {N} likes • {N} reposts
├─ 🔴 YouTube: {N} videos • {N} views • {N} with transcripts
├─ 🌐 Web: {N} pages (supplementary)
└─ 🗣️ Top voices: @{handle1} ({N} likes), @{handle2} • r/{sub1}, r/{sub2}
---
If Reddit returned 0 threads, write: "├─ 🟠 Reddit: 0 threads (no results this cycle)"
If YouTube returned 0 videos or yt-dlp is not installed, omit the YouTube line entirely.
NEVER use plain text dashes (-) or pipe (|). ALWAYS use ├─ └─ • and the emoji.
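The per-source totals in the block above come from summing the per-item engagement numbers, along the lines of this sketch (the `likes`/`reposts` field names are assumptions about the research items, not a documented schema):

```python
def totals(items: list[dict]) -> dict:
    """Sum engagement numbers across research items for the stats block."""
    out = {"count": len(items), "likes": 0, "reposts": 0}
    for item in items:
        out["likes"] += item.get("likes", 0)
        out["reposts"] += item.get("reposts", 0)
    return out

def x_line(items: list[dict]) -> str:
    """Render the X row of the stats block from the summed totals."""
    t = totals(items)
    return f"├─ 🔵 X: {t['count']} posts • {t['likes']} likes • {t['reposts']} reposts"
```

The same pattern applies to Reddit (points/comments) and YouTube (views/transcripts); the point is that the {N} placeholders are computed sums, never estimates.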
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If you catch yourself projecting your own knowledge instead of the research, rewrite it.
LAST - Invitation (adapt to QUERY_TYPE):
CRITICAL: Every invitation MUST include 2-3 specific example suggestions based on what you ACTUALLY learned from the research. Don't be generic ā show the user you absorbed the content by referencing real things from the results.
If QUERY_TYPE = PROMPTING:
---
I'm now an expert on {TOPIC} for {TARGET_TOOL}. What do you want to make? For example:
- [specific idea based on popular technique from research]
- [specific idea based on trending style/approach from research]
- [specific idea riffing on what people are actually creating]
Just describe your vision and I'll write a prompt you can paste straight into {TARGET_TOOL}.
If QUERY_TYPE = RECOMMENDATIONS:
---
I'm now an expert on {TOPIC}. Want me to go deeper? For example:
- [Compare specific item A vs item B from the results]
- [Explain why item C is trending right now]
- [Help you get started with item D]
If QUERY_TYPE = NEWS:
---
I'm now an expert on {TOPIC}. Some things you could ask:
- [Specific follow-up question about the biggest story]
- [Question about implications of a key development]
- [Question about what might happen next based on current trajectory]
If QUERY_TYPE = GENERAL:
---
I'm now an expert on {TOPIC}. Some things I can help with:
- [Specific question based on the most discussed aspect]
- [Specific creative/practical application of what you learned]
- [Deeper dive into a pattern or debate from the research]
Example invitations (to show the quality bar):
For /last30days nano banana pro prompts for Gemini:
I'm now an expert on Nano Banana Pro for Gemini. What do you want to make? For example:
- Photorealistic product shots with natural lighting (the most requested style right now)
- Logo designs with embedded text (Gemini's new strength per the research)
- Multi-reference style transfer from a mood board
Just describe your vision and I'll write a prompt you can paste straight into Gemini.
For /last30days kanye west (GENERAL):
I'm now an expert on Kanye West. Some things I can help with:
- What's the real story behind the apology letter: genuine or PR move?
- Break down the BULLY tracklist reactions and what fans are expecting
- Compare how Reddit vs X are reacting to the Bianca narrative
For /last30days war in Iran (NEWS):
I'm now an expert on the Iran situation. Some things you could ask:
- What are the realistic escalation scenarios from here?
- How is this playing differently in US vs international media?
- What's the economic impact on oil markets so far?
After showing the stats summary with your invitation, STOP and wait for the user to respond.
Read their response and match the intent:
Only write a prompt when the user wants one. Don't force a prompt on someone who asked "what could happen next with Iran."
When the user wants a prompt, write a single, highly-tailored prompt using your research expertise.
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT.
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
Here's your prompt for {TARGET_TOOL}:
---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS]
---
This uses [brief 1-line explanation of what research insight you applied].
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
After delivering a prompt, offer to write more:
Want another prompt? Just tell me what you're creating next.
For the rest of this conversation, remember:
CRITICAL: After research is complete, you are now an EXPERT on this topic.
When the user asks follow-up questions:
Only do new research if the user explicitly asks about a DIFFERENT topic.
After delivering a prompt, end with:
---
🎓 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} YouTube videos ({sum} views) + {n} web pages
Want another prompt? Just tell me what you're creating next.
What this skill does:
- Calls api.openai.com for Reddit discovery
- Calls api.x.ai for X search
- Runs yt-dlp locally for YouTube search and transcript extraction (no API key, public data)
- Reads reddit.com for engagement metrics

What this skill does NOT do:
- disable-model-invocation: true)

Bundled scripts: scripts/last30days.py (main research engine), scripts/lib/ (search, enrichment, rendering modules), scripts/lib/vendor/bird-search/ (vendored X search client, MIT licensed)
Review scripts before first use to verify behavior.
Generated Mar 1, 2026
Content creators and social media managers use this skill to research trending topics, viral discussions, and audience interests from the past 30 days on platforms like Reddit, X, and YouTube. They generate copy-paste-ready prompts for tools like ChatGPT or Midjourney to create engaging posts, videos, or marketing materials that align with current trends.
Product managers and developers leverage this skill to gather real-time user feedback, identify emerging needs, and analyze competitor discussions from recent web and social media data. They extract recommendations and news to inform feature updates, prioritize development tasks, and stay ahead in fast-moving tech markets like AI tools or project management software.
Researchers and journalists employ this skill to quickly surface current events, debates, and public opinions on topics like climate change or political developments from the last 30 days. They synthesize data from Reddit, X, and web sources to draft reports, articles, or prompts for further investigation, ensuring timely and evidence-based outputs.
Freelancers and consultants use this skill to provide clients with up-to-date insights on niche topics, such as best practices for AI video tools or trends in design mockups. They deliver tailored prompts and recommendations based on recent discussions, enhancing their value proposition for businesses seeking current market intelligence.
Offer monthly subscriptions where clients receive curated reports and prompts generated from last 30 days data on specific topics like tech trends or marketing strategies. Revenue comes from tiered pricing based on report frequency and depth, targeting small businesses and content teams.
Sell custom, copy-paste-ready prompts for AI platforms like Midjourney or ChatGPT, created by researching recent discussions and techniques. Revenue is generated through one-time purchases or bulk packages, appealing to individual creators and agencies needing up-to-date content ideas.
License the skill's underlying technology to other software platforms, such as project management tools or social media schedulers, enabling them to embed trend research features. Revenue streams include licensing fees and usage-based pricing, targeting tech companies seeking to enhance their products with real-time data.
💬 Integration Tip
Ensure the OPENAI_API_KEY and required binaries (node, python3) are set up in the environment before invocation to avoid runtime errors during script execution.
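A small preflight check along these lines can surface missing pieces before invocation (a sketch; extend the binary list to match your install):

```shell
# Preflight sketch: verify API key and required binaries are available
preflight() {
  missing=""
  [ -z "${OPENAI_API_KEY:-}" ] && missing="$missing OPENAI_API_KEY"
  for bin in python3 node; do
    command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
  done
  if [ -n "$missing" ]; then
    echo "Missing:$missing"
    return 1
  fi
  echo "Environment OK"
}
```

Running it once in the shell that will invoke the skill catches the most common failure mode (an unset key) before any research time is spent.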
Fetch and read transcripts from YouTube videos. Use when you need to summarize a video, answer questions about its content, or extract information from it.
Fetch and summarize YouTube video transcripts. Use when asked to summarize, transcribe, or extract content from YouTube videos. Handles transcript fetching via residential IP proxy to bypass YouTube's cloud IP blocks.
Browse, search, post, and moderate Reddit. Read-only works without auth; posting/moderation requires OAuth setup.
Interact with Twitter/X ā read tweets, search, post, like, retweet, and manage your timeline.
LinkedIn automation via browser relay or cookies for messaging, profile viewing, and network actions.
Search YouTube videos, get channel info, fetch video details and transcripts using YouTube Data API v3 via MCP server or yt-dlp fallback.