runpod
Manage RunPod GPU cloud instances: create, start, stop, and connect to pods via SSH and API. Use when working with RunPod infrastructure or GPU instances, or when you need SSH access to remote GPU machines. Handles pod lifecycle, SSH proxy connections, filesystem mounting, and API queries. Requires runpodctl (brew install runpod/runpodctl/runpodctl).
Install via ClawdBot CLI:
clawdbot install andrewharp/runpod

Grade: Fair — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Calls external URL not in known-safe list:
https://console.runpod.io/user/settings

Audited Apr 16, 2026 · audit v1.0
Generated Mar 1, 2026
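The core pod lifecycle described above maps to a few runpodctl subcommands. A minimal sketch (subcommand names follow current runpodctl releases; the pod ID is a placeholder):

# List pods on the account
runpodctl get pod

# Stop a running pod; GPU billing stops, attached volume storage persists
runpodctl stop pod abc123xyz

# Start it again later
runpodctl start pod abc123xyz

# Remove the pod entirely when finished
runpodctl remove pod abc123xyz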
Researchers use this skill to quickly spin up GPU instances for training machine learning models, such as large language models or computer vision systems. They can manage pod lifecycles, SSH into instances for debugging, and mount filesystems to access data and code locally, streamlining experimentation.
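For instance, a training pod might be created like this (a sketch: flag names follow runpodctl create pod --help and can vary across versions, additional flags such as disk size may be required, and the GPU type and image are illustrative):

# Create a single-GPU pod from a PyTorch image
runpodctl create pod \
  --name llm-train \
  --imageName "runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04" \
  --gpuType "NVIDIA GeForce RTX 4090" \
  --gpuCount 1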
Content creators leverage GPU pods for rendering high-resolution videos or generating AI art with tools like ComfyUI. They start pods on-demand to handle compute-intensive tasks, access web services via proxy URLs, and transfer files efficiently, reducing local hardware requirements.
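Transfers in either direction can use runpodctl's built-in send/receive pairing, which prints a one-time code (the code below is an example value):

# On the machine that has the file
runpodctl send render-assets.zip
# ...prints a one-time code such as 8338-galileo-collect-fidel

# On the receiving side (pod or workstation)
runpodctl receive 8338-galileo-collect-fidel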
Data scientists utilize this skill to run Jupyter notebooks or Gradio apps on remote GPU instances for data analysis and model deployment. They can create pods with attached volumes for persistent storage, SSH for command-line access, and mount filesystems to sync project files seamlessly.
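One way to mount a pod's filesystem locally is sshfs over a direct TCP SSH port (a sketch: the ssh.runpod.io proxy is exec-only, so SFTP-based mounts need a pod with a public IP and a mapped SSH port; the IP, port, and key filename below are placeholders):

# Mount the pod's /workspace directory locally over SFTP
mkdir -p ~/pod-workspace
sshfs -p 40022 root@203.0.113.10:/workspace ~/pod-workspace \
  -o IdentityFile=~/.runpod/ssh/RunPod-Key-Go

# Unmount when done (fusermount -u ~/pod-workspace on Linux)
umount ~/pod-workspace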
Developers use the skill to set up isolated GPU environments for testing applications that require CUDA or other GPU libraries. They manage pods via API, SSH into them for debugging, and mount filesystems to edit code locally, accelerating development cycles.
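Pod state can also be queried straight from RunPod's GraphQL endpoint (field names follow the public schema at the time of writing; verify against RunPod's API docs):

# List pod IDs, names, and desired status for the account
curl -s "https://api.runpod.io/graphql?api_key=${RUNPOD_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"query": "query { myself { pods { id name desiredStatus } } }"}'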
Educators deploy GPU pods for hands-on training sessions in AI or high-performance computing courses. Students can access pods via SSH or web proxies, work with pre-configured images, and use helper scripts for filesystem access, enabling scalable remote learning environments.
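Proxy SSH needs only the pod ID and a registered key (the key filename here is RunPod's generated default and may differ per account):

# POD_ID is the ID shown by `runpodctl get pod`
ssh ${POD_ID}@ssh.runpod.io -i ~/.runpod/ssh/RunPod-Key-Go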
Businesses offer GPU instances on-demand, charging users based on pod runtime and GPU type. This model leverages RunPod's infrastructure to provide scalable compute resources, with revenue generated from hourly or per-minute billing for pod creation and management.
Companies build platforms that integrate this skill to offer managed services for AI development, including automated pod provisioning, SSH access, and filesystem mounting. Revenue comes from subscription plans or premium support for streamlined GPU resource management and integration.
Consultants use this skill to help clients set up and optimize RunPod environments for specific projects, such as model training or content creation. Revenue is generated through hourly consulting fees, workshops, and custom script development for pod lifecycle and SSH management.
💬 Integration Tip
Ensure runpodctl is installed and configured with an API key, and set up SSH keys in ~/.runpod/ssh/ for seamless pod access and management.
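A one-time setup sketch (runpodctl config stores the key locally; the key filename matches RunPod's generated default and is an assumption here):

# Store the API key created at https://console.runpod.io/user/settings
runpodctl config --apiKey "$RUNPOD_API_KEY"

# Generate a keypair for pod access if one doesn't exist yet
mkdir -p ~/.runpod/ssh
ssh-keygen -t ed25519 -f ~/.runpod/ssh/RunPod-Key-Go -N ""
# ...then add the public key under SSH settings in the RunPod console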
Scored Apr 19, 2026