runpod
Manage RunPod GPU cloud instances: create, start, stop, and connect to pods via SSH and the API. Use when working with RunPod infrastructure or GPU instances, or when you need SSH access to remote GPU machines. Handles pod lifecycle, SSH proxy connections, filesystem mounting, and API queries. Requires runpodctl (brew install runpod/runpodctl/runpodctl).
Install via ClawdBot CLI:
clawdbot install andrewharp/runpod
Manage RunPod GPU cloud instances, SSH connections, and filesystem access.
brew install runpod/runpodctl/runpodctl
runpodctl config --apiKey "your-api-key"
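After the install and config steps above, a quick sanity check can confirm the CLI is usable. This is a sketch: the config file path is an assumption about where runpodctl stores its API key, so adjust it if your install differs.

```shell
# Verify runpodctl is on PATH and an API key file exists.
# The default config path below is an assumption, not documented behavior.
check_runpod_setup() {
  cfg="${1:-$HOME/.runpod/config.toml}"
  command -v runpodctl >/dev/null 2>&1 || { echo "runpodctl not installed" >&2; return 1; }
  [ -f "$cfg" ] || { echo "no API key configured at $cfg" >&2; return 1; }
  echo "runpodctl ready"
}
```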
SSH Key: runpodctl manages SSH keys in ~/.runpod/ssh/:
runpodctl ssh add-key
View and manage keys at: https://console.runpod.io/user/settings
Mount script configuration:
The mount script checks ~/.ssh/runpod_key first, then falls back to runpodctl's default key. Override with:
export RUNPOD_SSH_KEY="$HOME/.runpod/ssh/RunPod-Key"
Host keys are stored separately in ~/.runpod/ssh/known_hosts (isolated from your main SSH config). Uses StrictHostKeyChecking=accept-new to verify hosts on reconnect while allowing new RunPod instances.
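The key-fallback and host-key behavior described above can be sketched as a small helper that builds the SSH option string. The function name is hypothetical; the paths and options come from the text above.

```shell
# Build SSH options matching the mount script's described behavior:
# prefer RUNPOD_SSH_KEY, fall back through ~/.ssh/runpod_key to
# runpodctl's default key, and use an isolated known_hosts file with
# StrictHostKeyChecking=accept-new.
runpod_ssh_opts() {
  key="${RUNPOD_SSH_KEY:-$HOME/.ssh/runpod_key}"
  [ -f "$key" ] || key="$HOME/.runpod/ssh/RunPod-Key"
  printf '%s' "-i $key -o UserKnownHostsFile=$HOME/.runpod/ssh/known_hosts -o StrictHostKeyChecking=accept-new"
}
```

Usage: ssh -p &lt;port&gt; root@&lt;ip&gt; $(runpod_ssh_opts)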
runpodctl get pod # List pods
runpodctl get pod <id> # Get pod details
runpodctl start pod <id> # Start pod
runpodctl stop pod <id> # Stop pod
runpodctl ssh connect <id> # Get SSH command
runpodctl send <file> # Send file to pod
runpodctl receive <code> # Receive file from pod
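The lifecycle commands above compose naturally into small scripts. A common pattern is polling a pod after start before trying to connect; a hedged sketch, since the exact status string and output format of runpodctl get pod are assumptions you should verify against your CLI version:

```shell
# Poll `runpodctl get pod <id>` until the pod reports RUNNING (assumed
# status string). RUNPODCTL lets wrappers or tests substitute the binary.
wait_for_pod() {
  pod_id="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "${RUNPODCTL:-runpodctl}" get pod "$pod_id" | grep -q RUNNING; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}
```

Usage: runpodctl start pod &lt;id&gt; &amp;&amp; wait_for_pod &lt;id&gt; &amp;&amp; runpodctl ssh connect &lt;id&gt;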
# Without volume
runpodctl create pod --name "my-pod" --gpuType "NVIDIA GeForce RTX 4090" --imageName "runpod/pytorch:1.0.2-cu1281-torch280-ubuntu2404"
# With volume (100GB at /workspace)
runpodctl create pod --name "my-pod" --gpuType "NVIDIA GeForce RTX 4090" --imageName "runpod/pytorch:1.0.2-cu1281-torch280-ubuntu2404" --volumeSize 100 --volumePath "/workspace"
Important: When using a volume (--volumeSize), always specify --volumePath too. Without it, pod creation fails with:
error creating container: ... invalid mount config for type "volume": field Target must not be empty
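To avoid hitting that error at creation time, the flag pairing can be checked up front. A sketch of a guard wrapper; the wrapper name is hypothetical and not part of runpodctl:

```shell
# Refuse --volumeSize without --volumePath before calling runpodctl,
# so the "Target must not be empty" error is caught locally.
create_pod_checked() {
  has_size=0; has_path=0
  for arg in "$@"; do
    case "$arg" in
      --volumeSize) has_size=1 ;;
      --volumePath) has_path=1 ;;
    esac
  done
  if [ "$has_size" -eq 1 ] && [ "$has_path" -eq 0 ]; then
    echo "error: --volumeSize requires --volumePath" >&2
    return 1
  fi
  "${RUNPODCTL:-runpodctl}" create pod "$@"
}
```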
# Get SSH command
runpodctl ssh connect <pod_id>
# Connect directly (copy command from above)
ssh -p <port> root@<ip> -i ~/.ssh/runpod_key
./scripts/mount_pod.sh <pod_id> [base_dir]
Mounts pod to ~/pods/ by default.
Access files:
ls ~/pods/<pod_id>/
cat ~/pods/<pod_id>/workspace/my-project/train.py
Unmount:
fusermount -u ~/pods/<pod_id>
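The mount flow above can be sketched in plain sshfs. This is an approximation of what mount_pod.sh is described as doing, not its actual contents: the function names are hypothetical, and host/port discovery (normally parsed from runpodctl ssh connect) is left as explicit parameters.

```shell
# Compute the default mount point: <base_dir>/pods/<pod_id>, base
# defaulting to $HOME as described above.
pod_mount_dir() {
  printf '%s/pods/%s' "${2:-$HOME}" "$1"
}

# Mount the pod's root filesystem via SSHFS (requires sshfs installed).
mount_pod_sketch() {
  pod_id="$1"; host="$2"; port="$3"
  dir="$(pod_mount_dir "$pod_id")"
  mkdir -p "$dir"
  sshfs -p "$port" "root@$host:/" "$dir" \
    -o IdentityFile="${RUNPOD_SSH_KEY:-$HOME/.ssh/runpod_key}" \
    -o StrictHostKeyChecking=accept-new
}
```

Unmount with fusermount -u "$(pod_mount_dir &lt;pod_id&gt;)" as shown above.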
| Script | Purpose |
|--------|---------|
| mount_pod.sh | Mount pod filesystem via SSHFS (no runpodctl equivalent) |
Proxy URLs:
https://<pod_id>-<port>.proxy.runpod.net
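The proxy URL pattern above is mechanical, so a tiny helper can build it for any exposed port. The function name is hypothetical:

```shell
# Build the RunPod proxy URL for a pod's exposed port:
# https://<pod_id>-<port>.proxy.runpod.net
proxy_url() {
  printf 'https://%s-%s.proxy.runpod.net' "$1" "$2"
}
```

Usage: curl -I "$(proxy_url &lt;pod_id&gt; 8888)"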
Common ports: 8888 (Jupyter), 7860 (Gradio), 8188 (ComfyUI)
Generated Mar 1, 2026
Researchers use this skill to quickly spin up GPU instances for training machine learning models, such as large language models or computer vision systems. They can manage pod lifecycles, SSH into instances for debugging, and mount filesystems to access data and code locally, streamlining experimentation.
Content creators leverage GPU pods for rendering high-resolution videos or generating AI art with tools like ComfyUI. They start pods on-demand to handle compute-intensive tasks, access web services via proxy URLs, and transfer files efficiently, reducing local hardware requirements.
Data scientists utilize this skill to run Jupyter notebooks or Gradio apps on remote GPU instances for data analysis and model deployment. They can create pods with attached volumes for persistent storage, SSH for command-line access, and mount filesystems to sync project files seamlessly.
Developers use the skill to set up isolated GPU environments for testing applications that require CUDA or other GPU libraries. They manage pods via API, SSH into them for debugging, and mount filesystems to edit code locally, accelerating development cycles.
Educators deploy GPU pods for hands-on training sessions in AI or high-performance computing courses. Students can access pods via SSH or web proxies, work with pre-configured images, and use helper scripts for filesystem access, enabling scalable remote learning environments.
Businesses offer GPU instances on-demand, charging users based on pod runtime and GPU type usage. This model leverages RunPod's infrastructure to provide scalable compute resources, with revenue generated from hourly or minute-based billing for pod creation and management.
Companies build platforms that integrate this skill to offer managed services for AI development, including automated pod provisioning, SSH access, and filesystem mounting. Revenue comes from subscription plans or premium support for streamlined GPU resource management and integration.
Consultants use this skill to help clients set up and optimize RunPod environments for specific projects, such as model training or content creation. Revenue is generated through hourly consulting fees, workshops, and custom script development for pod lifecycle and SSH management.
💬 Integration Tip
Ensure runpodctl is installed and configured with an API key, and set up SSH keys in ~/.runpod/ssh/ for seamless pod access and management.