python-executor

Execute Python code in a safe, sandboxed environment via [inference.sh](https://inference.sh), with 100+ pre-installed libraries including NumPy, Pandas, Matplotlib, requests, and BeautifulSoup.

Install via ClawdBot CLI:

clawdbot install okaris/python-executor
curl -fsSL https://cli.inference.sh | sh && infsh login
# Run Python code
infsh app run infsh/python-executor --input '{
"code": "import pandas as pd\nprint(pd.__version__)"
}'
Install note: The install script only detects your OS/architecture, downloads the matching binary from dist.inference.sh, and verifies its SHA-256 checksum. No elevated permissions or background processes. Manual install & verification available.
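The checksum step the installer performs can also be reproduced by hand. A minimal sketch in Python; the file names in the comments are hypothetical placeholders, substitute the binary and published checksum you actually downloaded from dist.inference.sh:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names - compare against the published checksum:
# expected = open("infsh.sha256").read().split()[0]
# assert sha256_of("infsh") == expected, "checksum mismatch - do not run the binary"
```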
| Property | Value |
|----------|-------|
| App ID | infsh/python-executor |
| Environment | Python 3.10, CPU-only |
| RAM | 8GB (default) / 16GB (high_memory) |
| Timeout | 1-300 seconds (default: 30) |
{
"code": "print('Hello World!')",
"timeout": 30,
"capture_output": true,
"working_dir": null
}
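Rather than hand-escaping newlines inside the `code` string, the payload above is easiest to build with `json.dumps`, which handles the escaping for you. A short sketch:

```python
import json

# Multi-line Python source; json.dumps escapes the newlines for us.
code = """
import pandas as pd
print(pd.__version__)
""".strip()

payload = json.dumps({
    "code": code,
    "timeout": 30,
    "capture_output": True,
    "working_dir": None,
})

# Pass this string to: infsh app run infsh/python-executor --input '<payload>'
print(payload)
```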
Pre-installed libraries include:

- requests, httpx, aiohttp - HTTP clients
- beautifulsoup4, lxml - HTML/XML parsing
- selenium, playwright - Browser automation
- scrapy - Web scraping framework
- numpy, pandas, scipy - Numerical computing
- matplotlib, seaborn, plotly - Visualization
- pillow, opencv-python-headless - Image manipulation
- scikit-image, imageio - Image algorithms
- moviepy - Video editing
- av (PyAV), ffmpeg-python - Video processing
- pydub - Audio manipulation
- trimesh, open3d - 3D mesh processing
- numpy-stl, meshio, pyvista - 3D file formats
- svgwrite, cairosvg - SVG creation
- reportlab, pypdf2 - PDF generation

infsh app run infsh/python-executor --input '{
"code": "import requests\nfrom bs4 import BeautifulSoup\n\nresponse = requests.get(\"https://example.com\")\nsoup = BeautifulSoup(response.content, \"html.parser\")\nprint(soup.find(\"title\").text)"
}'
infsh app run infsh/python-executor --input '{
"code": "import pandas as pd\nimport matplotlib.pyplot as plt\n\ndata = {\"name\": [\"Alice\", \"Bob\"], \"sales\": [100, 150]}\ndf = pd.DataFrame(data)\n\nplt.bar(df[\"name\"], df[\"sales\"])\nplt.savefig(\"outputs/chart.png\")\nprint(\"Chart saved!\")"
}'
infsh app run infsh/python-executor --input '{
"code": "from PIL import Image\nimport numpy as np\n\n# Create gradient image\narr = np.linspace(0, 255, 256*256, dtype=np.uint8).reshape(256, 256)\nimg = Image.fromarray(arr, mode=\"L\")\nimg.save(\"outputs/gradient.png\")\nprint(\"Image created!\")"
}'
infsh app run infsh/python-executor --input '{
"code": "from moviepy.editor import ColorClip, TextClip, CompositeVideoClip\n\nclip = ColorClip(size=(640, 480), color=(0, 100, 200), duration=3)\ntxt = TextClip(\"Hello!\", fontsize=70, color=\"white\").set_position(\"center\").set_duration(3)\nvideo = CompositeVideoClip([clip, txt])\nvideo.write_videofile(\"outputs/hello.mp4\", fps=24)\nprint(\"Video created!\")",
"timeout": 120
}'
infsh app run infsh/python-executor --input '{
"code": "import trimesh\n\nsphere = trimesh.creation.icosphere(subdivisions=3, radius=1.0)\nsphere.export(\"outputs/sphere.stl\")\nprint(f\"Created sphere with {len(sphere.vertices)} vertices\")"
}'
infsh app run infsh/python-executor --input '{
"code": "import requests\nimport json\n\nresponse = requests.get(\"https://api.github.com/users/octocat\")\ndata = response.json()\nprint(json.dumps(data, indent=2))"
}'
Files saved to outputs/ are automatically returned:
# These files will be in the response
plt.savefig('outputs/chart.png')
df.to_csv('outputs/data.csv')
video.write_videofile('outputs/video.mp4')
mesh.export('outputs/model.stl')
# Default (8GB RAM)
infsh app run infsh/python-executor --input input.json
# High memory (16GB RAM) for large datasets
infsh app run infsh/python-executor@high_memory --input input.json
Remember to use plt.savefig(), not plt.show() - only files written to outputs/ are returned.

Related skills:

# AI image generation (for ML-based images)
npx skills add inference-sh/skills@ai-image-generation
# AI video generation (for ML-based videos)
npx skills add inference-sh/skills@ai-video-generation
# LLM models (for text generation)
npx skills add inference-sh/skills@llm-models
Generated Mar 1, 2026
An e-commerce platform uses the skill to scrape competitor websites for pricing and product information, then processes the data with Pandas to adjust their own pricing strategies. This automates market research and ensures competitive pricing without manual data entry.
A marketing agency employs the skill to generate short promotional videos with text overlays and custom graphics using MoviePy and Pillow. This allows for rapid creation of social media content without expensive video editing software, saving time and resources.
An engineering firm utilizes the skill to process 3D models from various formats (e.g., STL, OBJ) using trimesh and open3d for quality checks and conversions. This streamlines workflows in manufacturing or architectural design by automating model preparation tasks.
A financial services company uses the skill to pull data from APIs, analyze it with NumPy and Pandas, and generate PDF reports with ReportLab. This automates monthly reporting processes, reducing errors and freeing up analysts for higher-value work.
A healthcare research team applies the skill to process medical images (e.g., resizing, filtering) with OpenCV and Pillow for preliminary analysis. This supports non-GPU tasks like data preprocessing in clinical studies, enhancing efficiency in research pipelines.
Offer the skill as part of a cloud-based platform where users pay a monthly fee for access to Python execution with pre-installed libraries. This model targets small businesses and developers needing scalable, on-demand code execution without infrastructure management.
Provide a free tier with limited executions and basic features, then charge for higher limits, advanced libraries, or priority support. This attracts hobbyists and startups, converting them to paid plans as their needs grow, ensuring a steady revenue stream.
Sell custom licenses to large organizations for on-premises deployment or dedicated instances with enhanced security and support. This model caters to industries like finance or healthcare with strict compliance requirements, offering high-value contracts.
💬 Integration Tip
Start by testing simple scripts via the CLI to ensure compatibility, then integrate into workflows using the input schema for automated tasks like batch processing or API calls.
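One way to wire this into an automated workflow is to build the CLI invocation programmatically. A minimal sketch, assuming the infsh CLI is installed and you are logged in; the helper name is hypothetical:

```python
import json
import shutil
import subprocess

def build_executor_cmd(code: str, timeout: int = 30) -> list:
    """Build the argv list for a python-executor run.

    Returns the command as a list so it can be inspected
    before execution (no shell quoting needed).
    """
    payload = json.dumps({"code": code, "timeout": timeout})
    return ["infsh", "app", "run", "infsh/python-executor", "--input", payload]

cmd = build_executor_cmd("print('batch job done')", timeout=60)

# Only execute if the CLI is actually on PATH.
if shutil.which("infsh"):
    subprocess.run(cmd, check=True)
else:
    print("infsh not found - would run:", " ".join(cmd))
```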