llm-speedtest
Ping major LLM providers in parallel and compare real API latency. Run with /ping.
Install via ClawdBot CLI:
clawdbot install chapati23/llm-speedtest

Grade: Fair — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Accesses sensitive credential files or environment variables:
$ANTHROPIC

Calls external URL not in known-safe list:
https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generat

Uses known external API (expected, informational):
api.anthropic.com

AI Analysis
The skill's external API calls are consistent with its stated purpose of measuring LLM provider latency and use only minimal, non-sensitive prompts ('hi'). The credential access is for legitimate API keys required for the test, not for harvesting. The primary risk is the dependency on an external script (`scripts/ping.sh`) which could be modified to behave maliciously, but the reviewed definition shows no hidden instructions or obfuscation.
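The analysis above describes scripts/ping.sh as sending a minimal, non-sensitive prompt ('hi') to each provider and timing the round trip. A rough shell sketch of that idea follows; the endpoint paths, headers, request body, and environment variable names are assumptions for illustration, not the skill's actual script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a parallel latency probe in the spirit of scripts/ping.sh.
# Endpoints, headers, and env var names are assumptions, not the skill's code.
set -u

# ping_provider NAME URL AUTH_HEADER: time one request end-to-end and print it.
ping_provider() {
  local name="$1" url="$2" auth="$3" t
  # curl's %{time_total} write-out variable reports wall-clock seconds
  # for the whole request, which is what a latency comparison cares about.
  t=$(curl -sS -o /dev/null -w '%{time_total}' \
        -H "$auth" -H 'content-type: application/json' \
        --data '{"messages":[{"role":"user","content":"hi"}]}' \
        "$url" 2>/dev/null) || t="error"
  printf '%-10s %ss\n' "$name" "$t"
}

# Probe only providers whose key is present, all in parallel, then wait.
[ -n "${ANTHROPIC_API_KEY:-}" ] && \
  ping_provider anthropic "https://api.anthropic.com/v1/messages" \
    "x-api-key: $ANTHROPIC_API_KEY" &
[ -n "${GEMINI_API_KEY:-}" ] && \
  ping_provider gemini \
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent" \
    "x-goog-api-key: $GEMINI_API_KEY" &
wait
```

Backgrounding each probe with `&` and collecting them with `wait` is what makes the comparison fair: every provider is measured over the same time window rather than sequentially.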
Generated Mar 22, 2026
Developers and DevOps teams use this skill to regularly monitor the latency of multiple LLM APIs they integrate into their applications. By running periodic speed tests, they can identify performance degradation or outages, ensuring optimal user experience and quickly switching providers if needed.
Startups and small businesses leverage this skill to compare real-time latency across different LLM providers before committing to a service. This helps them choose the fastest and most reliable option for their use case, such as chatbots or content generation, balancing speed with budget constraints.
Instructors and trainers use this skill in workshops to demonstrate the practical differences in latency between major LLM providers. It provides hands-on experience for students learning about API integration, helping them understand performance metrics and how to optimize AI-driven projects.
QA teams in companies developing AI tools employ this skill to test and validate the response times of integrated LLM APIs during development cycles. It ensures that the tools meet performance benchmarks and can handle real-world usage without delays, improving product reliability.
Freelance developers and consultants use this skill to show clients the latency comparisons of different LLM providers when proposing AI solutions. This builds trust by providing data-driven insights, helping clients make informed decisions based on speed and cost for their specific projects.
A subscription-based service offers a dashboard for continuous monitoring and reporting of LLM API latencies across providers. It includes alerts for slowdowns, historical data analysis, and recommendations for optimization, targeting businesses that rely heavily on AI integrations.
A consulting firm uses this skill as a tool to audit and optimize clients' AI API setups. They offer tailored services to improve latency, reduce costs, and ensure reliability, charging per project or on a retainer basis for ongoing support and performance tuning.
Creating and selling online courses, tutorials, or toolkits that include this skill for teaching AI API integration and performance testing. Revenue is generated through course sales, licensing fees for educational institutions, or premium content with advanced features.
💬 Integration Tip
Store API keys securely, for example with pass or environment variables, and adapt the ping.sh script to match your key-management setup so the test runs without manual intervention.
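One way to follow this tip is a small loader that prefers an already-set environment variable and falls back to pass. This is a sketch, not part of the skill: the function name and the pass entry paths (api/anthropic, api/gemini) are hypothetical, and it assumes bash:

```shell
#!/usr/bin/env bash
# Hypothetical key loader; pass entry names are assumptions.

# load_key VAR ENTRY: keep VAR if already set, otherwise try `pass show ENTRY`.
load_key() {
  local var="$1" entry="$2" val
  val="${!var:-}"   # bash indirect expansion: read the variable named by $var
  if [ -z "$val" ] && command -v pass >/dev/null 2>&1; then
    val="$(pass show "$entry" 2>/dev/null || true)"
  fi
  export "$var=$val"
}

load_key ANTHROPIC_API_KEY api/anthropic
load_key GEMINI_API_KEY    api/gemini
```

Because the function leaves an already-exported variable untouched, the same script works both on machines that use pass and in CI environments that inject keys directly.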
Scored Apr 19, 2026
Audited Apr 17, 2026 · audit v1.0