senior-ml-engineer
ML engineering skill for productionizing models, building MLOps pipelines, and integrating LLMs. Covers model deployment, feature stores, drift monitoring, R...
Install via ClawdBot CLI:
clawdbot install alirezarezvani/senior-ml-engineer

Grade: Good — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Calls external URL not in known-safe list
http://localhost:8080/health

Audited Apr 16, 2026 · audit v1.0
Generated Mar 1, 2026
A retail company needs to deploy a trained product recommendation model to production. The skill guides them through containerizing the model with Docker, deploying it to a staging environment for integration testing, and then rolling it out to production via a canary deployment, monitoring latency and error rates throughout. This ensures reliable, low-latency recommendations for users.
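The canary step above can be sketched as two small decisions: what fraction of traffic hits the new model, and whether its observed error rate justifies promotion. This is a minimal illustration, not the skill's actual implementation; the function names and the default tolerance are assumptions.

```python
import random

def route_request(canary_fraction: float) -> str:
    """Route a request to the 'canary' model with the given probability,
    otherwise to the current 'stable' model."""
    return "canary" if random.random() < canary_fraction else "stable"

def canary_healthy(canary_errors: int, canary_total: int,
                   baseline_error_rate: float, tolerance: float = 0.01) -> bool:
    """Promote the canary only if its error rate stays within `tolerance`
    of the stable model's baseline error rate. (Threshold is illustrative.)"""
    if canary_total == 0:
        return False  # no traffic observed yet; do not promote
    return (canary_errors / canary_total) <= baseline_error_rate + tolerance
```

In practice the same comparison would usually include latency percentiles alongside error rate, and the fraction would be ramped up in stages (e.g. 1% → 10% → 100%) as each check passes.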
A fintech firm wants to automate the retraining and deployment of fraud detection models. Using this skill, they set up a feature store for real-time transaction data, implement drift monitoring to detect changes in data patterns, and configure automated retraining triggers based on performance drops or data drift, ensuring models stay accurate and compliant.
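One common way to implement the drift check described above is the Population Stability Index (PSI), which compares the binned distribution of a live feature against a training-time reference. The sketch below is a simplified, self-contained version; the bin count and the 0.2 retraining threshold are conventional defaults, not values prescribed by this skill.

```python
import math

def psi(expected, actual, bins: int = 10) -> float:
    """Population Stability Index between a reference sample (`expected`,
    e.g. training data) and a live sample (`actual`)."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        n = len(sample)
        # Smooth empty bins so the log term stays defined.
        return [(c if c else 0.5) / n for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def needs_retraining(psi_value: float, threshold: float = 0.2) -> bool:
    """PSI above ~0.2 is a common rule of thumb for significant drift."""
    return psi_value > threshold
```

A scheduled job would compute this per feature over a sliding window and fire the retraining trigger when any feature crosses the threshold.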
A healthcare research organization builds a retrieval-augmented generation system to answer queries from medical documents. The skill helps select a vector database like Qdrant, implement document chunking strategies, and integrate an LLM with cost tracking, grounding answers in retrieved documents to reduce hallucinations and provide context-aware responses for clinicians and researchers.
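The chunking strategy mentioned above is often a sliding character window with overlap, so that sentences cut at a chunk boundary still appear intact in the neighboring chunk. A minimal sketch, with assumed (not skill-specified) defaults of 500 characters per chunk and 50 characters of overlap:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Split text into overlapping fixed-size character windows for embedding.
    Overlap preserves context that would otherwise be cut at chunk boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Production chunkers usually split on sentence or section boundaries rather than raw characters, but the windowing-with-overlap idea is the same.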
A SaaS company integrates LLMs into their customer support chatbot to handle complex queries. The skill provides guidance on creating a provider abstraction layer for flexibility, implementing retry logic and fallback mechanisms, and tracking costs per request to stay within budget while ensuring high availability and response quality.
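The provider abstraction, retry logic, and cost tracking described above can be combined in one small router: try providers in priority order, retry transient failures with backoff, and accumulate spend per request. This is an illustrative sketch under assumed interfaces (each provider is a callable returning `(text, cost_usd)`), not the skill's actual API.

```python
import time

class LLMProviderError(Exception):
    """Raised when a provider call fails (assumed transient)."""

class LLMRouter:
    """Try providers in order with retries and exponential backoff,
    falling back to the next provider; track total cost across calls."""

    def __init__(self, providers, max_retries: int = 2, backoff: float = 0.0):
        # providers: list of (name, callable(prompt) -> (text, cost_usd))
        self.providers = providers
        self.max_retries = max_retries
        self.backoff = backoff
        self.total_cost = 0.0

    def complete(self, prompt: str):
        for name, call in self.providers:
            for attempt in range(self.max_retries + 1):
                try:
                    text, cost = call(prompt)
                    self.total_cost += cost
                    return name, text
                except LLMProviderError:
                    time.sleep(self.backoff * (2 ** attempt))
        raise LLMProviderError("all providers failed")
```

Swapping providers then becomes a configuration change rather than a code change, which is the point of the abstraction layer.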
A manufacturing plant deploys models to predict equipment failures and needs continuous monitoring. The skill outlines setting up model serving with Triton Inference Server for high throughput, monitoring for drift and performance degradation, and automating alerts to trigger retraining, minimizing downtime and maintenance costs.
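The degradation-to-retraining loop above, independent of the serving layer (Triton or otherwise), reduces to tracking a rolling metric and flagging when it falls too far below baseline. A minimal sketch with hypothetical names and thresholds:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window accuracy monitor that raises a retrain flag when
    accuracy drops more than `max_drop` below the recorded baseline."""

    def __init__(self, window: int = 100, baseline: float = 0.95,
                 max_drop: float = 0.05):
        self.outcomes = deque(maxlen=window)
        self.baseline = baseline
        self.max_drop = max_drop

    def record(self, correct: bool) -> None:
        """Record whether a prediction was later confirmed correct."""
        self.outcomes.append(1 if correct else 0)

    def should_retrain(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough observations for a stable estimate
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.baseline - self.max_drop
```

In production the flag would typically publish an alert (e.g. to a pager or a retraining pipeline trigger) rather than being polled directly.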
Offer a managed platform that automates model deployment, monitoring, and retraining for clients. Revenue is generated through subscription fees based on usage tiers, such as number of models deployed or compute resources consumed, providing scalable infrastructure without upfront investment.
Provide expert consulting services to help businesses implement MLOps pipelines, LLM integrations, or RAG systems. Revenue comes from project-based fees or hourly rates, targeting industries like finance or healthcare that require specialized, compliant solutions.
Host and maintain production models for clients, handling deployment, scaling, and monitoring tasks. Revenue is based on a combination of hosting fees and performance-based pricing, such as charges per inference request or SLA guarantees for uptime and latency.
💬 Integration Tip
Start by containerizing models with the provided Docker template to ensure portability, then integrate with existing CI/CD pipelines for automated testing and deployment.
Scored Apr 18, 2026
Full desktop computer use for headless Linux servers. Xvfb + XFCE virtual desktop with xdotool automation. 17 actions (click, type, scroll, screenshot, drag,...
Kubernetes & OpenShift Platform Agent Swarm — A coordinated multi-agent system for cluster operations. Includes Orchestrator (Jarvis), Cluster Ops (Atlas), G...
Essential SSH commands for secure remote access, key management, tunneling, and file transfers.
Fetch GitHub issues, spawn sub-agents to implement fixes and open PRs, then monitor and address PR review comments. Usage: /gh-issues [owner/repo] [--label b...
Diagnoses common Linux service issues using logs, systemd/PM2, file permissions, Nginx reverse proxy checks, and DNS sanity checks. Use when a server app is failing, unreachable, or misconfigured.
Run a single command on a remote Tailscale node via SSH without opening an interactive session.