distributed-inference
Distributed inference for Llama, Qwen, and DeepSeek across heterogeneous hardware. Self-hosted distributed inference — scatter requests across macOS, Linux, Windows…
Install via ClawdBot CLI:

    clawdbot install twinsgeeks/distributed-inference

Grade: Fair — based on market validation, documentation quality, package completeness, maintenance status, and authenticity signals.
Security findings:
- Sends data to an undocumented external endpoint (potential exfiltration):
  POST → http://localhost:11435/dashboard/api/pull
- Calls an external URL not in the known-safe list:
  https://github.com/geeks-accelerator/ollama-herd

AI Analysis
The skill's architecture involves local network communication (mDNS, HTTP to localhost:11435) for coordinating distributed inference, which aligns with its stated purpose. The external call to GitHub is for repository access, not data exfiltration. No evidence of credential harvesting, hidden instructions, or obfuscation was found.
Audited Apr 17, 2026 · audit v1.0
Generated May 6, 2026
Small and medium enterprises can deploy distributed inference across a cluster of Apple Silicon Macs and Linux servers, using existing hardware to run large models like llama3.3:70b without expensive cloud GPU instances. The automatic node discovery and thermal-aware scheduling optimize performance while minimizing operational overhead.
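The "thermal-aware scheduling" mentioned above could, in one plausible form, mean preferring the coolest node that still has memory headroom for the model. This is a minimal sketch of that idea; the `Node` fields and `pick_node` policy are assumptions for illustration, not the skill's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_vram_gb: float       # memory headroom available for the model
    thermal_pressure: float   # 0.0 (cool) .. 1.0 (actively throttling)

def pick_node(nodes, model_vram_gb):
    """Prefer the coolest node that can still fit the model's weights."""
    eligible = [n for n in nodes if n.free_vram_gb >= model_vram_gb]
    if not eligible:
        return None
    return min(eligible, key=lambda n: n.thermal_pressure)

nodes = [
    Node("mac-studio", free_vram_gb=96, thermal_pressure=0.7),
    Node("linux-gpu", free_vram_gb=48, thermal_pressure=0.2),
]
print(pick_node(nodes, model_vram_gb=40).name)  # linux-gpu
```

A real scheduler would also weigh queue depth and network locality, but the shape of the decision — filter by capacity, rank by thermal state — stays the same.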
Organizations can set up local AI inference nodes in remote offices or field locations, with no dependency on high-bandwidth internet or centralized cloud. The coordinator routes requests to the nearest available node, ensuring low-latency responses for customer-facing chatbots or internal knowledge assistants.
Healthcare institutions can run LLMs on-premises across heterogeneous hardware, keeping patient data secure and compliant with regulations. The system's context-aware model placement and fallback chains ensure reliable inference even with varying node availability.
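The "fallback chains" referenced above can be sketched as an ordered list of nodes tried in sequence until one answers. The transport is stubbed out here; node names and the error-handling policy are illustrative assumptions, not the skill's documented behavior.

```python
def infer_with_fallback(chain, prompt, call):
    """Walk the fallback chain: try each node in order until one answers."""
    last_err = None
    for node in chain:
        try:
            return call(node, prompt)
        except ConnectionError as err:
            last_err = err  # node unreachable: fall through to the next one
    raise RuntimeError("all nodes in fallback chain failed") from last_err

# Demo with a stub transport: the first node is "down", the second answers.
def stub_call(node, prompt):
    if node == "edge-mac":
        raise ConnectionError("node offline")
    return f"{node}: ok"

print(infer_with_fallback(["edge-mac", "hq-linux"], "hi", stub_call))
```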
Research labs can pool computational resources from multiple workstations to run large-scale experiments with LLMs. The trace analysis and performance comparison features help researchers optimize model placement and understand inference bottlenecks.
Media companies can use distributed inference to power content generation tools (e.g., summarizing articles, creating drafts) across a fleet of Mac Studios and Linux servers. The adaptive capacity learning schedules heavy inference tasks during off-peak hours based on historical usage patterns.
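One simple way "adaptive capacity learning" from historical usage could work is an exponentially weighted moving average of load per hour of day, with heavy batch jobs scheduled into the hours whose learned load is low. The function names and the EWMA approach are assumptions for illustration only.

```python
def update_load(history, hour, observed_rps, alpha=0.3):
    """Exponentially weighted moving average of requests/sec per hour-of-day."""
    prev = history.get(hour, observed_rps)
    history[hour] = (1 - alpha) * prev + alpha * observed_rps
    return history[hour]

def off_peak_hours(history, threshold_rps):
    """Hours whose learned load sits below the threshold are candidates
    for heavy batch inference."""
    return sorted(h for h, rps in history.items() if rps < threshold_rps)

history = {}
for rps in (8.0, 10.0):        # two busy daytime samples at 14:00
    update_load(history, 14, rps)
update_load(history, 3, 0.5)   # one quiet overnight sample at 03:00
print(off_peak_hours(history, threshold_rps=2.0))  # [3]
```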
Offer a subscription service to deploy and manage distributed inference clusters on customer hardware. Includes monitoring, updates, and support. Revenue is recurring monthly or annual fees per node.
Provide a fully managed service where the provider hosts the coordinator and manages node agents on customer premises. Customers pay per inference request or compute consumption, with SLAs for latency and uptime.
Bundle the software with specialized hardware (e.g., pre-configured Mac Mini clusters) for turnkey AI deployment. Revenue from hardware markup plus software license and support contracts.
💬 Integration Tip
Ensure all nodes have Ollama installed and network connectivity via mDNS or explicit URLs. Start with the coordinator on a stable machine, then add node agents one by one to observe adaptive capacity learning.
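Once nodes are up, a client would talk to the coordinator over HTTP. The endpoint below is the one flagged in the audit above; the JSON payload shape is an assumption (it mirrors Ollama's own `/api/pull` body) and may differ from the coordinator's actual API.

```python
import json
from urllib.request import Request

COORDINATOR = "http://localhost:11435"  # dashboard port from the audit above

def build_pull_request(model):
    """Build the POST that asks the coordinator to pull a model to its nodes.
    The {"name": ...} payload shape is assumed, modeled on Ollama's /api/pull."""
    body = json.dumps({"name": model}).encode()
    return Request(
        f"{COORDINATOR}/dashboard/api/pull",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_pull_request("llama3.3:70b")
print(req.full_url)  # http://localhost:11435/dashboard/api/pull
```

The request is only constructed here, not sent; sending it requires a running coordinator on the local network.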
Scored May 6, 2026