redis-store
Use Redis effectively for caching, queues, and data structures with proper expiration and persistence.
Install via ClawdBot CLI:
clawdbot install ivangdavila/redis-store

**Expiration**
- `SET key value EX 3600` sets a value with a TTL; `SETEX` or `SET ... EX` are equivalent.
- A plain `SET` on an existing key removes its TTL by default; use `SET ... KEEPTTL` (Redis 6+) to preserve it.
- `SCAN` on a large database may still return expired keys until the lazy cleanup cycle runs.

**Data structures**
- Sorted sets: `ZADD limits:{user} {now} {request_id}` plus `ZREMRANGEBYSCORE` implements a sliding window.
- HyperLogLog: `PFADD visitors {ip}` counts billions of uniques in about 12 KB.
- Streams: `XADD`, `XREAD`, `XACK` beat `LIST` for reliable queues; `XREAD BLOCK` + `XACK` is the blocking-consumer pattern.
- Hashes: `HSET user:1 name "Alice" email "a@b.com"` is more memory efficient than a JSON string.

**Atomicity and locking**
- `GET` then `SET` is not atomic; another client can modify the key in between. Use `INCR`, `SETNX`, or Lua.
- Locks: `SET lock:resource {token} NX EX 30` (`NX` means "only if not exists"); use a unique token and verify it on release.
- `WATCH`/`MULTI`/`EXEC` gives optimistic locking; the transaction aborts if a watched key changed.
- `EVAL "script" keys args` runs a Lua script atomically.

**Persistence and memory**
- `appendfsync everysec` is a good durability/throughput balance.
- `BGSAVE` takes a manual snapshot; it doesn't block but forks the process, so leave memory headroom.
- `maxmemory` must be set; without it Redis uses all RAM, then swap, and will eventually crash the host.
- Eviction: `allkeys-lru` for caches, `volatile-lru` for mixed data, `noeviction` for persistent data.
- `INFO memory` shows usage; monitor `used_memory` against `maxmemory`.

**Cluster**
- Hash tags: `{user:1}:profile` and `{user:1}:sessions` go to the same slot; use them for related keys.
- `MGET`/`MSET` error unless all keys are in the same slot.
- `MOVED` redirects must be followed by the client; use a cluster-aware client library.

**Rate limiting**
- `INCR requests:{ip}:{minute}` with `EXPIRE` gives a simple fixed window.

**Pitfalls**
- `KEYS *` blocks everything; use `SCAN`.
- Send `QUIT` on shutdown for a graceful disconnect.

Generated Mar 1, 2026
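The lock pattern above (`SET ... NX EX` plus a unique token checked on release) can be sketched in pure Python. The `FakeRedis` class is an in-memory stand-in for Redis, and all names here are illustrative, not part of any real client library:

```python
import time
import uuid

class FakeRedis:
    """Minimal in-memory stand-in for a Redis string store with TTLs."""
    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def set_nx_ex(self, key, value, ttl):
        """Model SET key value NX EX ttl: set only if absent, with expiry."""
        entry = self._data.get(key)
        if entry is not None and entry[1] > time.time():
            return False  # key exists and has not expired
        self._data[key] = (value, time.time() + ttl)
        return True

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[1] <= time.time():
            return None
        return entry[0]

    def delete(self, key):
        self._data.pop(key, None)

def acquire_lock(r, resource, ttl=30):
    """Try to take the lock; return our token on success, None on failure."""
    token = str(uuid.uuid4())
    return token if r.set_nx_ex(f"lock:{resource}", token, ttl) else None

def release_lock(r, resource, token):
    """Release only if we still own the lock (token matches)."""
    key = f"lock:{resource}"
    # NOTE: against real Redis this check-and-delete must run as a single
    # Lua script (EVAL) to stay atomic; here the model is single-threaded.
    if r.get(key) == token:
        r.delete(key)
        return True
    return False
```

The token check on release prevents a client whose lock already expired from deleting a lock now held by someone else.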
Use Redis to cache frequently accessed product data like prices, inventory counts, and descriptions to reduce database load during peak shopping periods. Implement TTL on cache keys to ensure data freshness and prevent memory leaks, while using hashes for efficient storage of product objects.
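The cache-aside flow described above can be sketched as follows; `TTLCache` is an in-memory model of Redis hash storage with `EXPIRE`, and `fetch_product_from_db` is a hypothetical stand-in for the real database query:

```python
import time

class TTLCache:
    """Tiny in-memory model of Redis hashes with per-key TTL."""
    def __init__(self):
        self._store = {}  # key -> (fields_dict, expires_at)

    def hgetall(self, key):
        entry = self._store.get(key)
        if entry is None or entry[1] <= time.time():
            return None  # miss or expired
        return entry[0]

    def hset_with_ttl(self, key, fields, ttl):
        """Model HSET followed by EXPIRE."""
        self._store[key] = (dict(fields), time.time() + ttl)

def fetch_product_from_db(product_id):
    # Stand-in for a real (slow) database query.
    return {"name": f"Product {product_id}", "price": "9.99", "stock": "5"}

def get_product(cache, product_id, ttl=3600):
    key = f"product:{product_id}"
    cached = cache.hgetall(key)
    if cached is not None:
        return cached                            # cache hit
    product = fetch_product_from_db(product_id)  # cache miss: load from DB
    cache.hset_with_ttl(key, product, ttl)       # store as a hash with a TTL
    return product
```

With a real client the same shape applies: read the hash first, and only on a miss query the database and write back with a TTL so stale entries expire on their own.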
Leverage Redis HyperLogLog to track unique website visitors and sorted sets for real-time leaderboards or trending metrics. Implement atomic operations with Lua scripts to ensure data consistency while handling high-volume concurrent updates from multiple sources.
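The leaderboard side of this can be modeled in a few lines; the class below mimics the sorted-set commands `ZINCRBY` and `ZREVRANGE ... WITHSCORES` in plain Python, purely as a sketch of the semantics:

```python
class Leaderboard:
    """In-memory model of a Redis sorted set used as a leaderboard."""
    def __init__(self):
        self._scores = {}  # member -> score

    def zincrby(self, member, amount=1):
        """Model ZINCRBY: atomically bump a member's score."""
        self._scores[member] = self._scores.get(member, 0) + amount
        return self._scores[member]

    def top(self, n):
        """Model ZREVRANGE 0 n-1 WITHSCORES: highest scores first."""
        return sorted(self._scores.items(), key=lambda kv: -kv[1])[:n]
```

In real Redis the increment and the ordering are maintained server-side, which is why concurrent updates from many sources stay consistent without client-side coordination.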
Use Redis Streams to create reliable message queues for asynchronous task processing between microservices. Implement consumer groups with XACK for at-least-once delivery, ensuring tasks aren't lost even if workers temporarily disconnect during processing.
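The at-least-once property comes from the pending-entries list: a delivered message stays pending until `XACK`, so it can be reclaimed if the worker dies. A toy single-group model of that behavior (method names echo the Redis commands but this is not a real client):

```python
import itertools

class Stream:
    """Toy model of a Redis stream with one consumer group: delivered
    but unacked messages stay pending until XACK."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._messages = []  # (id, payload) not yet delivered
        self._pending = {}   # id -> payload, delivered but unacked

    def xadd(self, payload):
        msg_id = next(self._ids)
        self._messages.append((msg_id, payload))
        return msg_id

    def xreadgroup(self):
        """Deliver the next new message and track it as pending."""
        if not self._messages:
            return None
        msg_id, payload = self._messages.pop(0)
        self._pending[msg_id] = payload
        return msg_id, payload

    def xack(self, msg_id):
        """Acknowledge: remove from the pending list."""
        return self._pending.pop(msg_id, None) is not None

    def redeliver_pending(self):
        """Model reclaiming (XAUTOCLAIM): requeue unacked messages
        after a worker disappears."""
        for msg_id, payload in sorted(self._pending.items()):
            self._messages.append((msg_id, payload))
        self._pending.clear()
```

The key invariant: a message leaves the system only via `xack`, never merely by being read.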
Store user session data in Redis with appropriate TTL settings to automatically clean up inactive sessions. Use hash data structures for efficient storage of session attributes and implement clustering with hash tags to keep all session-related data on the same node.
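Hash tags work because Redis Cluster hashes only the substring inside the first non-empty `{...}` when computing a key's slot, so `{user:1}:profile` and `{user:1}:sessions` always land on the same node. A minimal sketch of that tag-extraction rule:

```python
def hash_tag(key: str) -> str:
    """Return the substring Redis Cluster hashes for slot assignment.
    If the key contains a non-empty {...} section, only that part is
    hashed; otherwise the whole key is."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:  # an empty {} is ignored
            return key[start + 1:end]
    return key
```

Two keys with equal `hash_tag` results map to the same slot, which is what makes multi-key operations on related session data possible in a cluster.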
Implement sliding window rate limiting using Redis sorted sets to track API requests per user/IP. Use atomic INCR operations with EXPIRE for simple fixed-window limits, ensuring fair usage while protecting backend systems from abuse and overload.
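The sorted-set sliding window boils down to three steps per request: drop timestamps older than the window (`ZREMRANGEBYSCORE`), count what remains (`ZCARD`), and record the new hit (`ZADD`). A self-contained Python model of that logic, using a sorted list in place of the sorted set:

```python
import bisect
import time

class SlidingWindowLimiter:
    """Pure-Python model of the sorted-set sliding window limiter."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._hits = {}  # key -> sorted list of request timestamps

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        hits = self._hits.setdefault(key, [])
        # Drop timestamps outside the window
        # (ZREMRANGEBYSCORE key -inf now-window).
        cutoff = now - self.window
        del hits[:bisect.bisect_right(hits, cutoff)]
        if len(hits) >= self.limit:
            return False  # over the limit within the window
        bisect.insort(hits, now)  # record this request (ZADD key now id)
        return True
```

Against real Redis these three steps should run in one `MULTI`/`EXEC` block or a Lua script so concurrent requests can't slip past the count.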
Offer managed Redis hosting with automated backups, monitoring, and scaling. Provide value-added services like performance tuning, security hardening, and 24/7 support for enterprises that need reliable caching and data layer solutions without operational overhead.
Provide specialized consulting services to optimize existing Redis implementations. Help clients fix common issues like memory leaks, improve data structure usage, implement proper persistence strategies, and design scalable architectures for high-traffic applications.
Build a SaaS platform that uses Redis for real-time data processing and analytics. Offer dashboards, reporting tools, and APIs that leverage Redis's fast in-memory capabilities to provide instant insights from streaming data sources across multiple industries.
💬 Integration Tip
Always set maxmemory and appropriate eviction policies in production, and use connection pooling with pipelining to minimize latency when integrating Redis with applications.
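As a starting point, the settings above might look like the following `redis.conf` fragment; the values are illustrative and should be tuned to the workload and available RAM:

```conf
# redis.conf: illustrative starting values, tune per workload
maxmemory 2gb                  # hard cap so Redis never grows into swap
maxmemory-policy allkeys-lru   # pure cache; use volatile-lru for mixed data
appendonly yes
appendfsync everysec           # durability/throughput balance
```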
Use the @steipete/oracle CLI to bundle a prompt plus the right files and get a second-model review (API or browser) for debugging, refactors, design checks, or cross-validation.
Manage Things 3 via the `things` CLI on macOS (add/update projects+todos via URL scheme; read/search/list from the local Things database). Use when a user asks Clawdbot to add a task to Things, list inbox/today/upcoming, search tasks, or inspect projects/areas/tags.
Local search/indexing CLI (BM25 + vectors + rerank) with MCP mode.
Use when designing database schemas, writing migrations, optimizing SQL queries, fixing N+1 problems, creating indexes, setting up PostgreSQL, configuring EF Core, implementing caching, partitioning tables, or any database performance question.
Connect to Supabase for database operations, vector search, and storage. Use for storing data, running SQL queries, similarity search with pgvector, and managing tables. Triggers on requests involving databases, vector stores, embeddings, or Supabase specifically.
Query, design, migrate, and optimize SQL databases. Use when working with SQLite, PostgreSQL, or MySQL — schema design, writing queries, creating migrations, indexing, backup/restore, and debugging slow queries. No ORMs required.