langchain
Avoid common LangChain mistakes — LCEL gotchas, memory persistence, RAG chunking, and output parser traps.
Install via ClawdBot CLI:
clawdbot install ivangdavila/langchain

Requires:
LCEL
- | pipes output to the next step — prompt | llm | parser
- RunnablePassthrough() forwards input unchanged — use in parallel branches
- RunnableParallel runs branches concurrently — {"a": chain1, "b": chain2}
- .invoke() for a single input, .batch() for multiple, .stream() for tokens
- Pass {"question": x}, not just x, if the prompt expects {question}

Memory
- ConversationBufferMemory grows unbounded — use ConversationSummaryMemory for long chats
- memory_key="chat_history" needs {chat_history} in the prompt
- return_messages=True for chat models — False returns a string for completion models

Chunking
- RecursiveCharacterTextSplitter preserves structure — splits on paragraphs, then sentences

Output parsing
- PydanticOutputParser needs format instructions in the prompt — call .get_format_instructions()
- OutputFixingParser retries with an LLM — wraps another parser and fixes its errors
- with_structured_output() on chat models — cleaner than manual parsing for supported models

Retrieval
- similarity_search returns Documents — use .page_content for the text
- The k parameter controls the result count — more isn't always better; noise increases
- filter={"source": "docs"} works in most vector stores
- max_marginal_relevance_search for diversity — avoids redundant, near-duplicate chunks

Agents
- handle_parsing_errors=True — prevents a crash on malformed agent output
- The default max_iterations may be too low for complex tool loops; set it explicitly

Common traps
- {Question} ≠ {question} (template variables are case-sensitive)
- ChatPromptTemplate for chat models, not PromptTemplate
- Pass config={"callbacks": [...]} through the chain
- trim_messages or summarization for long histories

Generated Mar 1, 2026
Building a customer support chatbot that maintains conversation history across sessions and retrieves relevant documentation. Requires proper memory persistence setup and RAG chunking to ensure accurate information retrieval from knowledge bases.
Creating a system that analyzes legal documents, extracts structured information using output parsers, and provides relevant precedents. Requires careful handling of PydanticOutputParser and proper chunking strategies for lengthy legal texts.
Developing a platform where students can ask questions about educational materials and get accurate answers with citations. Needs proper RAG implementation with chunk overlap and metadata filtering to retrieve the most relevant textbook sections.
Building a medical assistant that helps users describe symptoms and provides relevant health information while maintaining conversation context. Requires careful memory management and structured output parsing for safety-critical medical information.
Creating an agent that analyzes financial reports, retrieves market data, and generates investment insights. Requires proper agent configuration with tool descriptions and max iterations to prevent infinite loops during complex analysis.
Offering LangChain-powered automation as a service to enterprises for customer support, document processing, or internal knowledge management. Requires proper error handling and rate limit management for reliable service delivery.
Providing expert implementation services to companies wanting to integrate LangChain into their existing systems. Focuses on avoiding common mistakes like memory persistence issues and output parser failures that clients typically encounter.
Building specialized APIs that handle specific LangChain use cases like document processing or chatbot functionality, abstracting the complexity from end users. Requires robust error handling and proper prompt template management.
💬 Integration Tip
Always validate that input keys match prompt template variables exactly, and persist memory explicitly (e.g. a database-backed chat history) rather than relying on in-process defaults, which lose conversation data between deployments.