LangChain
A Python/JS framework for composing LLM calls, prompts, tools, and memory into end-to-end applications.
- **Category:** LLM & Agent Frameworks
- **Difficulty:** Intermediate
- **When to use:** You need a quick LLM application scaffold with ready-made integrations for vector DBs, document loaders, and LLM providers.
- **When not to use:** You want a small, legible codebase you can fully reason about — LangChain's abstractions get in the way.
- **Alternatives:** LlamaIndex, DSPy, Haystack, raw SDK calls
At a glance
| Field | Value |
|---|---|
| Category | LLM application framework |
| Difficulty | Intermediate |
| When to use | Prototyping pipelines with many integrations |
| When not to use | Production systems where you want minimal deps |
| Alternatives | LlamaIndex, DSPy, Haystack, raw SDKs |
What it is
LangChain provides primitives — `Runnable`, `ChatPromptTemplate`, `RetrievalQA`, output parsers — and a long list of pre-built integrations for every LLM provider, vector DB, and document loader under the sun. The LCEL pipe syntax (`prompt | model | parser`) composes these into chains.
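The pipe composition is easier to reason about with a toy model. This is not LangChain's implementation, just the shape of it: `Runnable` below is a hypothetical stand-in for the real class, using only the standard library.

```python
# A minimal mental model of LCEL's pipe operator, with no LangChain installed.
# `Runnable` here is a hypothetical stand-in for the real class.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # `a | b` yields a new Runnable that feeds a's output into b.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

double = Runnable(lambda x: x * 2)
fmt = Runnable(lambda x: f"result={x}")
chain = double | fmt
print(chain.invoke(21))  # -> result=42
```

The real `Runnable` adds batching, streaming, and async on top, but the composition rule is the same: each stage's output becomes the next stage's input.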
When we reach for it at Ephizen
- Early prototypes that need to talk to OpenAI, Anthropic, and a vector DB in the same afternoon.
- Reference implementations of standard RAG patterns you don’t want to rewrite.
- Interop with the broader LangChain ecosystem (LangSmith for tracing, LangGraph for agents).
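The retrieve-then-generate shape behind those standard RAG patterns can be sketched in plain Python. This is a toy illustration with a naive keyword retriever, not LangChain's actual implementation (which uses embeddings and retriever classes):

```python
import re

# Toy sketch of the retrieve-then-generate shape that RAG chains package up.
# A real pipeline would use embeddings for retrieval and an LLM for the answer.
docs = [
    "Refund policy: refunds are issued within 30 days of purchase.",
    "Shipping takes 3 to 5 business days.",
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, corpus, k=1):
    # Rank documents by naive keyword overlap with the question.
    q = tokens(question)
    return sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

question = "What is the refund policy?"
context = "\n".join(retrieve(question, docs))
prompt = f"Context:\n{context}\n\nQ: {question}"  # hand this off to any LLM call
```

LangChain's value here is that the retriever, prompt, and model slots are swappable integrations rather than hand-rolled functions.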
Getting started
```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer questions from context."),
    ("human", "Context:\n{context}\n\nQ: {question}"),
])

# LCEL: pipe the prompt into the model, and the model's message into a string parser.
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

docs = "Refunds are issued within 30 days of purchase."  # stand-in for retrieved context
answer = chain.invoke({"context": docs, "question": "What's our refund policy?"})
```
Gotchas
- LangChain has broken its APIs multiple times. Pin versions aggressively.
- The framework leaks abstractions — debugging a chain often means reading LangChain’s source.
- For serious agent work, LangGraph (same team, state-machine based) is a saner foundation than the original agent classes.
- Many teams eventually strip LangChain out in favor of direct SDK calls once the shape of the app stabilizes.
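A sketch of what aggressive pinning looks like in a `requirements.txt`. The version numbers below are placeholders for whatever you last tested against, not recommendations:

```text
langchain==0.3.7          # pin exact, not >=; minor releases have broken imports before
langchain-core==0.3.15    # the split packages version independently; pin each one
langchain-openai==0.2.5
```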
Related tools
- **DSPy**: A framework for programming (not prompting) LLMs — declare signatures and modules, then let an optimizer compile prompts and few-shot examples for you.
- **Hugging Face Transformers**: The library that made pretrained transformers trivially loadable — from BERT to Llama — with a consistent API across tasks.
- **LangGraph**: A state-machine library from the LangChain team for building controllable, stateful LLM agents as explicit graphs of nodes and edges.