# LangGraph

A state-machine library from the LangChain team for building controllable, stateful LLM agents as explicit graphs of nodes and edges.
**Category:** LLM & Agent Frameworks
**Difficulty:** Intermediate
**When to use:** You're building an agent with branching logic, retries, checkpoints, or human-in-the-loop steps and want explicit control over the flow.
**When not to use:** A single LLM call with one tool is enough — LangGraph is overkill for one-shot prompts.
**Alternatives:** CrewAI, AutoGen, DSPy, custom state machine
## At a glance
| Field | Value |
|---|---|
| Category | Agent / workflow orchestration |
| Difficulty | Intermediate |
| When to use | Stateful agents, branching, human-in-the-loop |
| When not to use | Simple one-shot LLM calls |
| Alternatives | CrewAI, AutoGen, DSPy, custom state machines |
## What it is
LangGraph models an agent as a directed graph. Nodes are functions (or LLM calls) that read and write a typed shared state; edges decide which node runs next based on the state. It supports checkpoints (pause and resume across machines), streaming, and human approval steps. It’s much closer to a workflow engine than the implicit loops of the original LangChain agents.
## When we reach for it at Ephizen
- Multi-step research agents with tool use and retry logic.
- Approval flows where a human sits in the loop before a sensitive action runs.
- Long-running workflows that need to survive process restarts via checkpoints.
- Agents that branch on classification (“is this a support ticket or a bug report?”).
## Getting started

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    answer: str

def answer_node(state: State) -> dict:
    # Nodes return partial state updates; LangGraph merges them into the state.
    return {"answer": f"You asked: {state['question']}"}

graph = StateGraph(State)
graph.add_node("answer", answer_node)
graph.set_entry_point("answer")
graph.add_edge("answer", END)

app = graph.compile()
app.invoke({"question": "hello"})
```
## Gotchas
- The state schema matters — keep it small and serializable or checkpoints get expensive.
- Debugging a large graph is painful without LangSmith tracing; set it up early.
- Infinite loops are possible if edge conditions aren’t exhaustive. Always have a fallback edge to END.
## Related tools

- **DSPy**: A framework for programming (not prompting) LLMs — declare signatures and modules, then let an optimizer compile prompts and few-shot examples for you.
- **HuggingFace Transformers**: The library that made pretrained transformers trivially loadable — from BERT to Llama — with a consistent API across tasks.
- **LangChain**: A Python/JS framework for composing LLM calls, prompts, tools, and memory into end-to-end applications.