ReAct
In one line
An agent pattern that interleaves reasoning (“think”) with tool calls (“act”) so the model can observe results and adjust mid-task.
What it actually means
ReAct (Reason + Act) structures the agent loop as alternating thoughts and actions. The model writes a thought (“I should look up the weather”), then an action (search("weather in Berlin tomorrow")), then observes the tool result, then writes the next thought, and so on until it produces a final answer. The reasoning trace gives the model a place to plan and self-correct between steps. Modern tool-calling APIs (OpenAI, Anthropic) bake the same idea into structured function-call/observation cycles.
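The loop described above can be sketched in a few lines of Python. This is a toy harness, not any framework's real API: the "model" is a scripted stub standing in for an LLM call, and the action parser is deliberately naive.

```python
import json
import re

# Toy tool registry. In a real agent these would be actual API calls;
# here get_balance is a stub that always returns the same balance.
TOOLS = {
    "get_balance": lambda user_id: {"balance_usd": 128.50},
}

def scripted_model(transcript: str) -> str:
    """Hypothetical stand-in for an LLM: given the transcript so far,
    emit the next Thought/Action pair."""
    if "Observation:" not in transcript:
        return ('Thought: I need the user\'s balance first.\n'
                'Action: get_balance(user_id="u_42")')
    return ('Thought: The balance covers the refund.\n'
            'Action: final_answer("Yes, your balance covers the refund.")')

ACTION_RE = re.compile(r'Action:\s*(\w+)\((.*)\)', re.DOTALL)

def run_react(model, max_steps: int = 5) -> str:
    transcript = ""
    for _ in range(max_steps):
        step = model(transcript)          # think + pick an action
        transcript += step + "\n"
        name, arg_str = ACTION_RE.search(step).groups()
        if name == "final_answer":
            return arg_str.strip('"')
        # Toy parser for key="value" arguments; real agents use JSON schemas.
        kwargs = dict(re.findall(r'(\w+)="([^"]*)"', arg_str))
        observation = TOOLS[name](**kwargs)
        # Feed the tool result back so the next thought can react to it.
        transcript += f"Observation: {json.dumps(observation)}\n"
    return "Stopped: step limit reached."

answer = run_react(scripted_model)
```

The step cap matters in practice: it is the simplest guard against the agent looping forever when no action ever reaches `final_answer`.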
Why it matters
ReAct is the canonical mental model for an agent that uses tools. Even if you never write the original prompt template, you’ll recognise its shape in every framework — LangGraph, OpenAI Agents SDK, Claude tool use, MCP clients. Knowing the pattern helps you debug agents that loop forever, ignore tool results, or hallucinate observations.
Example
Thought: I need to know the user's account balance before answering.
Action: get_balance(user_id="u_42")
Observation: { "balance_usd": 128.50 }
Thought: That's enough to cover the refund. I can confirm.
Action: final_answer("Yes, your balance covers the refund.")
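The same trace, expressed in the structured message shape that tool-calling APIs use. The field names below are illustrative only, not any vendor's exact schema; the point is that the thought/act/observe cycle becomes a sequence of typed messages rather than free text.

```python
# Simplified, hypothetical message log for the trace above: the assistant
# emits a tool call, the client executes it and returns the result as a
# dedicated message, then the model answers. Real APIs differ in detail.
messages = [
    {"role": "user", "content": "Does my balance cover the refund?"},
    {"role": "assistant",
     "tool_call": {"name": "get_balance", "arguments": {"user_id": "u_42"}}},
    {"role": "tool", "name": "get_balance",
     "content": {"balance_usd": 128.50}},
    {"role": "assistant",
     "content": "Yes, your balance covers the refund."},
]

final = messages[-1]["content"]
```

Seen this way, "hallucinating an observation" is the model writing the `tool` message itself instead of waiting for the client to supply it.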
You’ll hear it when
- Building any tool-using agent.
- Reading the LangChain or LangGraph docs.
- Debugging an agent that hallucinates tool outputs instead of waiting for them.
- Comparing single-shot RAG to agentic retrieval.
- Designing your own action schema for an LLM.