Chain of Thought (CoT)
In one line
Prompting an LLM to write out its reasoning step by step before giving a final answer — usually improves accuracy on multi-step problems.
What it actually means
CoT is a prompting technique: instead of asking “what is 17 * 24?”, you ask “think step by step, then give the answer”. The model writes out intermediate steps in its output, and the final answer tends to be correct more often. It works because the model has more compute (more tokens) to use on the problem and because writing out structure forces it to commit to one branch at a time. Variants include zero-shot CoT (just append “let’s think step by step”), few-shot CoT (show examples of reasoning), and self-consistency (sample several chains, take the majority vote).
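The three variants can be sketched in a few lines of Python. This is a minimal illustration of the prompt construction and the majority vote, not any particular library's API; plug the prompts into whatever LLM client you use:

```python
from collections import Counter

def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: just append the trigger phrase to the question.
    return f"{question}\nLet's think step by step."

def few_shot_cot(question: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot CoT: prepend worked examples whose answers show the reasoning.
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA: Let's think step by step."

def self_consistency(final_answers: list[str]) -> str:
    # Self-consistency: sample several chains (at temperature > 0),
    # extract each chain's final answer, and keep the most common one.
    return Counter(final_answers).most_common(1)[0][0]
```

For example, if three sampled chains end in "408", "408", and "412", `self_consistency(["408", "408", "412"])` returns "408": the occasional derailed chain gets outvoted.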
Why it matters
CoT was the first widely useful trick for getting LLMs to handle multi-step reasoning, and it kicked off the line of work that led to ReAct, tool use, and modern reasoning models like o1 and o3. Even when newer models do the reasoning internally, the mental model — give the model room to think before it commits — still applies.
Example
Q: A bat and ball cost $1.10. The bat costs $1 more than the ball. How much is the ball?
A: Let's think step by step.
Let ball = x. Then bat = x + 1. Together: x + (x + 1) = 1.10.
So 2x + 1 = 1.10, which gives 2x = 0.10 and x = 0.05. The ball costs $0.05.
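The chain above is easy to check mechanically (working in cents to keep the comparison exact), including why the intuitive answer of $0.10 is wrong:

```python
# Prices in cents: ball = x, bat = x + 100, together 110.
ball = 5
bat = ball + 100          # the bat costs $1.00 more than the ball
assert ball + bat == 110  # together they cost $1.10

# The intuitive-but-wrong answer ($0.10) fails the same check:
wrong_ball = 10
assert wrong_ball + (wrong_ball + 100) != 110  # 120, not 110
```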
You’ll hear it when
- Prompt-engineering a math, logic, or planning task.
- Comparing reasoning models with non-reasoning ones.
- Reading about ReAct, Tree of Thoughts, or self-consistency.
- Debugging an agent that gives wrong answers without showing its work.
- Discussing why “think step by step” still moves benchmarks.