Few-Shot Learning

LLMs

Teaching a model a new task by giving it a handful of example input/output pairs in the prompt rather than by fine-tuning.


In one line

Show the model a few examples in the prompt and ask it to do the same thing on a new input — no weight updates.

What it actually means

Drop 2–10 solved examples into the prompt before the real query, formatted consistently. The model picks up the pattern from context and applies it to the new input. This is in-context learning: the weights don't change; the model simply conditions on the examples as part of its input. The GPT-3 paper (Brown et al., 2020, "Language Models are Few-Shot Learners") made this a headline capability. Quality typically improves rapidly from zero to roughly five examples, then flattens. Ordering matters, format matters, and the examples should be diverse and representative of the task.

Why it matters

Few-shot prompting is often the fastest way to get a baseline working. No training loop, no dataset curation, no GPU — just prompt engineering. For anything beyond prototyping, measure whether few-shot or fine-tuning wins on your task. Few-shot eats context tokens on every call, so at high volume the cost math can tip toward fine-tuning.
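A back-of-envelope sketch of that cost math. All the numbers here (token overhead, per-token price, fine-tuning cost) are illustrative assumptions, not real rates; plug in your provider's actual pricing.

```python
# Illustrative cost comparison: few-shot prompting vs. fine-tuning.
# Every constant below is an assumption for the sake of the arithmetic.

FEWSHOT_OVERHEAD_TOKENS = 400   # assumed: tokens spent repeating examples per call
PRICE_PER_1K_TOKENS = 0.002     # assumed price in USD per 1k input tokens
FINE_TUNE_FIXED_COST = 50.0     # assumed one-time fine-tuning cost in USD

def fewshot_overhead_cost(n_calls: int) -> float:
    """Extra spend from re-sending the in-prompt examples on every call."""
    return n_calls * FEWSHOT_OVERHEAD_TOKENS / 1000 * PRICE_PER_1K_TOKENS

def breakeven_calls() -> int:
    """Call volume beyond which the one-time fine-tune beats the per-call overhead."""
    per_call = FEWSHOT_OVERHEAD_TOKENS / 1000 * PRICE_PER_1K_TOKENS
    return int(FINE_TUNE_FIXED_COST / per_call)

print(f"Overhead at 10k calls: ${fewshot_overhead_cost(10_000):.2f}")
print(f"Break-even volume: ~{breakeven_calls()} calls")
```

Under these made-up numbers the few-shot overhead at 10,000 calls is a few dollars and the break-even sits in the tens of thousands of calls; the point is the shape of the trade-off, not the specific figures.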

Example

Classify sentiment as positive or negative.

Input: I love this laptop.
Output: positive

Input: Battery dies in two hours.
Output: negative

Input: Shipped fast and works great.
Output:
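The prompt above can be assembled programmatically. This is a minimal sketch; the "Input:/Output:" formatting and the example pairs mirror the illustration here, and the resulting string would be sent to whatever completion-style LLM API you use.

```python
# Minimal sketch: build a few-shot prompt from solved example pairs.
# The instruction, examples, and format are taken from the illustration above.

def build_fewshot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Format solved examples consistently, then append the unsolved query."""
    blocks = [instruction, ""]
    for inp, out in examples:
        blocks += [f"Input: {inp}", f"Output: {out}", ""]
    # Leave "Output:" open so the model completes it for the new input.
    blocks += [f"Input: {query}", "Output:"]
    return "\n".join(blocks)

prompt = build_fewshot_prompt(
    "Classify sentiment as positive or negative.",
    [("I love this laptop.", "positive"),
     ("Battery dies in two hours.", "negative")],
    "Shipped fast and works great.",
)
print(prompt)
```

Keeping the formatting identical across examples is the point: the model completes the pattern, so an inconsistent template degrades results more than a weak example does.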

You’ll hear it when

  • Prototyping a classification or extraction task with an LLM.
  • Comparing zero-shot vs few-shot performance.
  • Deciding between prompting and fine-tuning.
  • Reading the GPT-3 paper or any in-context learning work.
