A technique where you give the AI a few examples of the task you want it to perform, improving accuracy without any additional training or fine-tuning.
Few-shot learning is giving the AI a few examples of what you want before asking it to do the same task. Instead of describing the task abstractly, you show it: 'Here are 3 customer emails with the correct responses. Now write the response to this 4th email.' The model learns the pattern from examples and applies it — no training needed, just good prompting.
Think of few-shot learning like showing a new employee samples of good work before asking them to do the task themselves. You don't need to write an instruction manual — showing examples communicates nuances that would be hard to describe. The AI picks up on patterns in the examples (format, tone, reasoning) and applies them to your actual request.
Few-shot learning (also called in-context learning) leverages an LLM's ability to generalize from a small number of examples provided in the prompt. This is distinct from traditional few-shot learning in machine learning, which modifies model weights. In LLM few-shot, no gradient updates occur; the model simply conditions its generation on the provided examples. Effectiveness depends on example quality, diversity, ordering (recency bias means later examples carry more weight), and the model's capability. 2-5 examples is often optimal; more can confuse the model or consume too much of the context window. For novel tasks, few-shot performance often matches or exceeds zero-shot prompting with explicit instructions.
'Classify these emails as urgent or non-urgent:
Example 1: [email] → Urgent
Example 2: [email] → Non-urgent
Now classify: [new email]'
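A prompt like the one above can be assembled programmatically. Here is a minimal sketch; the helper name, example emails, and labels are illustrative placeholders, and the call to an actual model is omitted:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot classification prompt from (input, label) pairs."""
    lines = [task]
    for i, (text, label) in enumerate(examples, start=1):
        lines.append(f"Example {i}: {text} -> {label}")
    lines.append(f"Now classify: {query}")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify these emails as urgent or non-urgent:",
    [
        ("Server is down, customers affected!", "Urgent"),
        ("Monthly newsletter draft attached.", "Non-urgent"),
    ],
    "Please reset my password when you get a chance.",
)
print(prompt)
```

The resulting string is sent to the model as a single prompt; the model is expected to continue the pattern with a label for the final email.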
Show 2-3 examples of transforming raw notes into structured meeting minutes; the AI matches the format for new notes.
Provide 2-3 samples of a brand's voice, then ask for new content in that voice. Few-shot is often more effective than describing the style.
Show examples of extracting structured data (name, email, company) from unstructured text; the AI applies the pattern to new inputs.
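For extraction tasks, a common pattern is to pair each example text with the JSON you expect back, so the model learns both the fields and the output format. A minimal sketch (function name and sample records are made up for illustration):

```python
import json

def build_extraction_prompt(examples, new_text):
    """Few-shot prompt: each example pairs raw text with the JSON we expect back."""
    parts = ["Extract name, email, and company from the text as JSON."]
    for text, fields in examples:
        parts.append(f"Text: {text}")
        parts.append(f"Output: {json.dumps(fields)}")
    parts.append(f"Text: {new_text}")
    parts.append("Output:")  # the model completes from here
    return "\n".join(parts)

examples = [
    ("Hi, I'm Ana Ruiz from Acme (ana@acme.com).",
     {"name": "Ana Ruiz", "email": "ana@acme.com", "company": "Acme"}),
    ("Reach Bob Lee at bob@globex.io, he works at Globex.",
     {"name": "Bob Lee", "email": "bob@globex.io", "company": "Globex"}),
]
prompt = build_extraction_prompt(
    examples, "This is Kim Park of Initech, kim@initech.com."
)
```

Ending the prompt with a bare "Output:" nudges the model to complete with JSON in the same shape as the examples, which can then be parsed with `json.loads`.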
Typically 1-10 examples. 'One-shot' is a single example; 'few-shot' is usually 2-5; beyond that you enter 'many-shot' territory. 2-5 is often the sweet spot — enough to establish the pattern without confusing the model with too many variations. Each additional example costs tokens, so there's a practical upper limit.
Often yes, especially for tasks where the output format matters, the task is subtle, or instructions would be long. Instructions answer 'what to do'; examples show 'what good looks like'. For simple well-understood tasks, instructions suffice. For nuanced tasks (tone, style, structured output), few-shot usually wins. Best: combine clear instructions with 2-3 examples.
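One way to combine instructions with examples is the chat-message format used by most modern LLM APIs: instructions go in a system turn, and each example becomes a user/assistant pair. A sketch under that assumption (the helper and sample replies are hypothetical; the API call itself is omitted):

```python
def build_chat_messages(instructions, examples, query):
    """Combine explicit instructions (system turn) with few-shot examples
    (alternating user/assistant turns), then append the real request."""
    messages = [{"role": "system", "content": instructions}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

messages = build_chat_messages(
    "Reply to customer emails in a warm, concise tone. Max two sentences.",
    [
        ("Where is my order #123?",
         "Thanks for checking in! Order #123 ships tomorrow."),
        ("Can I change my address?",
         "Of course! Reply with the new address and we'll update it."),
    ],
    "Do you offer gift wrapping?",
)
```

Presenting examples as prior assistant turns tends to anchor the model's format and tone more strongly than describing them in the instructions alone.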
Diverse examples help the model understand the boundaries of the pattern. If all examples look alike, the model may over-specialize. Include examples that cover different variations, edge cases, and formats you expect in real inputs. Order matters too — place your most important or diverse examples later, since recency bias emphasizes later examples.
The skill of writing instructions to AI models to get the best possible output.
📚 A neural network trained on massive text data to understand and generate human-like language.
🔍 A technique that lets AI models look up information before answering, improving accuracy and reducing hallucinations.
⚙️ The neural network architecture behind modern AI — introduced by Google in 2017, it now powers ChatGPT, Claude, and most other LLMs.