Few-Shot Prompt Generator
Free builder for few-shot prompts. Paste your examples, pick the structure, and ship.
Three formats: classic I/O, XML-tagged (Claude), and discriminative rule-extraction. Works with any model.
Describe what you want
3 prompt variations
Click Copy to use

# TASK
[describe the task]

# EXAMPLES
[paste your examples here]

# OUTPUT
Given a new input, produce the output following the exact pattern above.

Example structure: Input -> Output pairs. Selection note: pick diverse examples (cover edge cases).
<task>
[describe the task]
</task>

<examples>
[paste your examples]
</examples>

<instruction>Given a new input in <new_input>, produce the output in the same pattern as the examples.</instruction>
# TASK
[describe the task]

# EXAMPLES
[paste your examples]

# INSTRUCTION
1. Study the pattern in the examples. What rule distinguishes the outputs?
2. State the rule in one sentence.
3. Apply the rule to the new input.

Number of examples shown: 3 (standard). Selection strategy: pick diverse examples (cover edge cases).
Under the hood
Why worked examples beat rule descriptions.
Rule descriptions are abstract. Worked examples are concrete. The model learns the rule more reliably from three well-chosen input-output pairs than from a paragraph of rule text. Examples are how humans learn too.
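A minimal sketch of this idea in Python, assembling a classic I/O few-shot prompt from worked pairs (the sentiment task and example pairs below are illustrative placeholders, not output from the generator):

```python
# Build a few-shot prompt from worked input-output pairs.
# Three well-chosen pairs stand in for a paragraph of rule text.
examples = [
    ("The movie was a waste of time.", "negative"),
    ("Absolutely loved every minute.", "positive"),
    ("It was fine, nothing special.", "neutral"),
]

def build_few_shot_prompt(task, pairs, new_input):
    lines = [f"# TASK\n{task}\n", "# EXAMPLES"]
    for inp, out in pairs:
        lines.append(f"Input: {inp}\nOutput: {out}\n")
    # End on a bare "Output:" so the model completes the pattern.
    lines.append(f"# NEW INPUT\nInput: {new_input}\nOutput:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of the sentence.",
    examples,
    "Best purchase I made all year.",
)
print(prompt)
```

The trailing bare `Output:` is the anchor: the model's most likely continuation is a label in the same format as the three worked pairs.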
Diverse examples force the model to learn the rule. Narrow examples teach format only. Picking the right mix depends on whether you want the model to generalize or to hit a tight pattern.
Plain I/O pairs work everywhere. XML tags are Claude's native format and produce more reliable structure. JSON formats work for structured output. Match the structure to the model and the output type.
Related free tools
Specialized generators for specific tasks.
Zero-Shot Prompt Generator
Instruction-only prompts when examples are not available.
Chain-of-Thought Prompt Generator
Add step-by-step reasoning to your few-shot prompts.
Claude Prompt Generator
XML-tagged prompts tuned for Claude.
All Prompt Generators
35+ free generators across models and techniques.
FAQ
Questions about few-shot prompting.
What is few-shot prompting?
Few-shot prompting is giving the model 1 to 5 worked examples of the task before asking it to solve a new instance. The examples anchor the output format and the decision rule. It is one of the most reliable accuracy lifts you can apply to any LLM and it costs only a handful of extra tokens.
How many examples should I use?
Three is the sweet spot for most tasks. One-shot works when the pattern is obvious. Two-shot starts to anchor format but might not cover edge cases. Four or five is worth it for noisy tasks or ambiguous rules. Past five you usually get diminishing returns and the model starts overweighting whichever example it sees last.
How do I pick good examples?
Diverse examples cover edge cases and force the model to learn the rule, not pattern-match to a single case. Similar examples narrow the domain, which is useful when the task is tight. Progressive difficulty (easy to hard) helps for teaching-style prompts. Adding one negative example (input with the wrong answer, explicitly marked) is sometimes powerful but can confuse weaker models. Start with diverse.
Which example structure works best?
Plain 'Input: ... Output: ...' pairs work with every model and are the safest default. XML tags (<example>...</example>) are Claude's preferred format and help Claude follow the structure more reliably. Markdown headings are readable but less reliable. Dialogue form works when the task is conversational. JSON is best when the task outputs structured data.
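As an illustration, a small helper that wraps the same input-output pairs in XML tags, the structure Claude follows most reliably (the tag names `input` and `output` inside each `<example>` block are a common convention, not a fixed requirement):

```python
# Wrap input-output pairs in XML example tags for Claude-style prompts.
def to_xml_examples(pairs):
    blocks = []
    for inp, out in pairs:
        blocks.append(
            f"<example>\n<input>{inp}</input>\n<output>{out}</output>\n</example>"
        )
    return "\n".join(blocks)

xml_examples = to_xml_examples([
    ("I loved it.", "positive"),
    ("Total letdown.", "negative"),
])
print(xml_examples)
```

The resulting block drops into the `<examples>` slot of the XML template shown earlier on this page.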
What is discriminative few-shot?
It is a format where you first ask the model to state the rule that distinguishes the example outputs, and only then apply the rule. This forces the model to make its decision rule explicit, which catches the case where it pattern-matches instead of actually understanding the task. Use it when the examples have a subtle rule you worry the model might miss.
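A sketch of the discriminative format as a builder function, mirroring the rule-extraction template above (the formal/informal task in the demo call is an illustrative placeholder):

```python
# Discriminative few-shot: make the model state the rule before applying it.
def build_discriminative_prompt(task, pairs, new_input):
    example_lines = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in pairs)
    return (
        f"# TASK\n{task}\n\n"
        f"# EXAMPLES\n{example_lines}\n\n"
        "# INSTRUCTION\n"
        "1. Study the pattern in the examples. What rule distinguishes the outputs?\n"
        "2. State the rule in one sentence.\n"
        "3. Apply the rule to the new input.\n\n"
        f"Input: {new_input}\nOutput:"
    )

prompt = build_discriminative_prompt(
    "Label each sentence formal or informal.",
    [("Hey, what's up?", "informal"),
     ("I look forward to your reply.", "formal")],
    "gonna grab lunch, u in?",
)
print(prompt)
```

Step 2 is the safeguard: if the model's one-sentence rule is wrong, you can see the misunderstanding before it contaminates the answer.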
Can the model overfit to my examples?
Yes, it can happen. If all your examples share an accidental surface feature (e.g. all start with 'The'), the model might learn the accidental feature instead of the real rule. Diverse example selection mitigates this. If you see the model copying surface patterns, add examples that break those patterns.
Does few-shot work with reasoning models?
Yes, but the lift is smaller. Reasoning models like o1 and DeepSeek-R1 already decompose the task internally. Examples still help anchor output format, but the accuracy lift is modest compared to base models. For reasoning models, prioritize rule clarity over example volume.
Can I use few-shot with structured JSON output?
Yes, and it is one of the best use cases. Examples of input paired with correctly-formed JSON output teach the model the schema better than a schema description alone. Pair with response_format: json_object on the OpenAI API or the JSON output modes in Claude and Gemini for cleaner results.
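A sketch of the JSON case, assuming the OpenAI Chat Completions API: each example output is serialized JSON, and `response_format={"type": "json_object"}` enables JSON mode. The model name, extraction schema, and example pairs are illustrative placeholders; the network call itself is left commented out.

```python
import json

# Few-shot examples where every output is well-formed JSON in the target schema.
pairs = [
    ("Ada Lovelace, born 1815", {"name": "Ada Lovelace", "birth_year": 1815}),
    ("Alan Turing, born 1912", {"name": "Alan Turing", "birth_year": 1912}),
]

example_text = "\n\n".join(
    f"Input: {inp}\nOutput: {json.dumps(out)}" for inp, out in pairs
)

payload = {
    "model": "gpt-4o-mini",  # illustrative model name
    "response_format": {"type": "json_object"},  # OpenAI JSON mode
    "messages": [
        {
            "role": "system",
            "content": "Extract a JSON object from the input.\n\n" + example_text,
        },
        {"role": "user", "content": "Grace Hopper, born 1906"},
    ],
}
# With an OpenAI client this would be sent as:
# client.chat.completions.create(**payload)
```

JSON mode guarantees syntactically valid JSON; the few-shot examples are what pin down the field names and value types.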