Chain-of-Thought Prompt Generator
Free builder for chain-of-thought prompts. Classic, self-correcting, and few-shot anchored formats, ready to paste into any model.
The technique that lifted math and logic accuracy by 20-plus points on GPT and Claude. Tuned for you.
Describe what you want
3 prompt variations
Click Copy to use.

# PROBLEM
[state the problem]

# INSTRUCTION
Think through this step by step. Show your reasoning as a numbered list (1. 2. 3.). Detail target: Standard (5 to 10 steps).

# OUTPUT
Show the reasoning first, then deliver the final answer on a new line prefixed "Final answer:". Stop when the answer is reached.
# PROBLEM
[state the problem]

# PROTOCOL
1. Think step by step in numbered list (1. 2. 3.) format. Target: Standard (5 to 10 steps).
2. After producing the chain, re-read every step.
3. If any step has a likely error or unsupported leap, flag it and redo from that point.
4. Stop criterion: stop when the answer is reached.

# OUTPUT
Show the initial chain, the self-check, any corrections, and the final answer.
# EXAMPLES (how to think)
Example A. Problem: "If 3 apples cost 6 dollars, how much do 7 cost?"
Reasoning:
1. Unit price = 6 / 3 = 2 dollars per apple.
2. 7 apples * 2 dollars = 14 dollars.
Final answer: 14 dollars.

Example B. Problem: "A bag has 5 red and 3 blue marbles. One is drawn at random. Probability it is blue?"
Reasoning:
1. Total marbles = 5 + 3 = 8.
2. Favorable = 3 (blue).
3. Probability = 3 / 8 = 0.375.
Final answer: 3/8 or 0.375.

# YOUR PROBLEM
[state the problem]
Reason in the same numbered list (1. 2. 3.) form. Target: Standard (5 to 10 steps). Stop when the answer is reached.
Under the hood
Why chain-of-thought prompts work.
Asking for steps forces the model out of pattern-match mode. Instead of recalling a similar problem and guessing, it decomposes the problem and infers each step. That is where the accuracy lift comes from.
A single chain commits early and rarely backs up. A self-correcting chain re-reads, flags errors, and redoes affected steps. That closes the gap on problems where the first pass lands close but wrong.
Few-shot examples anchor the reasoning shape. The model mirrors the example structure, which prevents it from skipping steps or drifting into a different format. Cheap to add, big accuracy lift on novel problem types.
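As a minimal sketch, the classic format can be assembled programmatically before being sent to any chat model. The function below is a hypothetical helper (not part of any SDK) that wraps a problem in the template shown above:

```python
def cot_prompt(problem: str, detail: str = "Standard (5 to 10 steps)") -> str:
    """Wrap a problem in the classic chain-of-thought template."""
    return (
        "# PROBLEM\n"
        f"{problem}\n\n"
        "# INSTRUCTION\n"
        "Think through this step by step. "
        "Show your reasoning as a numbered list (1. 2. 3.). "
        f"Detail target: {detail}.\n\n"
        "# OUTPUT\n"
        "Show the reasoning first, then deliver the final answer on a new line "
        'prefixed "Final answer:". Stop when the answer is reached.'
    )

# The returned string is pasted (or sent) as the user message to any model.
print(cot_prompt("If 3 apples cost 6 dollars, how much do 7 cost?"))
```

Keeping the template in one function makes it easy to swap the detail target per problem without retyping the instruction block.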
Related free tools
Specialized generators for specific tasks.
Reasoning Prompt Generator
Broader reasoning modes beyond linear CoT.
Tree-of-Thought Prompt Generator
Branching exploration when a single chain is not enough.
Few-Shot Prompt Generator
Example-anchored prompts for any task.
DeepSeek Prompt Generator
Reasoning-model prompts tuned for DeepSeek-R1 and V3.
FAQ
Questions about chain-of-thought prompting.
What is chain-of-thought prompting?
Chain-of-thought (CoT) is a prompting technique where the model shows its intermediate reasoning steps before delivering an answer. The classic trigger is the phrase 'think step by step'. Research from Google and others showed that for math, code, and logic problems, CoT lifts accuracy substantially over direct-answer prompts.
Does chain-of-thought help every kind of problem?
No. CoT lifts accuracy on multi-step reasoning problems: math, logic, code traces, planning. For one-shot factual lookups ('what year did X happen') it adds noise and sometimes introduces errors. Use CoT when the answer requires inference, skip it when the answer is pure recall.
What is zero-shot CoT vs few-shot CoT?
Zero-shot CoT is just adding 'Let us think step by step' to the prompt with no examples. Few-shot CoT includes worked examples showing the reasoning pattern you want. Few-shot CoT is stronger on novel problem types where the model might pattern-match to a wrong template. Zero-shot is lighter weight and good enough for familiar problem types.
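A minimal sketch of few-shot CoT assembly, reusing the worked example from the template above. The helper name and structure are illustrative, not a fixed API:

```python
# Worked examples anchor the reasoning shape; the model mirrors their structure.
EXAMPLES = [
    (
        "If 3 apples cost 6 dollars, how much do 7 cost?",
        "1. Unit price = 6 / 3 = 2 dollars per apple.\n"
        "2. 7 apples * 2 dollars = 14 dollars.\n"
        "Final answer: 14 dollars.",
    ),
]

def few_shot_cot(problem: str) -> str:
    """Build a few-shot chain-of-thought prompt from worked examples."""
    parts = ["# EXAMPLES (how to think)"]
    for i, (question, reasoning) in enumerate(EXAMPLES):
        # Label examples A, B, C... to match the template's convention.
        parts.append(
            f'Example {chr(65 + i)}. Problem: "{question}"\nReasoning:\n{reasoning}'
        )
    parts.append(
        "# YOUR PROBLEM\n"
        f"{problem}\n"
        "Reason in the same numbered list (1. 2. 3.) form. "
        "Stop when the answer is reached."
    )
    return "\n\n".join(parts)
```

Adding a second worked example of the same shape as the target problem is usually the highest-leverage change when the model drifts into the wrong format.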
What does the self-correcting variant add?
After the initial chain, the model re-reads each step and flags likely errors or unsupported leaps, then redoes from that point. This catches arithmetic slips and logic errors that a single-pass chain misses. It roughly doubles response length but meaningfully improves accuracy on hard problems.
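The two-pass loop can be sketched as follows. Here `call_model` is a placeholder for any function that takes a prompt string and returns the model's reply; no specific SDK is assumed:

```python
def self_correcting_chain(problem: str, call_model) -> str:
    """Run a chain-of-thought pass, then a self-check pass over the draft.

    `call_model` is a stand-in: any callable that takes a prompt string
    and returns the model's reply as a string.
    """
    # Pass 1: produce the initial reasoning chain.
    chain = call_model(
        f"# PROBLEM\n{problem}\n\n"
        "Think step by step in a numbered list, then give the final answer."
    )
    # Pass 2: feed the draft back and ask the model to audit its own steps.
    checked = call_model(
        f"# PROBLEM\n{problem}\n\n# DRAFT CHAIN\n{chain}\n\n"
        "Re-read every step. If any step has a likely error or unsupported "
        "leap, flag it and redo from that point. Then state the final answer."
    )
    return checked
```

The second call roughly doubles token usage, which matches the response-length cost noted above.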
What step style should I pick?
Numbered list for clear ordered steps. Bullet chain when each point is short. Prose paragraphs for explanation-heavy reasoning. Pseudocode for algorithms and code. Whiteboard narration for teaching or walking someone through the thinking. Numbered list is the safest default.
Can chain-of-thought make the model worse?
Yes, for simple problems. The model sometimes over-thinks a trivial question, second-guesses a correct instinct, and lands on a worse answer. Reasoning models are especially prone to this. Use the Compact detail level or skip CoT entirely for easy questions.
Does CoT work with reasoning models like o1 or DeepSeek-R1?
Reasoning models do their own internal chain-of-thought. Adding an explicit CoT prompt on top is redundant and can actually hurt. For those models keep the prompt clean and ask for the final answer. Use this generator for base models (GPT-4o, Claude, Gemini) where explicit CoT still helps.
How long should a chain of thought be?
As long as the problem requires and no longer. A math problem with 4 variables needs 4 or 5 steps. A system design question might need 10 or 15. Pad past that and the model starts making things up to fill space. Match the detail level setting to the genuine difficulty of the problem.