DeepSeek Prompt Generator.
Free reasoning-first prompts for DeepSeek-R1 and DeepSeek-V3. Chain-of-thought, first-principles, and adversarial self-check formats.
Built for math, code, analysis, and multi-step reasoning. Three copy-ready formats per problem.
Describe what you want
3 prompt variations
Click Copy to use

# ROLE
You are a rigorous reasoner solving a hard problem. Show your work.

# PROBLEM
[state the problem]

# APPROACH
Reasoning depth: Deep (multiple reasoning passes, self-check).
Format: Numbered reasoning steps + final answer.

# VERIFICATION
Check each step for consistency. Do this AFTER producing your initial answer. If the check finds an error, revise and re-output.

# OUTPUT
Show the reasoning trace first, then the final answer. Do not skip intermediate steps. If you are unsure at any step, say so explicitly rather than bluffing.
# ROLE
You are a first-principles reasoner. Do not pattern-match to remembered solutions.

# PROBLEM
[state the problem]

# INSTRUCTIONS
1. Restate the problem in your own words.
2. Identify the 2-3 core invariants or constraints.
3. Derive the solution from those invariants.
4. Compare your derivation to any well-known approach you recall — note where they agree or diverge.
5. Deliver the final answer.

# FORMAT
Numbered reasoning steps + final answer
# ROLE
You are two reasoners in dialogue: the Solver and the Critic.

# PROBLEM
[state the problem]

# PROCESS
1. Solver proposes an answer with reasoning.
2. Critic attacks the answer — looks for edge cases, missed assumptions, numerical errors.
3. Solver responds to each attack.
4. If any attack stands, Solver revises.
5. Final answer is delivered only after the Critic has no remaining attacks.

# OUTPUT
Show the full dialogue, then the final answer.
Under the hood
Why reasoning models need different prompts.
Reasoning models have two failure modes: under-thinking (one-shot answers when multi-step is needed) and over-thinking (chain-of-thought on a trivial problem). Explicitly choosing depth prevents both.
A self-check step catches arithmetic slips, missed edge cases, and assumption errors. Without it, reasoning models sound confident even when they're wrong. The generator makes verification a first-class part of the prompt.
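The two-pass flow the generator builds can be sketched as follows. The helper name and the template wording here are illustrative, not the generator's actual implementation: pass one produces an answer, pass two wraps it in a self-check prompt.

```python
# Sketch of a two-pass verification flow. The template wording and
# helper name are illustrative, not part of the generator itself.

VERIFY_TEMPLATE = (
    "Re-check the following solution step by step. "
    "If any step is inconsistent, revise and re-output the answer.\n\n"
    "SOLUTION:\n{answer}"
)

def build_verification_prompt(answer: str) -> str:
    """Wrap a first-pass answer in a self-check prompt for a second pass."""
    return VERIFY_TEMPLATE.format(answer=answer)

first_pass = "Step 1: 17 * 3 = 51. Final answer: 51."
print(build_verification_prompt(first_pass))
```

The key point is that verification is a separate instruction the model receives after committing to an answer, not a vague "be careful" appended to the original prompt.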
Structured output formats (numbered steps, pseudocode, JSON, table) prevent the model from defaulting to narrative answers when you need something more rigorous. Format is a rigor knob, not just a cosmetic choice.
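One practical payoff of a structured format: the reply can be machine-checked instead of eyeballed. A minimal sketch, with an illustrative stand-in reply and made-up JSON keys:

```python
import json

# Sketch: when the prompt demands a JSON answer, the reply can be
# validated programmatically. The reply string and its keys are
# stand-ins, not a real model output.
reply = '{"answer": 42, "steps": ["halve 84"], "confidence": "high"}'

try:
    parsed = json.loads(reply)
    assert {"answer", "steps"} <= parsed.keys()  # required fields present
    print("structured reply OK:", parsed["answer"])
except (json.JSONDecodeError, AssertionError):
    print("model fell back to narrative; re-prompt with a stricter format")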
Related free tools
Specialized generators for specific tasks.
Reasoning Prompt Generator
Model-agnostic structured reasoning prompts.
Chain-of-Thought Prompt Generator
Explicit step-by-step thinking scaffolds.
Tree-of-Thought Prompt Generator
Branching exploration, best-path selection.
Copilot Prompt Generator
GitHub Copilot / code-oriented prompts.
FAQ
Questions about DeepSeek prompting.
What is DeepSeek best at?
DeepSeek-R1 is a reasoning model in the style of OpenAI's o1. It produces strong results on math, code, and multi-step logical problems. For everyday writing tasks, DeepSeek-V3 is faster and less likely to over-think. Match the prompt format to the model: reasoning-heavy prompts work with R1, conversational prompts work with V3.
Why does DeepSeek benefit from first-principles prompting?
Reasoning models pattern-match to remembered solutions when the problem looks familiar. Explicitly asking the model to derive from invariants — not recall from training — produces more original answers and catches novel edge cases where the remembered solution would fail.
What's the Solver vs Critic format?
It's a prompt that forces the model into adversarial self-dialogue. The Solver proposes, the Critic attacks, the Solver revises. This catches errors that standard chain-of-thought misses because the Critic is instructed to look specifically for edge cases, missed assumptions, and off-by-one errors.
Should I always use Deep or Exhaustive reasoning depth?
No. Deeper reasoning is slower and can introduce errors for simple problems (the model over-thinks and second-guesses a correct answer). Use Quick for one-shot questions. Use Standard for most problems. Use Deep when you genuinely need multi-pass verification. Use Exhaustive only for formal proofs or high-stakes derivations.
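The tier rule above can be written down as a trivial lookup; the task labels here are hypothetical, the point is simply that depth is an explicit choice, not a default:

```python
# Hypothetical depth picker mirroring the tiers described above.
# Task labels are illustrative; the default is Standard.
DEPTH_BY_TASK = {
    "one_shot_question": "Quick",
    "typical_problem": "Standard",
    "multi_pass_verification": "Deep",
    "formal_proof": "Exhaustive",
}

def pick_depth(task: str) -> str:
    """Map a task type to a reasoning depth, defaulting to Standard."""
    return DEPTH_BY_TASK.get(task, "Standard")

print(pick_depth("formal_proof"))
```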
How do I control DeepSeek's output format?
The generator lets you pick between plain reasoning, numbered steps, pseudocode, code block, table, and JSON. For code problems, pick Pseudocode + final answer during design, Code block for implementation. For analysis, Table forces the model to be comparative instead of narrative.
Does this generator work with DeepSeek-V3 and DeepSeek-R1?
Yes. The reasoning prompts work especially well with R1. V3 handles them fine too but sometimes ignores the verification step — if that happens, lower the temperature (to around 0.3) or use the adversarial format, which forces the model to internalize self-criticism.
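A minimal sketch of what that looks like in practice, using the OpenAI-compatible request shape DeepSeek's API accepts. The model name, temperature, and message contents below are illustrative; check the current API documentation before relying on them:

```python
# Sketch of a chat request with a low temperature. Field values are
# illustrative, not authoritative; verify against the live API docs.
payload = {
    "model": "deepseek-chat",   # V3-style model name; R1 uses a different name
    "temperature": 0.3,         # lower temperature = less drift off the scaffold
    "messages": [
        {"role": "system", "content": "Verify each step before the final answer."},
        {"role": "user", "content": "Prove that the sum of two odd numbers is even."},
    ],
}
print(payload["temperature"])
```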
Can I paste DeepSeek prompts into Claude or GPT?
Yes. The prompts are model-agnostic structured reasoning scaffolds. Claude's extended thinking mode and GPT-4's chain-of-thought both benefit from the same structure. For Claude-specific XML tagging, use our Claude prompt generator.
Why does the verification step matter?
Reasoning models hallucinate less than non-reasoning models but still fail at arithmetic, algebraic manipulation, and complex state-tracking. A verification step (re-check each step, reason from first principles, or try a counter-example) catches 30-40% of errors that would otherwise ship as confident-sounding wrong answers.