Reasoning Prompt Generator
Free builder for model-agnostic structured reasoning prompts. Pick the mode, set rigor, and ship.
Six reasoning modes, four rigor levels, three output formats. Works with any model.
Describe what you want
3 prompt variations
Click Copy to use.

# QUESTION
[state the question]

# REASONING APPROACH
Mode: Deductive (rules to conclusions).
Rigor: Standard (clear steps and justification).

# OUTPUT
Show reasoning then answer.
Name the assumptions you rely on. If any step feels uncertain, say so rather than bluffing.
# ROLE
You are a rigorous reasoner. Do not skip steps and do not paper over uncertainty.

# QUESTION
[state the question]

# PROTOCOL
1. Restate the question in your own words to confirm interpretation.
2. List the premises you will use, flagging which are given vs assumed.
3. Apply deductive reasoning (rules to conclusions) to derive the answer.
4. Surface any counterexample or edge case that could break the conclusion.
5. State the conclusion with a confidence level (low, medium, high) and why.

# OUTPUT FORMAT
Show reasoning then answer.
# ROLE
You are two reasoners. Proponent builds the strongest case. Skeptic stress-tests it.

# QUESTION
[state the question]

# PROCESS
1. Proponent: deliver the best deductive (rules to conclusions) argument for the answer. Include premises and inference rules.
2. Skeptic: challenge each premise and each inference. Flag hidden assumptions and brittle steps.
3. Proponent: respond to each challenge. Revise the argument if a challenge stands.
4. Resolution: present the refined conclusion with calibrated confidence.

Rigor target: Standard (clear steps and justification).

# OUTPUT
Show the full exchange followed by the final answer.
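The knobs behind the variations above (six modes, four rigor levels, three output formats) can be sketched as a small template assembler. This is a minimal illustration, not the generator's actual implementation; the function and dictionary names are hypothetical, while the mode and rigor descriptions are taken from this page.

```python
# Hypothetical sketch of assembling a structured reasoning prompt
# from the mode / rigor / output knobs described on this page.

MODES = {
    "deductive": "Deductive (rules to conclusions)",
    "inductive": "Inductive (generalizing from observations)",
    "abductive": "Abductive (most likely explanation)",
    "analogical": "Analogical (mapping a known domain onto a new one)",
    "causal": "Causal (tracing cause and effect)",
    "counterfactual": "Counterfactual (what breaks if a premise changes)",
}

RIGOR = {
    "light": "Light (quick sanity pass)",
    "standard": "Standard (clear steps and justification)",
    "strict": "Strict (every premise and assumption explicit)",
    "formal": "Formal (proof-level structure with named invariants)",
}

OUTPUTS = {
    "show": "Show reasoning then answer",
    "answer_first": "Answer first, then reasoning",
    "hidden": "Reason internally, output only the final answer",
}

def build_prompt(question: str, mode: str = "deductive",
                 rigor: str = "standard", output: str = "show") -> str:
    """Assemble a plain-text reasoning prompt from the chosen knobs."""
    return (
        f"# QUESTION\n{question}\n\n"
        f"# REASONING APPROACH\n"
        f"Mode: {MODES[mode]}.\n"
        f"Rigor: {RIGOR[rigor]}.\n\n"
        f"# OUTPUT\n{OUTPUTS[output]}."
    )

print(build_prompt("Is the new cache layer worth the complexity?",
                   mode="causal", rigor="strict"))
```

Because the output is plain text, the same string works unchanged across models; only the knob values vary per question.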
Under the hood
Why structured reasoning prompts beat free-form asks.
Naming the reasoning mode (deductive, abductive, causal) pins the model to one approach. Without it the model drifts between styles mid-answer, which produces confident but incoherent conclusions.
Rigor level is a knob, not a default. Light rigor misses edge cases. Formal rigor is slow and overkill for most questions. Standard is the sweet spot for everyday decisions.
A single reasoner commits to one path and defends it. A two-sided steelman versus skeptic prompt surfaces hidden assumptions and brittle steps. Use it for anything contested or high-stakes.
Related free tools
Specialized generators for specific tasks.
Chain-of-Thought Prompt Generator
Linear step-by-step reasoning scaffolds.
Tree-of-Thought Prompt Generator
Branching exploration with best-path selection.
DeepSeek Prompt Generator
Reasoning-model tuned prompts for R1 and V3.
Claude Prompt Generator
Claude-native XML-tagged reasoning prompts.
FAQ
Questions about reasoning prompts.
What is a reasoning prompt?
A reasoning prompt is a prompt that asks the model to produce its intermediate logical steps, not just the final answer. It forces the model out of pattern-matching mode and into step-by-step inference. That lifts accuracy on hard problems and makes the answer easier to audit.
Which reasoning mode should I pick?
Deductive for rule-based problems (if A implies B, and A holds, then B holds). Inductive for generalizing from observations. Abductive for finding the most likely explanation. Analogical for mapping a known domain onto a new one. Causal for tracing cause and effect. Counterfactual for exploring what breaks if a premise changes. Most decisions are abductive or causal in practice.
What does rigor level change?
Rigor controls how many explicit steps the model must surface. Light does a quick sanity pass. Standard names steps and justifies them. Strict makes every premise and assumption explicit. Formal pushes the model toward proof-level structure with named invariants. Use Light for brainstorming, Standard for most work, Strict for high-stakes calls, and Formal only for mathematical or policy work that needs to withstand scrutiny.
Should I always show the reasoning or hide it?
Show it when you need to audit or learn from the steps. Hide it when you trust the model and want a clean final answer. Answer first then reasoning after is useful for skim readers who want the headline before the details. For agent workflows you often want internal reasoning only so downstream steps see a clean output.
How is this different from chain-of-thought?
Chain-of-thought is one specific reasoning strategy: linear step-by-step. This generator covers six modes and three rigor levels. Chain-of-thought is a subset of what you can produce here. For straightforward step-by-step work the chain-of-thought generator is the tighter fit. For anything else this one gives you more control.
Does the steelman versus skeptic format actually help?
Yes, especially for decisions under uncertainty. A single reasoner tends to commit to one path and then defend it. Forcing the model into a two-sided dialogue surfaces hidden assumptions that a single pass misses. Work on debate-style and adversarial prompting has reported accuracy gains on ambiguous or contested questions.
Do reasoning prompts work with non-reasoning models?
Yes. GPT-4o, Claude, and Gemini all handle structured reasoning prompts. The accuracy lift is bigger with reasoning models (DeepSeek-R1, o1, o3) but even base models produce better answers when you ask them to state premises and surface counterexamples.
Can I paste this into Claude or ChatGPT directly?
Yes. The prompts are plain text with no model-specific syntax. Paste into Claude, ChatGPT, Gemini, DeepSeek, or any API. For Claude you can wrap the sections in XML tags if you want cleaner boundaries. For a Claude-native version use the Claude prompt generator.
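Wrapping the sections in XML tags, as suggested above for Claude, is a mechanical transformation. A minimal sketch, assuming a hypothetical helper name; the tag names are illustrative, not required by any API:

```python
# Hypothetical sketch: convert the plain-text sections of a generated
# prompt into XML-tagged blocks for cleaner section boundaries in Claude.

def wrap_sections(sections: dict[str, str]) -> str:
    """Turn {"question": "..."} into <question>...</question> blocks."""
    return "\n".join(
        f"<{name}>\n{body}\n</{name}>" for name, body in sections.items()
    )

prompt = wrap_sections({
    "question": "Should we ship the migration this week?",
    "reasoning_approach": "Mode: Abductive. Rigor: Standard.",
    "output_format": "Show reasoning then answer.",
})
print(prompt)
```

The plain-text `# SECTION` headings and the XML-tagged form carry the same content; the tags just give the model unambiguous boundaries between sections.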