Gemini Prompt Generator
Build prompts tuned for Gemini 2.5 Pro, Flash, and Deep Think. Planning, long context, multimodal, and Workspace — all one click away.
Works in Gemini, Google AI Studio, Vertex AI, and the Gemini API. Free. No sign-up.
# ROLE
You are acting as a Gemini assistant specialized in: Deep Research.

# TASK
[describe the task]

# METHOD
Use Deep Research mode. Start with a 3–5 step written plan, surface assumptions, then execute each step. Cite sources inline and flag conflicting evidence. Briefly think through the problem before answering. Show a compact reasoning trace (3–5 bullets) above your final answer.

# OUTPUT FORMAT
- Tone: Step-by-step with reasoning
- Format: Structured markdown with H2 / H3
- If any required input is missing, ask ONE clarifying question before generating; otherwise proceed.

# QUALITY BAR
- Ground every claim in the provided material or your knowledge, clearly distinguishing between the two.
- Flag any assumption you had to make.
- If confidence is low, say so explicitly.
Paste into gemini.google.com, AI Studio, or the Gemini API. Everything is built in your browser — nothing is sent to our servers.
Prompt anatomy
Why prompts for Gemini need a different shape.
Gemini 2.5 Pro's reasoning jumps measurably when you ask for a written plan first. 'Plan, then execute' unlocks thinking the model otherwise skips.
Gemini 2.5 Pro has a 1M+ token context window. Stop summarizing; paste the full doc. Then force verbatim quotes so the answer stays anchored to the source.
If you're attaching images, audio, or video, say so up top. Gemini treats those attachments as primary evidence when you tell it to describe before inferring.
How to use it
From task to paste-ready in 30 seconds.
1. Pick a mode. Deep Research for multi-step investigations; Long Context for 100K+ token documents; Multimodal for image, audio, or video; Data Analysis for spreadsheets; Coding for dev work; Workspace for Docs-, Sheets-, or Gmail-ready output.
2. Describe the task. One or two sentences. The generator wraps your task in the right scaffolding automatically.
3. Choose reasoning depth. Fast for Flash. Balanced for everyday 2.5 Pro use. Deep Think when the answer matters and you can wait 30 extra seconds.
4. Copy and paste into gemini.google.com, AI Studio, or your Gemini API call. Everything happens locally; no data leaves your browser.
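The wrapping in step 2 can be sketched as plain string assembly. A minimal, illustrative version follows; `build_prompt`, its parameters, and the per-depth METHOD text are assumptions for illustration, not the tool's actual code:

```python
# Illustrative sketch: wrap a task description in the
# ROLE / TASK / METHOD / OUTPUT FORMAT / QUALITY BAR scaffolding.
# Function name, defaults, and METHOD wording are hypothetical.

def build_prompt(task: str,
                 specialty: str = "Deep Research",
                 depth: str = "Balanced") -> str:
    method = {
        "Fast": "Answer directly and concisely; skip the written plan.",
        "Balanced": ("Start with a 3-5 step written plan, surface "
                     "assumptions, then execute each step."),
        "Deep Think": ("Use extended reasoning. Write a plan, analyze "
                       "tradeoffs explicitly, then execute and "
                       "self-check each step."),
    }[depth]
    return "\n".join([
        "# ROLE",
        f"You are acting as a Gemini assistant specialized in: {specialty}.",
        "",
        "# TASK",
        task,
        "",
        "# METHOD",
        method,
        "",
        "# OUTPUT FORMAT",
        "- Format: Structured markdown with H2 / H3",
        "",
        "# QUALITY BAR",
        "- Flag any assumption you had to make.",
        "- If confidence is low, say so explicitly.",
    ])

prompt = build_prompt("Compare vector databases for a RAG pipeline.")
```

The same scaffold string works whether you paste it into gemini.google.com or send it as the contents of an API call.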
Related tools
Keep going with Gemini.
FAQ
Questions about Gemini prompts.
What makes a Gemini prompt different from a ChatGPT prompt?
Three things. First, Gemini 2.5 Pro responds strongly to structured planning — asking for a 'plan first, then execute' workflow unlocks more thorough answers than ChatGPT-style single-turn prompts. Second, Gemini's context window (1M+ tokens) rewards including reference material directly in the prompt instead of summarizing it. Third, Gemini is tightly integrated with Google Workspace, so prompts that produce Docs-, Sheets-, or Gmail-compatible output are easier to act on.
Does this work with Gemini 2.5 Pro, Flash, and Deep Think?
Yes. The 'Balanced' reasoning level targets Gemini 2.5 Pro's default behavior. 'Deep Think' explicitly asks for the extended reasoning mode available in 2.5 Pro. 'Fast' is optimized for Flash and Flash-Lite when latency and cost matter. The prompt structure is compatible with all three and also works in AI Studio and the Gemini API.
Can I use the output in Gems or custom Gemini apps?
Yes. The generated prompt can be dropped into a Gem's instructions, a Vertex AI Agent, or a Gemini API system instruction verbatim. For Gems, we recommend stripping the 'TASK' section and keeping ROLE + METHOD + OUTPUT FORMAT + QUALITY BAR as the persistent instruction.
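Splitting the TASK section out for a Gem can be sketched as a small helper; `split_for_gem` and its parsing approach are illustrative, not part of the generator:

```python
import re

def split_for_gem(prompt: str):
    """Separate the '# TASK' section from the persistent sections.

    Returns (persistent_instruction, task_text). Assumes the prompt
    uses '# SECTION' headers as in the generated template.
    Illustrative helper only.
    """
    # Split at each line that starts a '# ' header, keeping the headers.
    sections = re.split(r"(?m)^(?=# )", prompt)
    task_parts = [s for s in sections if s.startswith("# TASK")]
    keep = [s for s in sections if s and not s.startswith("# TASK")]
    task = task_parts[0].removeprefix("# TASK").strip() if task_parts else ""
    return "".join(keep).strip(), task

template = ("# ROLE\nYou are a Gemini assistant.\n"
            "# TASK\nSummarize Q3 numbers.\n"
            "# METHOD\nPlan, then execute.")
instruction, task = split_for_gem(template)
```

The `instruction` string goes into the Gem's persistent instructions (or an API system instruction), and the `task` travels with each individual message.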
Why does 'Long Context' mode ask for verbatim quotes?
Models with large context windows can hallucinate confidently when paraphrasing buried information. Forcing the model to quote the exact phrase it's drawing from turns the response into something you can fact-check in one Ctrl-F, instead of trusting a summary.
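The Ctrl-F check itself can be automated: any span the model claims to quote must appear as an exact substring of the source. A minimal sketch, assuming you have the quotes as a list of strings (`unverified_quotes` is an illustrative helper, not part of the generator):

```python
def unverified_quotes(quotes, source: str):
    """Return the quoted spans that do NOT appear verbatim in the source.

    Mirrors the manual Ctrl-F check: an exact-substring test,
    so even a casing change makes a quote fail. Illustrative only.
    """
    return [q for q in quotes if q not in source]

doc = "Revenue grew 14% year over year, driven by Cloud."
quotes = ["Revenue grew 14% year over year", "driven by cloud"]
bad = unverified_quotes(quotes, doc)  # the lowercase 'cloud' quote fails
```

Anything in `bad` is a paraphrase or hallucination dressed up as a quote and should be re-checked against the document.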
What's the difference between Deep Research mode and Deep Think reasoning?
Deep Research mode is about the task — multi-step investigation, cite sources, flag conflicts. Deep Think reasoning is about how the model thinks — longer chain-of-thought, plan-before-execute, explicit tradeoff analysis. They pair well: Deep Research + Deep Think is the most rigorous setting for high-stakes questions.
Does this generator send my inputs anywhere?
No. The generator runs entirely in your browser. Your task description, context, and preferences never leave your device — we don't log, store, or transmit them. You bring your own Gemini account.
Can I use the generated prompt in ChatGPT or Claude instead?
Yes — the structure (Role, Task, Method, Output Format, Quality Bar) is model-agnostic. You'll lose the Gemini-specific Deep Think and Workspace tuning, but the scaffolding still produces better outputs in other models than an unstructured prompt.