AI Prompt Generator
Free cross-model prompt builder. Works with GPT-5, Claude, Gemini, DeepSeek, Llama, Mistral, and Grok.
Quick, standard, and deep formats. Pick the right depth for the task.
Describe what you want
3 prompt variations
Click Copy to use

Task: [state the goal] Tone: Professional. Format: Plain prose. Return only the final output.
# ROLE
You are an expert helping a user with the task below.

# TASK
[state the goal]

# AUDIENCE & TONE
Audience: a smart general reader. Tone: Professional.

# OUTPUT
Format: Plain prose. Return only the deliverable. No preamble, no trailing offers.
# CONTEXT
You are being asked to produce a piece of work. Before writing, think about what makes output in this category excellent.

# TASK
[state the goal]

# AUDIENCE
A smart general reader.

# TONE
Professional

# CONSTRAINTS
No special constraints.

# OUTPUT FORMAT
Plain prose

# PROCESS
1. Restate the goal in one sentence to confirm you have it right.
2. Name the two or three traits that separate great output from mediocre output for this task.
3. Produce the output.
4. Re-read it. If anything falls short of the traits you named, revise.

# DELIVER
Return only the final output (step 3 or step 4, whichever is final). Drop the intermediate reasoning.
Under the hood
Why one prompt can work across models.
Modern LLMs converge on similar instruction-following behavior. A prompt that explicitly names role, task, tone, constraints, and format works across GPT, Claude, Gemini, and DeepSeek without rewriting.
Prompt depth is a knob. Quick for easy tasks, standard for most work, deep for polished output. Matching depth to task difficulty prevents both under-specified prompts (vague outputs) and over-engineered prompts (wasted tokens).
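As a sketch of that knob, depth selection is just a mapping from task difficulty to template. The template strings, difficulty scale, and helper name below are illustrative, not the tool's actual internals:

```python
# Hypothetical sketch: pick a prompt template depth by task difficulty.
# Template bodies are abbreviated stand-ins for the real quick/standard/deep prompts.
TEMPLATES = {
    "quick": "Task: {goal} Tone: {tone}. Format: {fmt}. Return only the final output.",
    "standard": "# ROLE\nYou are an expert helping with the task below.\n# TASK\n{goal}",
    "deep": "# CONTEXT\nName what makes output in this category excellent, then draft and revise.\n# TASK\n{goal}",
}

def pick_depth(difficulty: int) -> str:
    """Map a 1-5 difficulty rating to a template depth."""
    if difficulty <= 2:
        return "quick"      # simple, low-stakes tasks
    if difficulty <= 4:
        return "standard"   # most day-to-day work
    return "deep"           # polished, high-stakes output

# Unused keyword arguments are simply ignored by str.format,
# so one call site works for all three templates.
prompt = TEMPLATES[pick_depth(3)].format(
    goal="summarize this memo", tone="Professional", fmt="Plain prose"
)
```

The point of the lookup is the same as the prose above: the decision happens once, up front, instead of being re-litigated every time someone writes a prompt.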
A brief self-check loop lifts quality without adding much length. Naming the quality traits before drafting forces the model to internalize the bar instead of shipping the first plausible answer.
Related free tools
Specialized generators for specific tasks.
FAQ
Questions about cross-model AI prompting.
Why a generic AI prompt generator if you also have model-specific ones?
Model-specific generators (ChatGPT, Claude, Gemini) lean into one model's strengths (XML tags for Claude, tool calling for GPT). A generic generator produces prompts that work well everywhere. Start here if you do not know which model will run the prompt, or if you want to test the same prompt across multiple models to compare outputs.
What is the difference between Quick, Standard, and Deep?
Quick is a one-pass instruction, maybe 30 tokens. Use it for simple tasks. Standard wraps your task in role, audience, tone, and output format, usually 100 to 200 tokens. Use it for most work. Deep adds a reflect-and-revise loop where the model names what good looks like, drafts, re-reads, and revises. Use it for high-stakes output like marketing copy or executive briefs.
What should I put in the goal field?
Describe the outcome specifically. Bad: 'write about our product'. Good: 'draft a 200-word landing page hero section for a B2B scheduling tool targeting operations managers at mid-market SaaS companies'. The more specific the goal, the less the model has to guess, and the less time you spend rewriting.
Does the tone setting really matter?
Yes, especially for customer-facing content. Without an explicit tone, models default to a generic professional-helpful voice that sounds like every other AI output. Picking Friendly, Technical, or Persuasive nudges the model toward a voice that fits your audience. For internal tools, tone matters less.
When should I pick JSON or XML output format?
JSON if you are going to pipe the output into code or another prompt. XML if you want structured content you will read visually or pass to Claude for further processing. Markdown for human-readable documents. Plain prose for anything that ends up in a doc, email, or web page. The output format should match what happens next with the text.
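For instance, when the output feeds into code, pinning down a JSON shape in the prompt lets the next step parse the reply directly. A minimal sketch, with illustrative field names and a stand-in for the actual API call:

```python
import json

# Hypothetical sketch: a prompt that specifies a JSON shape, and the
# parsing step that consumes the model's reply.
prompt = (
    "Task: extract the action items from the meeting notes below.\n"
    'Return only JSON in this shape: {"items": [{"owner": "...", "task": "..."}]}'
)

# Stand-in for the model's reply; in practice this comes from an API call.
model_reply = '{"items": [{"owner": "Dana", "task": "send the draft"}]}'

data = json.loads(model_reply)  # fails loudly if the model drifted from JSON
for item in data["items"]:
    print(item["owner"], "->", item["task"])
```

The `json.loads` call is the payoff: a prose reply would need fragile string munging, while a malformed JSON reply raises immediately instead of silently corrupting the next step.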
Does targeting a specific model change the prompt?
Slightly. The generator adds a note like '(Tuned for Claude.)' so the model knows it is the intended audience, which occasionally helps with tool-use and format conformance. For bigger model-specific tuning use our ChatGPT, Claude, Gemini, or DeepSeek generators instead.
Why does the deep prompt ask the model to reflect?
Because without reflection the model commits to its first draft and delivers it. A brief self-check loop (name the quality traits, draft, re-read, revise) catches weak spots before they ship. It roughly doubles token count but lifts quality meaningfully for polished output.
Can I use these prompts in a team prompt library?
Yes. The output is plain text with no dependencies, so you can paste it into a Notion doc, a prompt library tool, or version control. Replace any placeholders with your team's real context and save as a reusable template.
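A minimal sketch of that reuse workflow, assuming Python's standard-library `string.Template` for the placeholders (the placeholder names and prompt text are illustrative):

```python
from string import Template

# Hypothetical sketch: a saved prompt template with $placeholders,
# filled in with the team's real context at use time.
saved = Template(
    "# TASK\nDraft a $length landing page hero for $product.\n"
    "# TONE\n$tone"
)

# safe_substitute leaves any unfilled placeholder visible in the output
# instead of raising, so a missed field is easy to spot in review.
prompt = saved.safe_substitute(
    length="200-word",
    product="a B2B scheduling tool",
    tone="Professional",
)
print(prompt)
```

Because the template is plain text, the same file can live in Notion, a prompt library tool, or version control unchanged.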