AI Prompt Optimiser.
Free tool to tighten, restructure, and rewrite an existing prompt. Three rewrite formats, model-agnostic.
Paste your current prompt. Pick what you want to improve. Get a sharper version in seconds.
Describe what you want
3 prompt variations
Click Copy to use

# ROLE
You are a senior prompt engineer reviewing a prompt for rewrite.

# ORIGINAL PROMPT
"""
[paste your current prompt]
"""

# OPTIMISATION TARGET
Make it more specific
Preserve rule: preserve all original instructions.

# TASK
1. Critique the original in 3 to 5 bullets. Focus on what blocks great output.
2. Rewrite the prompt to fix the issues while meeting the preserve rule.
3. Explain what changed and why in 2 to 3 bullets.

# OUTPUT
Three sections: "Critique", "Rewritten prompt", "Changes". Rewritten prompt must be in a fenced code block.
# TASK
Produce a diff-style rewrite of the prompt below.

# ORIGINAL
[paste your current prompt]

# OPTIMISATION GOAL
Make it more specific

# OUTPUT FORMAT
For each change, show:
- BEFORE: (the original fragment)
- AFTER: (the rewritten fragment)
- WHY: (one sentence)
End with a "Clean rewritten prompt" section showing the full rewrite. Preserve all original instructions.
# TASK
Rewrite the prompt below three ways: minimal fix, restructured, and deeply optimised.

# ORIGINAL
[paste your current prompt]

# OPTIMISATION GOAL
Make it more specific
Preserve all original instructions.

# OUTPUT
Produce three versions:
1. Minimal fix: the smallest change that addresses the primary issue.
2. Restructured: same intent, cleaner structure (sections, role, format).
3. Deep optimisation: add role, context, examples, verification, output format.
For each version, note the token cost and what it unlocks.
Under the hood
Why mediocre prompts are fixable.
Most underperforming prompts are too vague. 'Write a summary' becomes 'Write a 150-word summary for a CFO, highlight only metric changes, no methodology'. Specificity is the single biggest lever.
Old prompts accumulate instructions that no longer matter. Optimisation surfaces redundant rules so you can cut them. Every dropped sentence is a cost saving on every future call.
A wall of text pushes the model to guess. Sectioning the prompt (role, task, format, constraints) produces more reliable output and makes future edits easier. Structure beats verbosity.
Related free tools
Specialized generators for specific tasks.
FAQ
Questions about prompt optimisation.
What does the prompt optimiser do?
You paste your current prompt, pick what you want to improve (shorten, tighten, restructure, convert format), and the optimiser produces a meta-prompt you can run in any LLM to get a rewritten version. The rewrite is done by the LLM you run it in, so you keep control over the model and the cost.
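For readers who want to script this step rather than use the web form, the assembly of a meta-prompt is plain string templating. The sketch below mirrors the first template above (critique + rewrite); `build_meta_prompt` is a hypothetical helper name, not part of the tool, and you would still paste or send the result to an LLM yourself.

```python
def build_meta_prompt(original_prompt: str, goal: str) -> str:
    """Assemble a critique-and-rewrite meta-prompt from the user's
    prompt and chosen optimisation goal (mirrors the first template)."""
    return (
        "# ROLE\n"
        "You are a senior prompt engineer reviewing a prompt for rewrite.\n\n"
        "# ORIGINAL PROMPT\n"
        f'"""\n{original_prompt}\n"""\n\n'
        "# OPTIMISATION TARGET\n"
        f"{goal}\n"
        "Preserve rule: preserve all original instructions.\n\n"
        "# TASK\n"
        "1. Critique the original in 3 to 5 bullets.\n"
        "2. Rewrite the prompt to fix the issues while meeting the preserve rule.\n"
        "3. Explain what changed and why in 2 to 3 bullets.\n\n"
        "# OUTPUT\n"
        'Three sections: "Critique", "Rewritten prompt", "Changes". '
        "Rewritten prompt must be in a fenced code block."
    )

meta = build_meta_prompt("Write a summary", "Make it more specific")
# Paste `meta` into any LLM chat, or send it through your provider's API.
```

Because the rewrite happens in whichever model receives `meta`, the template itself is model-agnostic.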
Why not just ask ChatGPT to improve my prompt directly?
You can, but untargeted requests like 'improve this' produce bland rewrites. The optimiser formats a structured rewrite request with an explicit goal (specificity, length, format conversion), preservation rules, and output structure. That produces sharper, more useful rewrites than a generic 'fix this' ask.
Which rewrite format should I pick?
Critique + rewrite shows you what was wrong before showing the fix, useful for learning. Diff-style (before/after/why) is best when you want line-by-line clarity on what changed. Three variants (minimal, restructured, deep) lets you compare effort-to-quality tradeoffs and pick the level you actually need.
What does "preserve all original instructions" mean?
It constrains the rewrite. If your original prompt has quirky but important instructions ('always sign off with our tagline'), you do not want the optimiser to silently drop them. The preserve rule tells the LLM to keep every instruction unless it is clearly redundant. Use 'drop redundant requirements' when you suspect the original is bloated.
Does the optimiser work on prompts in other languages?
Yes. LLMs handle cross-language prompt optimisation fine. Paste a Spanish or Japanese prompt and the optimiser produces the meta-instruction in English (because that is its UI language), but the target LLM will return the rewritten prompt in whatever language you specify, or in the original language by default.
Should I run the optimiser on system prompts and long documents?
Yes, but be aware of token cost. Long system prompts are exactly where optimisation pays off the most (every prompt call pays the token cost). Run the optimiser, pick the tightest variant that preserves the original behavior, and test empirically before deploying.
How often should I re-optimise a prompt?
After any meaningful change in behavior (new task variation, new output format, new model). Model upgrades also shift what works. When GPT-4 came out, prompts optimised for GPT-3.5 suddenly needed rewrites. Re-optimise when output quality drops or when switching models.
Does this replace prompt engineers?
No, but it closes the gap. A good prompt engineer produces better rewrites than any tool. The optimiser gets you most of the way on straightforward prompts, so a prompt engineer can focus on the genuinely hard cases (agent loops, multi-turn behavior, edge case handling).