Midjourney Prompt Generator.
Free v6 prompt builder. Pick subject, style, lighting, lens, and aspect ratio — get 3 copy-ready prompts with proper parameter flags.
Outputs follow Midjourney v6.1 and niji 6 conventions. Paste straight into Discord after /imagine.
Describe what you want
3 prompt variations
Click Copy to use.
A detailed character portrait of [describe your subject], Cinematic style, golden hour --ar 3:2 --v 6.1
A detailed character portrait of [describe your subject], Cinematic style, golden hour, ultra-detailed, depth of field, 8k, award-winning photography, trending on ArtStation, volumetric atmosphere, crisp subject focus --ar 3:2 --v 6.1 --s 250
A detailed character portrait of [describe your subject], Cinematic style, golden hour, sharp subject, negative space composition, editorial tone, minimal distraction, clean background, strong silhouette, subtle grain, color-graded --ar 3:2 --v 6.1 --style raw --stylize 100
Under the hood
Why scene-first beats adjective chains.
State what the image is as a scene, not as a mood. 'A detective smoking under a neon awning at 2am' produces sharper output than 'moody noir vibes.' Verbs and nouns anchor the model.
Pair a style (cinematic, oil painting, anime) with a lens or medium (85mm f/1.4, 35mm film Portra 400, medium format). The camera phrase forces the model into photographic optics and composition.
--ar controls the canvas. --v / --niji selects the model family. --stylize controls aesthetic strength (lower = literal, higher = painterly). --style raw keeps the model closer to your prompt for commercial work.
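The flag logic above can be sketched in a few lines. This is a hypothetical illustration of how such a builder might assemble a prompt, not the generator's actual code; the function name, parameters, and defaults are my own.

```python
# Illustrative sketch: join descriptive phrases, then append v6 parameter flags.
def build_prompt(subject, style, lighting, ar="3:2", version="6.1",
                 stylize=None, raw=False):
    """Assemble a Midjourney-style prompt string from parts."""
    body = ", ".join(filter(None, [subject, style, lighting]))
    flags = [f"--ar {ar}", f"--v {version}"]
    if raw:
        flags.append("--style raw")       # keep the model closer to the text
    if stylize is not None:
        flags.append(f"--stylize {stylize}")
    return f"{body} {' '.join(flags)}"

print(build_prompt("a detective smoking under a neon awning at 2am",
                   "cinematic style", "golden hour", stylize=250))
```

Running this prints a single Discord-ready line ending in `--ar 3:2 --v 6.1 --stylize 250`, matching the scene-first, flags-last shape the variants above use.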
Related free tools
Specialized generators for specific tasks.
Stable Diffusion Prompt Generator
Positive + negative prompt, CFG, sampler hints for SDXL and SD 3.
Flux Prompt Generator
Black Forest Labs Flux — prompt-adherence weighted output.
AI Art Prompt Generator
Model-agnostic art prompts across medium, era, and style.
Design Prompt Generator
Midjourney, DALL-E, Stable Diffusion, Figma, Runway.
FAQ
Questions about Midjourney prompting.
Is this Midjourney prompt generator free?
Yes. No login, no rate limit, no paywall. You can generate as many prompts as you want. The generator runs entirely in your browser — nothing is sent to a server.
Does it produce correct Midjourney v6 syntax?
Yes. Each prompt ends with proper parameters — --ar for aspect ratio, --v 6.1 or --niji 6 for version, and optional --stylize or --style raw flags. Paste the output directly into Discord after /imagine.
What's the difference between the 3 variants?
Variant 1 is a clean baseline — subject, style, lighting, camera. Variant 2 adds cinematic density, higher stylize, and depth-of-field language for richer output. Variant 3 uses --style raw and --stylize 100 for editorial, commercial, or product work where you want the model to stay closer to your description.
Why does the generator tell me to be concrete?
Midjourney rewards specific nouns and verbs. 'A lone astronaut on a cracked ice plain' produces tighter output than 'a sad person on ice.' If you find yourself writing adjective chains, rewrite as a scene a director would describe to a set designer.
Should I use --v 6.1 or niji 6?
Use --v 6.1 for photorealism, cinematic shots, product, and architecture. Use --niji 6 for anime, manga, and stylized character art. Niji is trained on illustrated and animated references, so it handles line art and cel shading far better than the base model.
What aspect ratio should I pick?
3:2 for photography and landscapes. 16:9 for cinematic and wallpapers. 4:5 for social feeds. 9:16 for TikTok, Reels, and Stories. 1:1 for logos, avatars, and album art. 21:9 for ultra-wide establishing shots.
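The recommendations above fit a simple lookup. A minimal sketch, assuming hypothetical use-case labels of my own choosing (the generator does not expose these names):

```python
# Illustrative mapping of the aspect-ratio guidance above.
ASPECT_RATIOS = {
    "photography": "3:2",
    "landscape": "3:2",
    "cinematic": "16:9",
    "wallpaper": "16:9",
    "social feed": "4:5",
    "vertical video": "9:16",  # TikTok, Reels, Stories
    "logo": "1:1",
    "avatar": "1:1",
    "ultra-wide": "21:9",      # establishing shots
}

def ar_flag(use_case: str) -> str:
    """Return the --ar flag for a use case, defaulting to square."""
    return f"--ar {ASPECT_RATIOS.get(use_case, '1:1')}"

print(ar_flag("cinematic"))  # --ar 16:9
```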
Do I need to add a negative prompt?
Midjourney v6 accepts --no to exclude things (e.g. --no text, logos, watermark). The generator doesn't auto-add this because over-using --no can degrade output. Add it manually if you keep seeing unwanted elements.
Can I use these prompts outside Midjourney?
The descriptive body (before the --flags) transfers well to Stable Diffusion, Flux, and other image models. Just strip the Midjourney-specific parameters. For the best results on each model, use our dedicated Stable Diffusion and Flux generators.
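Stripping the parameters is a one-line transform. A hedged sketch, assuming flags always follow the descriptive body (as in this generator's output) and that no `--` sequence appears inside the description itself; the helper name is my own:

```python
import re

def strip_mj_flags(prompt: str) -> str:
    """Drop everything from the first Midjourney flag (--ar, --v, etc.) onward."""
    return re.sub(r"\s--\S+.*$", "", prompt).strip()

print(strip_mj_flags(
    "portrait of an astronaut, golden hour --ar 3:2 --v 6.1 --style raw"
))
# portrait of an astronaut, golden hour
```

The remaining text is the model-agnostic description you can paste into Stable Diffusion or Flux.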