Runway Prompt Generator
Free builder for Runway Gen-3 Alpha. Text-to-video or image-to-video, with subject, environment, motion, and style tuned for Runway's conventions.
Three formats: clean paragraph, section-tagged, and image-to-video. All Gen-3 Alpha-ready.
Describe what you want
3 prompt variations
Click Copy to use
[describe your subject] in a city street at night. Camera pushes in. Cinematic live action. 6-second clip.
SUBJECT: [describe your subject]
ENVIRONMENT: City street at night
MOTION: Camera pushes in
STYLE: Cinematic live action
DURATION: 6 seconds
NOTES: smooth motion only, no whip cuts. Subject identity consistent from first to last frame.
# IMAGE-TO-VIDEO PROMPT (Gen-3 Alpha)
Starting image: [describe your subject] in a city street at night.
# MOTION INSTRUCTION
Camera pushes in. Smooth, continuous motion across the full 6 seconds.
# STYLE
Cinematic live action. Consistent lighting. No cuts.
Under the hood
Why Runway rewards motion separation.
Subject identity drifts fastest in Gen-3 Alpha when the description is vague. Lock it down with a distinctive visual anchor — color, garment, pose — that the model can track frame-to-frame.
Gen-3 treats motion as a separate control. Stating motion explicitly ('subject walks toward camera') produces animated output; implied motion ('a person in a scene') often produces a still image that merely pans or zooms.
Environment is your lighting and palette anchor. A tight environment clause (rainy city alley, empty beach at dusk) ties the model to a visual genre and keeps backgrounds from morphing.
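To make the field separation concrete, here is a minimal sketch of a builder that assembles the formats shown above. The `PromptFields` class and function names are hypothetical illustrations of the structure, not a Runway API.

```python
from dataclasses import dataclass

@dataclass
class PromptFields:
    subject: str       # distinctive visual anchor: color, garment, pose
    environment: str   # lighting and palette anchor
    motion: str        # explicit motion instruction
    style: str
    duration_s: int = 6

def paragraph_prompt(f: PromptFields) -> str:
    """Clean-paragraph format: one short clause per concern."""
    return (f"{f.subject} in {f.environment}. {f.motion}. "
            f"{f.style}. {f.duration_s}-second clip.")

def tagged_prompt(f: PromptFields) -> str:
    """Section-tagged format: motion kept on its own control line."""
    return "\n".join([
        f"SUBJECT: {f.subject}",
        f"ENVIRONMENT: {f.environment}",
        f"MOTION: {f.motion}",
        f"STYLE: {f.style}",
        f"DURATION: {f.duration_s} seconds",
        "NOTES: smooth motion only, no whip cuts. "
        "Subject identity consistent from first to last frame.",
    ])

fields = PromptFields(
    subject="a woman in a red trench coat",
    environment="a city street at night",
    motion="Camera pushes in",
    style="Cinematic live action",
)
print(tagged_prompt(fields))
```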
Related free tools
Specialized generators for specific tasks.
Sora Prompt Generator
OpenAI Sora with shot-list and director formats.
Veo 3 Prompt Generator
Google Veo 3 with native audio and dialogue fields.
AI Video Prompt Generator
Cross-model video prompts for any text-to-video tool.
Stable Diffusion Prompt Generator
Build the still first, then convert to video via image-to-video.
FAQ
Questions about Runway prompting.
Does this generate Runway Gen-3 Alpha prompts?
Yes. The output follows Gen-3 Alpha conventions — tight subject clause, explicit motion instruction, environment anchor, style tag, and duration. The image-to-video variant is structured for Runway's image-input workflow.
What's the difference between text-to-video and image-to-video prompts?
Text-to-video generates from scratch — the prompt has to describe the full scene, subject, and motion. Image-to-video starts from a still you upload, so the prompt focuses purely on motion and what should change across frames. The two require different phrasing.
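As a rough sketch of the phrasing difference (the variable names are illustrative, not any tool's API):

```python
# The same scene phrased for each mode; wording is illustrative only.
scene = "a woman in a red trench coat in a rainy city alley"

# Text-to-video: the prompt must carry the full scene AND the motion.
t2v_prompt = (
    f"{scene}. Subject walks toward camera. "
    "Cinematic live action. 6-second clip."
)

# Image-to-video: the uploaded still already fixes subject and scene,
# so the prompt describes only motion and what changes across frames.
i2v_prompt = (
    "Subject walks toward camera. Camera holds steady. "
    "Smooth, continuous motion. No cuts."
)
```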
How long should a Runway clip be?
Gen-3 Alpha supports 5-10 seconds per generation. The sweet spot is 6-8 seconds: long enough for motion to develop, short enough to avoid temporal drift. For longer sequences, generate multiple clips and edit them together.
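The arithmetic for longer pieces is simple. Here is a hypothetical planner (not part of the generator) that splits a target runtime into clip lengths near the sweet spot:

```python
import math

def plan_clips(total_seconds: int, target_clip: int = 7) -> list[int]:
    """Split a runtime into clip lengths near the 6-8 second sweet spot."""
    count = math.ceil(total_seconds / target_clip)
    base, rem = divmod(total_seconds, count)
    # Spread the remainder one second at a time across the first clips.
    return [base + (1 if i < rem else 0) for i in range(count)]

print(plan_clips(30))  # -> [6, 6, 6, 6, 6]
print(plan_clips(20))  # -> [7, 7, 6]
```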
Why does the generator separate motion from scene?
Runway's model treats motion as a separate control signal. Mixing 'a woman walking through a rainy alley' into one long clause often produces a static woman in a rainy alley. Separating it — 'Subject: woman. Motion: walks toward camera.' — forces the model to animate.
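In string form (hypothetical phrasing, same idea as the answer above):

```python
# Fused clause: the motion is buried inside the subject description,
# so Gen-3 Alpha often renders a static frame.
fused = "a woman walking through a rainy alley"

# Separated: the motion is promoted to its own control line.
separated = "\n".join([
    "SUBJECT: a woman in a rainy alley",
    "MOTION: walks toward camera",
])
```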
How do I prevent subject morphing across frames?
Keep the subject description tight and distinctive. 'A woman in a red trench coat under a neon umbrella' holds up across frames better than 'a stylish person outside.' For critical identity preservation, use image-to-video with a reference still.
Can I add camera motion and subject motion in the same prompt?
Yes, but be explicit. 'Subject walks toward camera while camera pulls back' gives a clear dolly-zoom effect. Ambiguous prompts like 'moving through the scene' produce inconsistent results. Pick one camera motion and one subject motion, and state both.
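A small sketch of the pattern (variable names are illustrative):

```python
# One subject motion plus one camera motion, both stated explicitly.
subject_motion = "Subject walks toward camera"
camera_motion = "camera pulls back"  # combined: a dolly-zoom feel

motion_line = f"MOTION: {subject_motion} while {camera_motion}."
# Avoid vague phrasing such as "moving through the scene".
```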
Does the output work in Luma, Pika, Sora, or Veo?
The core scene and motion description transfers well. Luma Dream Machine and Pika accept a similar structure. For Sora and Veo 3, use our dedicated generators; they have model-specific conventions (duration format, audio fields) that matter.
Why the constraint on 'no whip cuts'?
Gen-3 Alpha occasionally interprets motion prompts as multi-shot sequences. Explicitly saying 'no cuts, continuous motion' in the notes section prevents the model from producing a two-shot edit when you wanted a single continuous take.
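As a sketch, the constraint is just one extra line on the notes; the helper below is hypothetical, not part of the generator:

```python
def with_continuity_note(prompt: str) -> str:
    # An explicit continuity constraint keeps Gen-3 Alpha from
    # splitting the motion into a two-shot edit.
    return prompt + "\nNOTES: no cuts, continuous motion, single take."
```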