Luma’s Dream Machine democratized AI video with one simple insight: camera motion is as important as content. But when you need that motion to be precise—to track a subject, to loop seamlessly, to extend a perfect moment—you need more than text. You need JSON.
I remember the first time I tried to create a perfectly looping 4K product showcase with the Dream Machine. I spent hours typing variations of "slowly orbit a glass bottle." Sometimes it worked; most times, the camera just jerked around like it was held by a caffeinated squirrel. I felt that familiar frustration—knowing exactly what I wanted but being unable to "speak" the model's language.
The breakthrough came when I looked under the hood at the luma json prompt structure. By switching from text prompts to structured API calls, I stopped "prompting and praying" and started directing. JSON is the secret language of the Ray 2 engine, allowing you to lock in keyframes, define camera choreography, and ensure your characters remain consistent across every frame.
What are Concepts? In Luma's JSON ecosystem, Concepts are pre-defined motion and style keywords (like handheld, dolly_zoom, or cinematic) that the model has been explicitly trained to recognize. Instead of describing a "shaky camera" in prose and hoping the model interprets it correctly, you pass the handheld concept ID in your JSON for far more reliable results.
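To make the idea concrete, here is a minimal sketch of attaching Concepts as structured IDs rather than prose. The payload shape mirrors the Ray 2 example later in this article; the `build_payload` helper is illustrative, not part of any official SDK.

```python
# Hypothetical helper: attach trained concept IDs (e.g. "handheld")
# to a prompt instead of describing the motion in free text.
def build_payload(prompt: str, concept_keys: list[str]) -> dict:
    """Return a request body with structured concept IDs."""
    return {
        "model": "ray-2",
        "prompt": prompt,
        # Each concept is an object with a "key", matching the
        # JSON example shown later in this article.
        "concepts": [{"key": k} for k in concept_keys],
    }

payload = build_payload("A chef plating a dessert", ["handheld"])
print(payload["concepts"])  # [{'key': 'handheld'}]
```

The point is that `"handheld"` is a fixed vocabulary item, not a phrase the model has to interpret.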
Section 1: Luma's Architecture: Camera-Centric Design
Luma AI treats the camera as the primary creative input. While other models focus on describing the pixels, Luma focuses on describing the lens.
The evolution from the original Dream Machine to Ray 2 (and the ultra-fast Ray 2 Flash) has expanded the JSON schema significantly. We now have native 4K resolution, 9-second durations, and—most importantly—mathematical keyframe control.
Luma Evolution Comparison
| Feature | Dream Machine (Legacy) | Ray 2 / Ray 2 Flash |
|---|---|---|
| Max Duration | 5 Seconds | 9 Seconds |
| Max Resolution | 720p | 4K (Native) |
| Keyframe Support | None (Text only) | Dual Keyframes (Start & End) |
| Looping | Manual stitching | Native loop: true parameter |
| Consistency | Visual guesswork | concepts & Character Reference |
Section 2: Ray 2 JSON Deep Dive
To truly master the luma ray 2 prompt format, you have to think in sequences.
The Anatomy of a Pro-Level Luma JSON
- keyframes: Allows you to define frame0 (the start) and frame1 (the end).
- loop: A simple boolean that handles the complex math of end-to-start frame blending.
- concepts: An array of technical IDs that override the AI's general "creativity" with specific directorial intent.
```json
{
  "model": "ray-2",
  "prompt": "A futuristic sports car speeding through a neon tunnel, cinematic lighting.",
  "resolution": "4k",
  "duration": "9s",
  "keyframes": {
    "frame0": { "type": "image", "url": "https://cdn.yoursite.com/car_start.jpg" },
    "frame1": { "type": "image", "url": "https://cdn.yoursite.com/car_end.jpg" }
  },
  "loop": true,
  "concepts": [ { "key": "push_in" }, { "key": "low_angle" } ]
}
```
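If you prefer assembling this payload in code rather than by hand, here is a hedged sketch. The payload shape follows the JSON above; the endpoint URL and auth header are placeholders for illustration, not Luma's documented API.

```python
import json
import urllib.request

def ray2_payload(prompt, start_url, end_url, loop=True, concepts=()):
    """Assemble a Ray 2 request with dual keyframes (shape from the
    example above; field names assumed, not an official schema)."""
    return {
        "model": "ray-2",
        "prompt": prompt,
        "resolution": "4k",
        "duration": "9s",
        "keyframes": {
            "frame0": {"type": "image", "url": start_url},
            "frame1": {"type": "image", "url": end_url},
        },
        "loop": loop,
        "concepts": [{"key": k} for k in concepts],
    }

def submit(payload, api_key, endpoint="https://api.example.com/generations"):
    """POST the payload. Endpoint and Bearer auth are illustrative
    placeholders; substitute your real credentials and URL."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```

Building the payload in one place also makes it easy to validate keyframe URLs and concept keys before you spend a generation credit.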
Instead of hunting for the correct URL syntax, use our JSON Prompt Generator PWA. It provides a visual dashboard where you "click to direct" and export validated JSON instantly.
Section 3: Beginner Workflows – From Text to JSON
If you are just starting your journey with luma video generation json, the easiest way to begin is with Image-to-Video (I2V).
Step-by-Step Selection
- Select your Hero Image: This becomes your frame0.
- Describe the Action: Instead of "a cat," try "a fluffy cat jumping over a fence."
- Choose your Resolution: 1080p for social media; 4K for professional work.
Luma requires specific ratios. If your keyframe image is 16:9 but your JSON says 9:16, your video will be distorted. Our PWA tool features an Automatic Aspect Ratio Detector to prevent this.
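The aspect-ratio check described above is simple enough to sketch yourself: reduce the image's pixel dimensions to a ratio and compare it against what the JSON declares. The ratio labels here are common presets, not an official Luma list.

```python
from math import gcd

def detect_ratio(width: int, height: int) -> str:
    """Reduce pixel dimensions to their simplest ratio string."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

def ratios_match(width: int, height: int, declared: str) -> bool:
    """True if the keyframe's actual ratio matches the declared one."""
    return detect_ratio(width, height) == declared

print(detect_ratio(1920, 1080))          # 16:9
print(ratios_match(1920, 1080, "9:16"))  # False: this combo would distort
```

Running this check before submission catches the 16:9-image-in-a-9:16-request mistake at zero cost.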
Section 4: Advanced Techniques – Chaining & Choreography
The real "pro" move in Luma is Generation Chaining. Use a previous generation ID as a keyframe to build infinite narratives.
How to Chain
To create a 27-second story, generate three 9-second clips where the frame0 of clip #2
is the id of clip #1.
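The chaining loop can be sketched as follows. `generate` is a stand-in for the real API call (here it just fabricates IDs so the wiring is visible), and the `{"type": "generation", "id": ...}` keyframe shape is an assumption modeled on the image keyframe format shown earlier.

```python
def generate(payload: dict) -> str:
    """Placeholder for an API call; returns a fake generation ID."""
    return f"gen-{abs(hash(payload['prompt'])) % 10000}"

def chain(prompts: list[str]) -> list[str]:
    """Build a multi-clip story by feeding each clip's ID into the
    next clip's frame0."""
    ids: list[str] = []
    prev_id = None
    for prompt in prompts:
        payload = {"model": "ray-2", "prompt": prompt, "duration": "9s"}
        if prev_id is not None:
            # Reuse the previous generation as the starting keyframe
            # (keyframe shape assumed for illustration).
            payload["keyframes"] = {
                "frame0": {"type": "generation", "id": prev_id}
            }
        prev_id = generate(payload)
        ids.append(prev_id)
    return ids

# Three 9-second clips chained into one continuous 27-second story.
ids = chain(["dawn over the city", "traffic builds", "sunset rush"])
```

Because each clip starts exactly where the last one ended, the cuts are invisible when the clips are concatenated.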
Camera Motion Reference
| Motion Key | JSON Concept ID | Visual Effect |
|---|---|---|
| Orbit | orbit_left / orbit_right | Circular movement around the subject |
| Crane | pedestal_up / pedestal_down | Vertical camera lift |
| Dolly | push_in / pull_out | Moving the camera toward or away from the subject |
| Pan | pan_left / pan_right | Rotating the camera horizontally |
Section 5: PWA Tool Integration for Luma Workflows
Our JSON Prompt Generator PWA acts as your digital camera crew. Manual JSON construction takes roughly 15 minutes; with the tool, it's done in less than 60 seconds.
- Visual Keyframe Manager: Drag and drop frames to see a "ghost" preview of the movement.
- Camera Motion Builder: Select multiple motions (like Orbit + Crane) and let the tool write the complex JSON.
- Loop Validator: Ensures your loop: true parameter is correctly applied.
- Batch Export: Generate sequences for multiple aspect ratios in one click.
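The Batch Export idea above is straightforward to approximate in code: stamp one payload per target aspect ratio from a shared base. The `aspect_ratio` field name is an assumption for illustration.

```python
# Shared base request; aspect_ratio field name is hypothetical.
BASE = {"model": "ray-2", "prompt": "glass bottle orbit", "loop": True}

def batch(ratios: list[str]) -> list[dict]:
    """Produce one payload per aspect ratio from a shared base."""
    return [{**BASE, "aspect_ratio": r} for r in ratios]

jobs = batch(["16:9", "9:16", "1:1"])
print(len(jobs))  # 3
```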
Ready to direct your next masterpiece?
Try the Luma-Ready JSON Generator in our PWA, 100% free while we're still in beta.
Try it Now →

Conclusion
Luma’s strength is cinematic camera motion, and JSON is the remote control. As we move from simple prompt-based AI into professional, directed AI video, mastering the luma json prompt structure is the baseline for high-quality production.