Luma Modify Video
Transform videos seamlessly with high-fidelity generative edits while preserving original actor performances.
API
If you're looking to use the API, you can pick a code sample in your preferred programming language.
import requests

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/modify-video"

# Prepare data and files
data = {}
files = {}

data['mode'] = "adhere_1"
data['prompt'] = "woman in a yarn doll style"

# For parameter "video_url", you can send a raw file or a URI:
# files['video_url'] = open('VIDEO_PATH', 'rb')  # To send a file
data['video_url'] = 'https://segmind-resources.s3.amazonaws.com/input/c08771b9-b671-4c12-9ea7-af4048b9d194-894a8bdf-6064-40ea-a78d-06c1abff262b.mp4'  # To send a URI

# For parameter "first_frame_url", you can send a raw file or a URI:
# files['first_frame_url'] = open('IMAGE_PATH', 'rb')  # To send a file
data['first_frame_url'] = 'https://segmind-resources.s3.amazonaws.com/input/f2220449-e53a-40d5-aba7-e12c6f562ab5-modify-video-ip.png'  # To send a URI

headers = {'x-api-key': api_key}

# If sending files, use multipart form data; otherwise send JSON
if files:
    response = requests.post(url, data=data, files=files, headers=headers)
else:
    response = requests.post(url, json=data, headers=headers)

print(response.content)  # The response is the generated video
Attributes
mode
How closely the output should follow the source video. Adhere: very close, for subtle enhancements. Flex: allows more stylistic change while keeping recognizable elements. Reimagine: loosely follows the source, for dramatic or transformative changes.
Allowed values: adhere_1, adhere_2, adhere_3, flex_1, flex_2, flex_3, reimagine_1, reimagine_2, reimagine_3
prompt
Guides video modification.
video_url
The source video URL. Use short MP4s under 30 seconds. Maximum video size is 100 MB.
first_frame_url
An optional URL of the first frame of the video. This should be a modified version of the original first frame; it will be used to guide the video modification.
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
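A small sketch of how that header could be read alongside the Python example above; the `remaining_credits` helper and its `None` fallback are our own additions, not part of the API.

```python
# Hypothetical helper: pull the x-remaining-credits header (named in the docs
# above) out of a response-headers mapping; returns None if it is absent.
def remaining_credits(headers):
    value = headers.get("x-remaining-credits")
    return int(value) if value is not None else None

# With a real call you would pass response.headers; a mocked dict works too:
print(remaining_credits({"x-remaining-credits": "412"}))  # 412
print(remaining_credits({}))  # None
```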
Resources to get you started
Everything you need to know to get the most out of Luma Modify Video
Guide to Using the Modify Video Model Effectively
Modify Video by Luma AI empowers creators with high-fidelity generative edits—no reshoots required. This guide walks you through selecting the right parameters for common tasks, crafting prompts, and leveraging first-frame guidance to unlock the model’s full potential.
1. Core Parameters
- video_url (required): Short MP4 under 30 s.
- mode (optional): Controls edit strength.
  - adhere_1/2/3: Faithful, subtle edits
  - flex_1/2/3: Balanced transformations
  - reimagine_1/2/3: Creative, bold overhauls
- prompt (optional): Natural-language instruction (e.g., “make it look like film noir”).
- first_frame_url (optional): Stylized reference image to anchor the overall look.
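The core parameters above can be assembled into a request body; a minimal sketch, where the `build_payload` helper is hypothetical but the parameter names match the code sample earlier on this page:

```python
# Hypothetical helper that assembles the JSON body for /v1/modify-video.
# Parameter names (video_url, prompt, mode, first_frame_url) come from the
# code sample above; the defaults and structure are our own choices.
def build_payload(video_url, prompt, mode="adhere_1", first_frame_url=None):
    payload = {"video_url": video_url, "prompt": prompt, "mode": mode}
    if first_frame_url:  # only include the optional frame when supplied
        payload["first_frame_url"] = first_frame_url
    return payload

payload = build_payload(
    "https://example.com/clip.mp4",  # placeholder URL
    "make it look like film noir",
    mode="flex_2",
)
```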
2. Parameter Sets by Use Case
Scene Replacement
- mode: adhere_2 (preserves actor motion)
- prompt: “swap background to a neon cityscape”
- first_frame_url: A clean plate of the target environment

Creative Restyling
- mode: reimagine_1 or reimagine_2
- prompt: “vintage 1970s film grain and warm tones”
- first_frame_url: High-contrast retro photo

Lip-Sync Corrections
- mode: adhere_1
- prompt: “correct lip movements to match audio”
- (No first_frame_url needed unless also restyling)

Color Grading & Lighting
- mode: flex_2
- prompt: “cinematic teal-orange grade with soft contrast”
- first_frame_url: Frame with ideal color palette
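The use-case parameter sets above can be kept as reusable presets; a sketch in which the preset names are our own labels, while the mode and prompt values are taken from the guide:

```python
# Use-case presets from the guide above; the dict keys are our own labels.
PRESETS = {
    "scene_replacement": {"mode": "adhere_2",
                          "prompt": "swap background to a neon cityscape"},
    "creative_restyling": {"mode": "reimagine_1",
                           "prompt": "vintage 1970s film grain and warm tones"},
    "lip_sync": {"mode": "adhere_1",
                 "prompt": "correct lip movements to match audio"},
    "color_grading": {"mode": "flex_2",
                      "prompt": "cinematic teal-orange grade with soft contrast"},
}

# Merge a preset into a request body for a given clip (placeholder URL):
body = {"video_url": "https://example.com/clip.mp4", **PRESETS["color_grading"]}
```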
3. Crafting Effective Prompts
- Be Specific: “add neon lights” vs. “brighten scene”
- Combine Attributes: “cinematic lighting, high contrast, film grain”
- Iterate: Tweak adjectives (“soft”, “warm”, “dramatic”) to refine mood.
4. Leveraging First-Frame Guidance
Supplying a custom first frame anchors the model’s style across the clip. Use a reference image that exemplifies:
- Target color palette
- Texture (grain, brushstrokes)
- Lighting quality
Example:
"first_frame_url": "https://.../vintage_frame.png"
5. Iterating Variants
Run multiple passes with different modes to compare results:
- adhere_3 vs. flex_1 for a subtle-vs.-moderate side-by-side
- flex_3 vs. reimagine_1 to push creative limits
- Keep the prompt constant to isolate the impact of mode
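One way to sketch that comparison: build one request body per mode while holding the prompt fixed. The `variant_bodies` helper is hypothetical.

```python
# Hypothetical helper: one request body per mode, same video and prompt,
# so any difference in output is attributable to the mode alone.
def variant_bodies(video_url, prompt, modes):
    return [{"video_url": video_url, "prompt": prompt, "mode": m} for m in modes]

batch = variant_bodies(
    "https://example.com/clip.mp4",  # placeholder URL
    "cinematic teal-orange grade with soft contrast",
    ["adhere_3", "flex_1"],  # subtle vs. moderate, as suggested above
)
# Each body in `batch` can then be POSTed to /v1/modify-video as shown earlier.
```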
6. Best Practices
- Keep clips under 30 s for faster turnaround.
- Start with the default adhere_1 for safety, then push toward flex/reimagine.
- Monitor temporal consistency: use shorter test clips before full renders.
- Save parameter presets for repeatable workflows.
By following these guidelines, you can tailor Modify Video to any editing scenario—whether you need precise continuity or an artistic revolution. Happy editing!
Other Popular Models
Discover other models you might be interested in.
sdxl-img2img
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image using StableDiffusionImg2ImgPipeline from diffusers.

faceswap-v2
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training

sdxl1.0-txt2img
The SDXL model is the official upgrade to the v1.5 model and is released as open-source software.

sd1.5-majicmix
A versatile photorealistic model that blends multiple models to produce strikingly realistic images.
