API
If you're looking for an API, you can choose a code example in your preferred programming language.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to convert a list of image URLs to base64
def image_urls_to_base64(image_urls):
    return [image_url_to_base64(url) for url in image_urls]

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/seedance-v1-lite-image-to-video"

# Request payload
data = {
    "image_url": "https://segmind-resources.s3.amazonaws.com/input/73a8dadb-95e6-4bdc-a3ae-a4cd041b99a0-b572c379-3eb8-4c67-893c-b08662f9d11f.jpeg",
    "duration": 5,
    "prompt": "Generate a serene forest scene at dawn with birds chirping, sunlight filtering through trees, and a gentle mist.",
    "resolution": "720p",
    "seed": 12345
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated video
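If you want to persist the result, the snippet below is a minimal sketch that writes the response body to a local file. It assumes the endpoint returns the raw video bytes directly in the response body (as the comment above suggests); the output filename is an illustrative choice, not something the API prescribes.

# Minimal sketch: save the generated video to disk.
# Assumes the response body contains the raw video bytes (not JSON).
if response.status_code == 200:
    with open("output.mp4", "wb") as f:  # hypothetical output filename
        f.write(response.content)
else:
    print("Request failed:", response.status_code, response.text)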
Attributes

image_url (required)
Provide a URL to the image for relighting. Use S3 URLs for consistent results.

duration (required)
Set video length in seconds. Opt for 5 seconds for quick previews, 10 for detailed scenes.
Allowed values: 5, 10

prompt (required)
Detail the animation scene vividly. E.g., A sunset beach scene with waves lapping, people playing, sky turning orange.

resolution
Choose video clarity; 720p for most uses, 480p for faster processing.
Allowed values: 480p, 720p

seed
Define a seed for consistent outputs. Any number 1-999999 suffices.
min: 1, max: 999999
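As a convenience, here is a small sketch of client-side validation against the allowed values listed above. The helper name and error handling are illustrative assumptions, not part of any Segmind SDK; the API performs its own validation server-side.

# Illustrative client-side check against the attribute constraints above
# (hypothetical helper; values are taken from the Attributes section).
def validate_payload(data):
    assert data["prompt"] and data["image_url"], "prompt and image_url are required"
    assert data["duration"] in (5, 10), "duration must be 5 or 10 seconds"
    assert data["resolution"] in ("480p", "720p"), "resolution must be 480p or 720p"
    assert 1 <= data.get("seed", 1) <= 999999, "seed must be between 1 and 999999"

validate_payload(data)  # raises AssertionError for an invalid request payload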
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
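For example, you can read the header from the same response object used in the code above; this is a minimal sketch assuming the requests library shown earlier.

# Read the remaining credit balance from the response headers.
remaining = response.headers.get("x-remaining-credits")
print("Remaining credits:", remaining)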
Seedance 1.0 – Image-to-Video Model
What is Seedance 1.0?
Seedance 1.0 is an advanced generative AI model built to convert text and image inputs into rich, dynamic videos. Designed for developers, creators, and product managers, Seedance delivers smooth, stable motion and cinematic detail at up to 720p resolution. With native multi-shot storytelling, it maintains visual and thematic consistency across multiple coherent camera angles. Whether you’re orchestrating multi-agent action sequences or crafting a single panoramic sweep, Seedance interprets complex natural language prompts and diverse artistic styles—from photorealism to cyberpunk illustration.
Key Features
- Native multi-shot storytelling: Automatically sequences multiple camera angles into a coherent narrative.
- Prompt fidelity: Accurately follows detailed instructions for scene composition, character interactions, and camera movements.
- Style versatility: Switch between photorealism, stylized illustration, cyberpunk, and other artistic directions.
- Input flexibility: Support for text descriptions and a source image URL (must be an S3 URI).
- Configurable parameters:
• prompt (required): Vivid scene details and action cues.
• image_url (required): S3 link for reference or relighting.
• duration (required): 5 or 10 seconds.
• resolution (advanced): 480p or 720p.
• seed (optional): Integer for reproducible outputs.
Best Use Cases
- Marketing and social media videos: Craft eye-catching ads with cinematic flair.
- Storyboarding and prototyping: Generate quick scene mockups for client reviews.
- Game cinematics and cutscenes: Produce dynamic action sequences with camera sweeps.
- Virtual events and presentations: Animate product demos, tutorials, or explainer clips.
- Creative concept art: Explore mood boards in motion, from dawn-lit forests to neon-lit cityscapes.
Prompt Tips and Output Quality
- Be specific: Describe setting, time of day, and key actions (“sunset beach with rolling waves, close-up on surfers”).
- Define camera moves: Use terms like “pan,” “dolly,” or “track” to guide multi-shot sequences.
- Mention style and lighting: “Cyberpunk street scene, neon reflections on wet asphalt.”
- Choose duration wisely: 5 seconds for quick previews, 10 seconds for detailed storytelling.
- Use a seed value for consistency across multiple runs.
By combining clear scene descriptions with parameter tuning, you’ll maximize frame stability, prompt adherence, and cinematic quality.
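Putting these tips together, the sketch below shows one way a tuned request payload might look. The prompt text, image URL placeholder, and seed are illustrative examples only, not recommended defaults.

# Illustrative payload combining the prompt tips above (values are examples only).
tuned_request = {
    "image_url": "YOUR_S3_IMAGE_URL",  # source image hosted on S3
    "prompt": ("Cyberpunk street scene at night, neon reflections on wet asphalt, "
               "slow dolly in from a wide shot to a close-up on a lone cyclist"),
    "duration": 10,        # 10 seconds for detailed storytelling
    "resolution": "720p",  # higher clarity
    "seed": 4242           # fixed seed for consistency across runs
}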
FAQs
Q: What inputs does Seedance 1.0 accept?
A: A structured text prompt and an S3 image URL. You can also set duration, resolution, and an optional seed.
Q: How do I maintain consistency across shots?
A: Use a detailed prompt structure and the same seed to ensure visual coherence.
Q: What resolution options are available?
A: Seedance 1.0 supports 480p for fast previews and 720p for higher clarity.
Q: Can I control camera movements?
A: Yes—specify “pan left,” “dolly in,” or “wide shot to close-up” directly in your prompt.
Q: Which artistic styles are supported?
A: From photorealism to stylized illustration and cyberpunk. Combine style keywords with scene details.
Other Popular Models
illusion-diffusion-hq
Monster Labs QrCode ControlNet on top of SD Realistic Vision v5.1

sd1.5-majicmix
A highly versatile photorealistic model that blends several models to produce strikingly realistic images.

codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.

sd2.1-faceswapper
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training required.
