Wan Animate
Wan-Animate seamlessly animates characters and replaces subjects in videos, ensuring fluid realism and environmental consistency.
API
If you're looking to use the API, you can pick a code sample in your preferred programming language.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to convert a list of image URLs to base64
def image_urls_to_base64(image_urls):
    return [image_url_to_base64(url) for url in image_urls]

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/wan-animate"

# Request payload
data = {
    "input_video": "https://segmind-resources.s3.amazonaws.com/input/e9f1b4cb-812d-43b4-9aaf-572bc01828d1-animate-1.mp4",
    "reference_image": image_url_to_base64("https://segmind-resources.s3.amazonaws.com/input/7af018fd-d18a-4688-9afd-52df7510fe69-MarkuryFLUX_03641_.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "resolution": "480p",
    "prompt": "woman posing for a selfie",
    "seed": 987778,
    "mode": "replace",
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated video
Attributes
input_video: URL of the video to process. Use HD videos for high-quality outputs.
reference_image: URL for the image reference. Use clear images for best rendering.
resolution: Output video resolution. Choose 720p for detail and 480p for speed. Allowed values: 480p, 576p, 720p.
prompt: Text directing the animation. Use vivid descriptions for creative animations.
seed: Set a seed for repeatability. Random seeds give variety.
mode: 'animation' for animating the reference image; 'replace' for replacing the video subject. Choose based on the task requirement. Allowed values: animation, replace.
base64: Outputs the video as a base64 string for easy data transfer.
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
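For example, continuing from the Python request above, you can read this header directly from the response object:

# Read the remaining-credit balance from the response headers of the request above
remaining_credits = response.headers.get('x-remaining-credits')
print(f"Remaining credits: {remaining_credits}")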
Resources to get you started
Everything you need to know to get the most out of Wan Animate
# Wan-Animate: Effective Usage Guide
Wan-Animate is a unified character animation and replacement model that delivers high-fidelity results. Follow this guide to optimize your settings, improve output quality, and select the right parameters for various scenarios.
## 1. Input & Reference Quality
- **Input Video**: Use HD footage (at least 720p). Good lighting and clear, steady shots improve motion tracking.
- **Reference Image**: Supply a well-lit, high-resolution image. Front-facing portraits with neutral expressions yield better facial feature mapping.
## 2. Mode & Resolution Settings
Choose between two primary modes and three output resolutions:
- **mode**:
- `animation`: Drive a pre-existing character with custom motions.
- `replace`: Swap an on-screen subject with your reference character.
- **resolution**:
  - `720p` (1280×720): Best for professional-quality animations.
  - `576p` (1024×576): Balance between speed and detail.
  - `480p` (854×480): Fastest processing for drafts or quick previews.
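As a quick illustration, the sketch below shows how these two fields slot into the request payload from the API example above (the input video URL and reference image here are placeholders, and the values are just one possible combination):

```python
# Sketch: picking a mode and resolution in the request payload
data = {
    "input_video": "https://example.com/source-clip.mp4",   # placeholder, replace with your footage
    "reference_image": "<base64-encoded reference image>",  # placeholder
    "mode": "replace",      # "animation" to drive the reference character, "replace" to swap the on-screen subject
    "resolution": "720p",   # "480p" for fast drafts, "576p" for balance, "720p" for final quality
    "prompt": "smiling spokesperson presenting a product",
    "seed": 42,
    "base64": False,
}
```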
## 3. Prompt Crafting
Your `prompt` steers the animation style and context:
- Be specific: "A joyful dance in a neon-lit alley" instead of "dance."
- Include environment cues: "rainy city street," "sunset park."
- Mention character posture or emotion: "graceful leap," "surprised reaction."
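If you build prompts programmatically, one simple pattern is to combine an action, an environment cue, and a posture or emotion into a single string (purely illustrative; the API just receives the final text):

```python
# Illustrative prompt assembly: action + environment + posture/emotion
action = "a joyful dance"
environment = "in a neon-lit alley at night"
emotion = "with graceful, energetic movement"

prompt = f"{action} {environment}, {emotion}"
print(prompt)  # a joyful dance in a neon-lit alley at night, with graceful, energetic movement
```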
## 4. Seed & Reproducibility
- **seed** (optional, advanced): Assign an integer (e.g., `12345`) to fix randomness.
⢠Use the same seed for consistent outputs across runs.
⢠Omit or vary seeds for creative diversity.
## 5. Use-Case Parameter Presets
1. **Social Media Content**
   - mode: `animation`
   - resolution: `480p`
   - seed: leave blank (varied clips)
   - prompt: "fast-paced dance loop in an urban park"
2. **Game Development Prototypes**
   - mode: `animation`
   - resolution: `576p`
   - seed: `2024`
   - prompt: "heroic running cycle through a ruined castle"
3. **Marketing & Advertisements**
   - mode: `replace`
   - resolution: `720p`
   - seed: `42`
   - prompt: "smiling spokesperson presenting eco-friendly product"
4. **Virtual Reality Experiences**
   - mode: `animation` or `replace`
   - resolution: `720p`
   - seed: `1001`
   - prompt: "interactive handshake with glowing futuristic background"
## Pro Tips for Best Results
- Always test with a 5–10 second clip before full-length renders.
- For environmental consistency, leverage the built-in Relighting LoRA.
- When sharing outputs over APIs, enable `base64` (advanced) to receive direct data URIs.
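A sketch of that flow, reusing `data`, `url`, and `headers` from the API example and assuming the response body carries the base64-encoded video (the exact response shape may differ, so treat this as a starting point):

```python
import base64

data["base64"] = True                                     # ask for a base64 string instead of raw bytes
response = requests.post(url, json=data, headers=headers)

# Assumption: the body is (or contains) the base64-encoded video; strip a data-URI prefix if present
encoded = response.text.split(",", 1)[-1]
with open("wan_animate_output.mp4", "wb") as f:
    f.write(base64.b64decode(encoded))
```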
By following these recommendations, you'll unlock the full potential of Wan-Animate and produce lifelike, engaging character animations or seamless replacements every time.
Other Popular Models
Discover other models you might be interested in.
sdxl-img2img
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image using the StableDiffusionImg2ImgPipeline from diffusers.

sadtalker
Audio-based Lip Synchronization for Talking Head Video

illusion-diffusion-hq
Monster Labs QrCode ControlNet on top of SD Realistic Vision v5.1

sdxl-inpaint
This model is capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask
