PixelFlow allows you to use all these features
Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and deploy models, elevating your creative workflow.
Segmented Creation Workflow
Gain greater control by dividing the creative process into distinct steps, refining each phase.
Customized Output
Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.
Layering Different Models
Integrate and utilize multiple models simultaneously, producing complex and polished creative results.
Workflow APIs
Deploy PixelFlows as APIs quickly, without server setup, ensuring scalability and efficiency.
Vidu AI – Reference-to-Video Model
What is Vidu AI Reference to Video?
Vidu AI Reference to Video is a powerful generative platform that transforms one or more reference images into high-quality, multi-shot animated videos. It ensures visual consistency of characters, objects, or environments throughout the sequence, guided by detailed text prompts for style, motion, and transitions. Ideal for creators, marketers, and storytellers who want precise control over animation from their reference imagery.
Key Features
- Upload single or multiple reference images for consistent multi-shot videos
- Maintain character, object, and environment consistency across shots
- Customize animation style, aspect ratio, and duration
- Advanced seed control (seed: 0–999999) for reproducible outputs
- Optional background music toggle (bgm: true/false)
- Adjustable movement amplitude (movement_amplitude: auto)
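The parameters above can be assembled into a request body before calling the model. The sketch below is illustrative only: the parameter names (seed, bgm, movement_amplitude, aspect_ratio) come from the feature list, but the helper function and exact payload layout are assumptions, not the official Segmind client.

```python
def build_vidu_payload(prompt, reference_images, seed=0, bgm=False,
                       movement_amplitude="auto", aspect_ratio="16:9"):
    """Validate the documented parameter ranges and return a request dict.

    Hypothetical helper -- field names beyond the documented parameters
    are illustrative, not the official API schema.
    """
    if not 0 <= seed <= 999999:
        raise ValueError("seed must be in the range 0-999999")
    if aspect_ratio not in {"16:9", "1:1", "9:16"}:
        raise ValueError("unsupported aspect ratio preset")
    return {
        "prompt": prompt,
        "reference_images": list(reference_images),  # one or more images
        "seed": seed,                      # fixed seed -> reproducible output
        "bgm": bgm,                        # optional background music toggle
        "movement_amplitude": movement_amplitude,
        "aspect_ratio": aspect_ratio,
    }
```

Keeping validation client-side like this catches out-of-range values (e.g. a seed above 999999) before a request is ever sent.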
Best Use Cases
- Character Animations: Animate concept art or photos with consistent motion
- Marketing Clips: Generate product demos using reference images
- Storytelling: Create cinematic scenes with unified visual elements
- Social Media: Produce vertical reels or square videos from reference art
- E-Learning: Animate diagrams or instructional visuals into engaging videos
Prompt Tips and Output Quality
- Provide clear, descriptive prompts to guide style and animation
- Use high-resolution reference images for better texture and detail
- Choose appropriate aspect ratio for target platform (e.g., 16:9, 9:16)
- Fix the seed parameter to reproduce exact animation sequences
- Enable or disable background music as needed (bgm)
- Use movement_amplitude: auto for natural motion balance
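Picking the aspect ratio per target platform, as the tips suggest, can be reduced to a small lookup. The mapping below is a sketch: the ratio presets (16:9, 9:16, 1:1) are from this page, but the platform names and default choice are assumptions for illustration.

```python
# Hypothetical platform -> aspect-ratio lookup; only the presets themselves
# (16:9, 9:16, 1:1) are documented on this page.
PLATFORM_ASPECT = {
    "youtube": "16:9",        # landscape video
    "reels": "9:16",          # vertical short-form (Reels/Shorts/TikTok)
    "instagram_feed": "1:1",  # square post
}

def aspect_for(platform):
    """Return the preset aspect ratio for a platform, defaulting to 16:9."""
    return PLATFORM_ASPECT.get(platform, "16:9")
```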
FAQs
Q: How many reference images can I use?
A: You can upload one or multiple reference images to maintain consistency across different shots.
Q: Can I reproduce the same animation reliably?
A: Yes, by setting a fixed seed value, you get consistent animations every time.
Q: Is background music added automatically?
A: No, background music is optional and can be toggled on or off via the bgm parameter.
Q: What aspect ratios are supported?
A: Common presets like 16:9, 1:1, and 9:16 are supported for various platforms.
Other Popular Models
sdxl-img2img
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image using the StableDiffusionImg2ImgPipeline from diffusers.

faceswap-v2
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.

sdxl1.0-txt2img
The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

sd2.1-faceswapper
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.
