Stock Video Creator - Powered by Wan 2.2 and SeeDance 1 Lite

Generate your own custom stock videos using Wan 2.2 and SeeDance 1 Lite


If you're looking for an API, here is sample code in Node.js to help you out.

const axios = require('axios');

const api_key = "YOUR API KEY";
const url = "https://api.segmind.com/workflows/68a573c7c49c91c2edbbb834-v1";

const data = {
  Concept: "the user input string",
  Resolution: "16:9" // Options: 16:9, 9:16
};

axios.post(url, data, {
  headers: {
    'x-api-key': api_key,
    'Content-Type': 'application/json'
  }
}).then((response) => {
  console.log(response.data);
}).catch((error) => {
  // Surface API errors (e.g. invalid key, bad input) instead of failing silently.
  console.error(error.response ? error.response.data : error.message);
});
Response (application/json):

{
  "poll_url": "<base_url>/requests/<some_request_id>",
  "request_id": "some_request_id",
  "status": "QUEUED"
}

You can poll the above link to get the status and output of your request.
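
Below is a minimal polling sketch in Node.js. It assumes the poll endpoint accepts a GET request with the same x-api-key header, and that in-flight requests report a status of QUEUED or PROCESSING; check Segmind's API reference for the exact status values.

const axios = require('axios');

const api_key = "YOUR API KEY";

async function pollResult(poll_url) {
  while (true) {
    const res = await axios.get(poll_url, {
      headers: { 'x-api-key': api_key }
    });
    // Return as soon as the request leaves the queued/processing states.
    if (res.data.status !== "QUEUED" && res.data.status !== "PROCESSING") {
      return res.data;
    }
    await new Promise((resolve) => setTimeout(resolve, 5000)); // wait 5s between polls
  }
}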

Response (application/json):

{
  "Wan_2.2_output": "any user input string",
  "Seedance_1_Lite_output": "any user input string"
}

Attributes

Concept (str, required)

Resolution (str, required)
Allowed values: 16:9, 9:16

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
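
For example, reusing the url, data, and api_key from the sample above, the remaining-credit count can be read from the response headers (axios exposes them as a lower-cased object):

axios.post(url, data, {
  headers: { 'x-api-key': api_key, 'Content-Type': 'application/json' }
}).then((response) => {
  // x-remaining-credits is returned on every API call.
  console.log('Remaining credits:', response.headers['x-remaining-credits']);
});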

Stock Video Generator - Powered by Wan 2.2 and SeeDance 1 Lite Text to Video models

We use the latest Text to Video models to generate custom stock video footage that can be used in your commercial projects. You can also choose between the 16:9 and 9:16 resolution options to generate videos in landscape or portrait format; portrait is better suited to phone screens.

The only input required is a simple prompt describing the concept, for example: "female influencer doing unboxing, showing new make up brushes" or "Crystal-clear mineral water splash with a drop of moisturizing serum in a blue cosmetic background." With this context, the workflow generates two outputs, one from each model: Wan 2.2 and SeeDance 1 Lite.

How does this workflow work?

We take the user's input and convert it into a prompt that is better understood by the video generation models. We use OpenAI's o4 model to reason about the concept and come up with prompts that follow the prompt guidelines suggested by these models.
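
As an illustration, the prompt-enhancement step might look like the sketch below, here using OpenAI's Node.js SDK. The model id and system prompt are illustrative stand-ins, not the workflow's actual values.

const OpenAI = require('openai');
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function enhancePrompt(concept) {
  const completion = await client.chat.completions.create({
    model: 'o4-mini', // stand-in id; the workflow uses an o4-class reasoning model
    messages: [
      {
        role: 'system',
        content: 'Rewrite the user concept as a detailed text-to-video prompt that ' +
                 'follows the prompt guidelines for Wan 2.2 and SeeDance 1 Lite.'
      },
      { role: 'user', content: concept }
    ]
  });
  return completion.choices[0].message.content;
}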

Once we have the prompt, we pass it to both models. We then get two video outputs that the user can compare, using one or both in their commercial project. We use Wan 2.2 and SeeDance 1 Lite specifically to keep generations affordable; higher-cost models such as Veo 3 and SeeDance Pro can produce higher-quality output.
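
Conceptually, the fan-out step looks like the sketch below. The endpoint URLs are hypothetical placeholders, not real Segmind routes, and the request/response shapes are assumptions.

const axios = require('axios');

// Hypothetical endpoint URLs, shown only to illustrate the parallel fan-out.
const WAN_2_2_URL = "https://api.example.com/wan-2.2";
const SEEDANCE_LITE_URL = "https://api.example.com/seedance-1-lite";
const headers = { 'x-api-key': "YOUR API KEY", 'Content-Type': 'application/json' };

async function generateBoth(prompt) {
  // Run both text-to-video generations in parallel and collect both outputs.
  const [wan, seedance] = await Promise.all([
    axios.post(WAN_2_2_URL, { prompt }, { headers }),
    axios.post(SEEDANCE_LITE_URL, { prompt }, { headers })
  ]);
  return {
    Wan_2_2_output: wan.data,
    Seedance_1_Lite_output: seedance.data
  };
}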

Extend this workflow

  1. Use Topaz Video Upscale to add more detail and increase the resolution of the video to 1080p or 4K.
  2. Add an image generation model and leverage Image to Video models to control the first frame of the video.
  3. Use a model to generate background music. Google's Lyria or Meta's MusicGen are great models for generating this audio track. You can then use a Video Audio Merge node to merge the audio and video into a single file (see the sketch after this list).
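
If you prefer to do the final merge locally rather than in the workflow, a generic ffmpeg invocation achieves the same result as an audio/video merge node. The file names below are placeholders; this is not Segmind's node:

const { execFile } = require('child_process');

// Mux an audio track into a video file: copy the video stream, re-encode the
// audio to AAC, and trim the result to the shorter of the two inputs.
execFile('ffmpeg', [
  '-i', 'stock_video.mp4',
  '-i', 'background_music.mp3',
  '-c:v', 'copy',
  '-c:a', 'aac',
  '-shortest',
  'stock_video_with_music.mp4'
], (error) => {
  if (error) console.error(error);
  else console.log('Wrote stock_video_with_music.mp4');
});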
