POST /v1/segmind-vega-rt-v1
const axios = require('axios');

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/segmind-vega-rt-v1";

const data = {
  "prompt": "backlight, wilderness woman hunting in jungle hiding behind leaves, face paintings closeup face portrait, detailed eyes, nature documentary, dry skin, fuzzy skin, lens flare",
  "num_inference_steps": 4,
  "seed": 758143278,
  "img_width": 1024,
  "img_height": 1024,
  "base64": false
};

(async function() {
    try {
        const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
        console.log(response.data);
    } catch (error) {
        // error.response is undefined for network-level failures, so guard before reading it.
        console.error('Error:', error.response ? error.response.data : error.message);
    }
})();
RESPONSE
image/jpeg
HTTP Response Codes
200 - OK : Image generated
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : Server had some issue with processing
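A minimal helper for turning these status codes into log-friendly messages. The mapping follows the table above; the constant and function names are illustrative, not part of the Segmind API:

```javascript
// Map of documented Segmind HTTP status codes to human-readable messages.
const STATUS_MESSAGES = {
  200: 'OK: image generated',
  401: 'Unauthorized: user authentication failed',
  404: 'Not Found: the requested URL does not exist',
  405: 'Method Not Allowed: the requested HTTP method is not allowed',
  406: 'Not Acceptable: not enough credits',
  500: 'Server Error: the server had an issue processing the request',
};

// Return the documented message for a status code, or a generic fallback.
function describeStatus(code) {
  return STATUS_MESSAGES[code] || `Unexpected status: ${code}`;
}
```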

Attributes


prompt  str *

Prompt to render


num_inference_steps  int ( default: 4 ) Affects Pricing

Number of denoising steps.

min : 4,

max : 10


seed  int ( default: -1 )

Seed for image generation.

min : -1,

max : 999999999999999
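Passing the default of -1 conventionally lets the server pick a random seed (this interpretation is an assumption, not stated explicitly above). If you prefer to choose the seed client-side so you can record it and reproduce a result, a sketch with an illustrative helper:

```javascript
// Resolve a seed for the request: keep an explicit seed as-is, and replace
// the -1 sentinel with a random integer in the documented range.
function resolveSeed(seed) {
  const MAX_SEED = 999999999999999; // documented maximum
  return seed === -1 ? Math.floor(Math.random() * (MAX_SEED + 1)) : seed;
}
```

Recording the resolved seed alongside the prompt lets you regenerate the same image later.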


img_width  enum:int ( default: 1024 ) Affects Pricing

Can only be 1024 for SDXL

Allowed values: 1024

img_height  enum:int ( default: 1024 ) Affects Pricing

Can only be 1024 for SDXL

Allowed values: 1024


base64  boolean ( default: 1 )

Base64 encoding of the output image.

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
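A small illustrative helper for reading that header from an axios response object (axios lower-cases header names; the function name is an assumption, not part of the API):

```javascript
// Extract the remaining-credit count from a response-headers object.
// Returns null when the header is absent.
function remainingCredits(headers) {
  const raw = headers['x-remaining-credits'];
  return raw === undefined ? null : Number(raw);
}
```

For example, after a successful call you could log remainingCredits(response.headers) and alert when it drops below a threshold.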

Segmind-VegaRT - Latent Consistency Model (LCM) LoRA of Segmind-Vega

Segmind-VegaRT is a distilled consistency adapter for Segmind-Vega that reduces the number of inference steps needed to only 2 to 8.

Latent Consistency Model (LCM) LoRA was proposed in LCM-LoRA: A universal Stable-Diffusion Acceleration Module by Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu et al.

This model is the first base model showing real-time capabilities at higher image resolutions, but it has its own limitations:

  1. The model is good at close up portrait images of humans but tends to do poorly on full body images.

  2. Full body images may show deformed limbs and faces.

  3. This model is an LCM-LoRA model, so negative prompt and guidance scale parameters would not be applicable.

  4. Since it is a small model, its variability is low, so it may be best suited to specific use cases when fine-tuned.

We will be releasing more fine-tuned versions of this model to improve upon these limitations.