POST
javascript
const axios = require('axios');
const fs = require('fs');
const path = require('path');

// Convert a local image file or a remote image URL to a base64 string
async function toB64(imgPath) {
  if (/^https?:\/\//.test(imgPath)) {
    const res = await axios.get(imgPath, { responseType: 'arraybuffer' });
    return Buffer.from(res.data).toString('base64');
  }
  const data = fs.readFileSync(path.resolve(imgPath));
  return Buffer.from(data).toString('base64');
}

const api_key = "YOUR API-KEY";
const url = "https://api.segmind.com/v1/sd1.5-outpaint";

(async function () {
  try {
    const data = {
      "image": await toB64('https://segmind.com/image5.png'),
      "prompt": "streets in italy",
      "negative_prompt": "NONE",
      "scheduler": "DDIM",
      "num_inference_steps": 25,
      "img_width": 1024,
      "img_height": 1024,
      "scale": 1,
      "strength": 1,
      "offset_x": 256,
      "offset_y": 256,
      "guidance_scale": 7.5,
      "mask_expand": 8,
      "seed": 124567
    };

    const response = await axios.post(url, data, {
      headers: { 'x-api-key': api_key }
    });
    console.log(response.data);
  } catch (error) {
    console.error('Error:', error.response ? error.response.data : error.message);
  }
})();
RESPONSE
image/jpeg
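The endpoint responds with raw image/jpeg bytes. Below is a minimal sketch for writing the result straight to disk, assuming only that the body is binary image data as the response type above indicates; the helper name and output filename are ours, for illustration.

const axios = require('axios');
const fs = require('fs');

// Hypothetical helper: POST the request and save the returned JPEG to disk.
async function saveOutpaintedImage(url, data, apiKey, outFile) {
  const response = await axios.post(url, data, {
    headers: { 'x-api-key': apiKey },
    responseType: 'arraybuffer' // keep the JPEG bytes intact instead of decoding to a string
  });
  fs.writeFileSync(outFile, Buffer.from(response.data));
  console.log('Saved result to ' + outFile);
}

// e.g., with the url, data and api_key from the request example above:
// saveOutpaintedImage(url, data, api_key, 'outpaint-result.jpg');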
HTTP Response Codes
200 - OK : Image generated
401 - Unauthorized : User authentication failed
404 - Not Found : The requested URL does not exist
405 - Method Not Allowed : The requested HTTP method is not allowed
406 - Not Acceptable : Not enough credits
500 - Server Error : Server had some issue with processing

Attributes


image : image ( required )

Input image to outpaint


prompt : str ( required )

Prompt to render


negative_prompt : str ( default: None )

Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'


scheduler : enum:str ( default: DDIM )

Type of scheduler.

Allowed values:


num_inference_steps : int ( default: 25 )

Number of denoising steps.

min : 25,

max : 100


img_width : enum:int ( default: 1 )

Desired result image width

Allowed values:


img_height : enum:int ( default: 1 )

Desired result image height

Allowed values:


scale : float ( default: 0.2 )

Scale for classifier-free guidance

min : 0.1,

max : 10


strength : float ( default: 1 )

Strength controls how much the images can vary

min : 0.1,

max : 1


offset_x : int ( default: 1 )

Offset of the init image on the horizontal axis from the left.

min : 0,

max : 1024


offset_y : int ( default: 1 )

Offset of the init image on the vertical axis from the top.

min : 0,

max : 1024


guidance_scale : float ( default: 7.5 )

Scale for classifier-free guidance

min : 0.1,

max : 25


mask_expand : int ( default: 8 )

Mask expansion in pixels, applied uniformly on all four sides; this sometimes helps the model achieve more seamless results.

min : 0,

max : 256


seed : int ( default: -1 )

Seed for image generation.

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
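As a small sketch (assuming the axios response object from the request example above; the helper name is ours and not part of any SDK), the header can be read like this:

// Axios normalizes header names to lower case.
function remainingCredits(response) {
  return response.headers['x-remaining-credits'];
}

// Inside the try block of the request example:
// console.log('Remaining credits:', remainingCredits(response));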

Stable Diffusion 1.5 Outpainting

Outpainting, also known as "generative fill", "Uncrop", or "Unlimited zoom", is the process of extending an image beyond its original borders, adding new elements in a consistent style or exploring new narrative paths. This model, with its unique capabilities, allows for the creation of surreal and expansive images, pushing the boundaries of traditional image generation.

On the technical side, Stable Diffusion 1.5 Outpainting employs a latent diffusion model that combines an autoencoder with a diffusion model trained in the autoencoder's latent space. The model uses an encoder to transform images into latent representations, with a relative downsampling factor of 8. Text prompts are processed through a ViT-L/14 text encoder, and the non-pooled output of this encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. The model's loss is a reconstruction objective between the noise added to the latent and the noise predicted by the UNet. The strength value, which controls how much noise is added to the init latent before denoising, can be adjusted to produce more variation in the result.
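Written out in standard latent-diffusion notation (shown only to make the description above concrete; it is not part of the API surface), that objective is roughly

L_{LDM} = \mathbb{E}_{z \sim \mathcal{E}(x),\; c,\; \epsilon \sim \mathcal{N}(0,1),\; t}\big[\, \lVert \epsilon - \epsilon_\theta(z_t, t, \tau_\theta(c)) \rVert_2^2 \,\big]

where z_t is the noised latent at timestep t, \tau_\theta(c) is the ViT-L/14 text embedding of the prompt, and \epsilon_\theta is the UNet that predicts the added noise, conditioned on the text embedding via cross-attention.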

It allows users to break free from the 1:1 aspect ratio limitation of many generative model images, offering the freedom to create larger scenes and expand landscapes. Despite its surreal default nature, the model provides the flexibility to adjust the level of surrealism based on the user's preference. Moreover, it doesn't increase the image size infinitely but pushes the original image deeper into the canvas, mimicking the way cameras work when you take a few steps back.
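As a concrete sketch of how the offsets work (the 512x512 init-image size below is an assumption for illustration, not something the API requires), centering the original image on the larger output canvas just means splitting the leftover space evenly, which is exactly where the offset_x and offset_y values of 256 in the request example come from:

// Hypothetical helper: center an init image of (initW x initH)
// inside the requested output canvas of (imgWidth x imgHeight).
function centeredOffsets(initW, initH, imgWidth, imgHeight) {
  return {
    offset_x: Math.floor((imgWidth - initW) / 2),
    offset_y: Math.floor((imgHeight - initH) / 2)
  };
}

// Assuming a 512x512 init image and a 1024x1024 output canvas:
console.log(centeredOffsets(512, 512, 1024, 1024)); // { offset_x: 256, offset_y: 256 }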

Stable Diffusion 1.5 Outpainting use cases

  1. Customized Digital Artwork: Artists can use the outpainting feature to create unique digital art pieces, expanding the canvas to add more elements and details. This can be particularly useful for creating panoramic landscapes or intricate scenes that require a larger canvas.

  2. Film and Animation: In the film and animation industry, the outpainting feature can be used to extend scenes or backgrounds, providing a cost-effective alternative to manual drawing or CGI. This can be especially useful for creating wide-angle shots or panoramic views.

  3. Advertising and Marketing: Marketers can use outpainting to adjust the aspect ratio of images to fit different advertising mediums. For instance, a square image can be outpainted to a landscape format for a billboard advertisement, or a portrait format for a mobile ad.

  4. Game Design: In the gaming industry, outpainting can be used to generate diverse and expansive game environments. This can help game designers to quickly create new levels or scenes, saving time and resources.

  5. Interior Design and Architecture: Outpainting can be used to visualize different design concepts or architectural plans. For example, an interior designer can use it to extend a room's image to see how it would look with additional elements or changes.

  6. Fashion and Apparel Design: Designers can use outpainting to extend the design of a piece of clothing or an accessory, allowing them to visualize the complete look and make necessary adjustments.

  7. Reimagining Historical or Classic Art: Artists can use outpainting to add a modern twist to historical or classic art pieces, extending the original artwork with new elements or styles.

Stable Diffusion 1.5 Outpainting license

The model is licensed under the Creative ML OpenRAIL-M license, a form of Responsible AI License (RAIL). This license prohibits certain use cases, including crime, libel, harassment, doxing, exploiting minors, giving medical advice, automatically creating legal obligations, producing legal evidence, and discrimination. However, users retain the rights to their generated output images and are free to use them commercially.