const axios = require('axios');

const api_key = "YOUR API-KEY";
const url = "";

const data = {
  "prompt": "cinematic film still, 4k, realistic, ((cinematic photo:1.3)) of panda wearing a blue spacesuit, sitting in a bar, Fujifilm XT3, long shot, ((low light:1.4)), ((looking straight at the camera:1.3)), upper body shot, somber, shallow depth of field, vignette, highly detailed, high budget Hollywood movie, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy",
  "negative_prompt": "ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, blurry, bad anatomy, blurred, watermark, grainy, signature, cut off, draft",
  "style": "base",
  "samples": 1,
  "scheduler": "UniPC",
  "num_inference_steps": 25,
  "guidance_scale": 8,
  "strength": 0.2,
  "high_noise_fraction": 0.8,
  "seed": 468685,
  "img_width": 1024,
  "img_height": 1024,
  "refiner": true,
  "base64": false
};

(async function() {
    try {
        const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
        console.log(response.data);
    } catch (error) {
        console.error('Error:', error.response ? error.response.data : error.message);
    }
})();
HTTP Response Codes

200 - OK: Image Generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: Server had some issue with processing
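
As a quick illustration, the table above can be wrapped in a small client-side helper (the names here are hypothetical, not part of the API):

```javascript
// Map the documented HTTP status codes to their meanings.
const STATUS_MESSAGES = {
  200: 'OK: Image Generated',
  401: 'Unauthorized: User authentication failed',
  404: 'Not Found: The requested URL does not exist',
  405: 'Method Not Allowed: The requested HTTP method is not allowed',
  406: 'Not Acceptable: Not enough credits',
  500: 'Server Error: Server had some issue with processing'
};

function describeStatus(code) {
  return STATUS_MESSAGES[code] || `Unexpected status code: ${code}`;
}
```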


prompt str *

Prompt to render

negative_prompt str ( default: None )

Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'

style enum:str ( default: base )

Styles for Stable Diffusion.

Allowed values:

samples int ( default: 1 ) Affects Pricing

Number of samples to generate.

min : 1,

max : 4

scheduler enum:str ( default: UniPC )

Type of scheduler.

Allowed values:

num_inference_steps int ( default: 25 ) Affects Pricing

Number of denoising steps.

min : 20,

max : 100

guidance_scale float ( default: 7.5 )

Scale for classifier-free guidance

min : 1,

max : 25

strength float ( default: 0.2 )

How much to transform the reference image

min : 0.1,

max : 1

high_noise_fraction float ( default: 0.8 )

Fraction of the inference steps to be run on the base model before handing off to the refiner

min : 0,

max : 1

seed int ( default: -1 )

Seed for image generation.

min : -1,

max : 999999999999999

img_width enum:int ( default: 1024 ) Affects Pricing

Can only be 1024 for SDXL

Allowed values:

img_height enum:int ( default: 1024 ) Affects Pricing

Can only be 1024 for SDXL

Allowed values:

refiner boolean ( default: true )

If true, improves the quality of the output. Note: has no effect when high_noise_fraction is 1.
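
Since the refiner is documented to have no effect when high_noise_fraction is 1, a client-side check like the following sketch (a hypothetical helper, not part of the API) can flag that combination before a request is sent:

```javascript
// Warn about the documented conflict: refiner has no effect
// when high_noise_fraction is 1.
function validatePayload(payload) {
  if (payload.refiner && payload.high_noise_fraction === 1) {
    return 'refiner has no effect when high_noise_fraction is 1';
  }
  return null; // no issues found
}
```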

base64 boolean ( default: false )

Base64 encoding of the output image.

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
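
A minimal sketch for reading that header from an axios response (axios lower-cases header names):

```javascript
// Extract the remaining-credit balance from a response's headers.
// Returns null when the header is absent.
function remainingCredits(response) {
  const value = response.headers['x-remaining-credits'];
  return value !== undefined ? Number(value) : null;
}
```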

Stable Diffusion SDXL 1.0

Stable Diffusion SDXL 1.0, a product of Stability AI, is a groundbreaking development in the realm of image generation. It is a substantial leap from its predecessors, Stable Diffusion 1.5 and 2.1, with marked advances in image and facial composition. The model leverages latent diffusion for text-to-image synthesis, rendering it an essential asset in the visual arts landscape in 2023. Its real strength lies in its ability to create descriptive images from succinct prompts and to generate legible words within images, setting a new standard for AI-generated visuals.

In terms of its technical architecture, SDXL deploys a larger UNet backbone, housing more attention blocks and an extended cross-attention context thanks to its second text encoder. SDXL operates a mixture-of-experts pipeline for latent diffusion: it first uses the base model to generate noisy latents, which are then refined during the final denoising steps. SDXL also employs a two-stage pipeline with a high-resolution model, applying a technique called SDEdit, or "img2img", to the latents generated from the base model, a process that enhances the quality of the output image but may take a bit more time. It was trained on 1024x1024 images, vs. 512x512 for SD 1.5.
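
If, as described, high_noise_fraction splits the denoising steps between the base model and the refiner (the exact arithmetic here is an assumption for illustration), the split for the defaults above would look like:

```javascript
// Sketch: divide num_inference_steps between the base model and
// the refiner according to high_noise_fraction.
function splitSteps(numInferenceSteps, highNoiseFraction) {
  const base = Math.round(numInferenceSteps * highNoiseFraction);
  return { base, refiner: numInferenceSteps - base };
}
```

With the defaults (25 steps, fraction 0.8), this gives 20 base-model steps and 5 refiner steps.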

It outperforms its predecessors and stands tall among current state-of-the-art image generators. The model exhibits significant improvements in visual fidelity, rendering stunning visuals and realistic aesthetics. The introduction of a refinement model has been a game-changer, improving the quality of the output generated by SDXL. The training on multiple aspect ratios contributes to the versatility of SDXL, making it a preferred tool in diverse visual settings.

Stable Diffusion SDXL 1.0 use cases

  1. Art and Design: Create stunning visuals and graphics for digital media.

  2. Marketing and Advertising: Generate attention-grabbing imagery for campaigns.

  3. Entertainment and Gaming: Develop detailed graphics for video games and interactive content.

  4. Education: Simplify complex concepts with easy-to-understand visuals.

  5. Research: Visualize data and research findings for better comprehension.

Stable Diffusion SDXL 1.0 license

As for licensing, the Stable Diffusion SDXL 1.0 operates under the OpenRail++ license. While not traditionally classified as open source, this license is comprehensive and accommodative for a wide variety of uses. It allows for the distribution, sublicensing, and commercial utilization of the model, thereby promoting its widespread adoption. This makes it a versatile tool, encouraging innovation while upholding the rights of the creators.