const axios = require('axios');

const api_key = "YOUR API-KEY";
const url = ""; // the model's API endpoint URL

const data = {
  "prompt": "a beautiful stack of rocks sitting on top of a beach, a picture, red black white golden colors, chakras, packshot, stock photo",
  "negative_prompt": "asian, african, makeup, fake, cgi, 3d, doll, art, anime, cartoon, illustration, painting, digital, blur, sepia, b&w, unnatural, unreal, photoshop, filter, airbrush, poor anatomy, lr",
  "samples": 1,
  "scheduler": "dpmpp_sde_ancestral",
  "num_inference_steps": 25,
  "guidance_scale": 8,
  "strength": 1,
  "seed": 2784983004,
  "img_width": 768,
  "img_height": 768,
  "base64": false
};

(async function() {
    try {
        const response = await axios.post(url, data, { headers: { 'x-api-key': api_key } });
        console.log(response.data);
    } catch (error) {
        console.error('Error:', error.response ? error.response.data : error.message);
    }
})();
HTTP Response Codes
200 - OK: Image Generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: Server had some issue with processing
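When using axios, a non-2xx status lands in the catch block and is available as error.response.status. The codes above can be mapped to readable messages with a small helper; this is a sketch of our own (the STATUS_MESSAGES table and describeStatus function are not part of the API):

```javascript
// Map the documented HTTP status codes to their meanings.
const STATUS_MESSAGES = {
  200: 'OK: Image Generated',
  401: 'Unauthorized: User authentication failed',
  404: 'Not Found: The requested URL does not exist',
  405: 'Method Not Allowed: The requested HTTP method is not allowed',
  406: 'Not Acceptable: Not enough credits',
  500: 'Server Error: Server had some issue with processing',
};

function describeStatus(status) {
  // Fall back to a generic message for any undocumented code.
  return STATUS_MESSAGES[status] || `Unexpected status: ${status}`;
}
```

In the catch block of the example above, this could be called as `describeStatus(error.response.status)` to log a more descriptive error.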


prompt str ( default: 1 )

Prompt to render

negative_prompt str ( default: None )

Prompts to exclude, eg. 'bad anatomy, bad hands, missing fingers'

samples int ( default: 1 ) Affects Pricing

Number of samples to generate.

min : 1, max : 10

scheduler enum:str ( default: 1 )

Type of scheduler.

Allowed values:

num_inference_steps int ( default: 1 ) Affects Pricing

Number of denoising steps.

min : 10, max : 40

guidance_scale float ( default: 1 )

Scale for classifier-free guidance.

min : 1, max : 15

strength float ( default: 1 )

How much to transform the reference image.

seed int ( default: 1 )

Seed for image generation.

img_width int ( default: 1 ) Affects Pricing

Image resolution.

min : 512, max : 1024

img_height int ( default: 1 ) Affects Pricing

Image resolution.

min : 512, max : 1024

base64 boolean ( default: 1 )

Base64 encoding of the output image.
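The numeric ranges listed above can be checked client-side before a request is sent, saving a round trip on invalid input. A minimal sketch (the RANGES table and validateParams helper are ours, not part of the API):

```javascript
// Documented min/max ranges for the numeric request parameters.
const RANGES = {
  samples: [1, 10],
  num_inference_steps: [10, 40],
  guidance_scale: [1, 15],
  img_width: [512, 1024],
  img_height: [512, 1024],
};

// Return a list of validation errors; an empty list means the
// payload is within the documented ranges.
function validateParams(data) {
  const errors = [];
  for (const [key, [min, max]] of Object.entries(RANGES)) {
    if (key in data && (data[key] < min || data[key] > max)) {
      errors.push(`${key} must be between ${min} and ${max}`);
    }
  }
  return errors;
}
```

Calling `validateParams(data)` on the payload from the example above returns `[]`, since all its values fall within the documented ranges.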

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
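With axios, response headers are exposed (with lowercased names) on response.headers, so the credit balance can be read after each call. A small sketch, assuming the response object from the example above (the remainingCredits helper is ours):

```javascript
// Extract the remaining-credit count from an axios response.
// Returns null if the header is absent.
function remainingCredits(response) {
  // axios normalizes header names to lowercase.
  const value = response.headers['x-remaining-credits'];
  return value !== undefined ? Number(value) : null;
}
```

For instance, after the POST in the example above, `remainingCredits(response)` could be logged alongside the generated image to track usage.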

Stable Diffusion XL 0.9

Stability AI has recently unveiled SDXL 0.9, marking a significant milestone in the evolution of the Stable Diffusion text-to-image suite of models. This latest iteration builds upon the success of the Stable Diffusion XL beta launched in April 2023, delivering a substantial enhancement in image and composition detail. The SDXL 0.9 model, despite its advanced capabilities, can be operated on a standard consumer GPU, making it a highly accessible tool for a wide range of users.

Delving into the technical aspects, the key factor behind the improved composition in SDXL 0.9 is a considerable increase in parameter count over the beta version. The model boasts one of the largest parameter counts of any open-source image model: a 3.5B-parameter base model and a 6.6B-parameter model-ensemble pipeline. It operates on two CLIP models, including one of the largest OpenCLIP models trained to date (OpenCLIP ViT-G/14). This enhances the model's processing power, enabling it to generate realistic imagery with greater depth at a higher resolution of 1024x1024. Despite these capabilities, the system requirements are surprisingly modest: Windows 10 or 11 or Linux, 16GB of RAM, and an Nvidia GeForce RTX 20-series graphics card (or equivalent) with a minimum of 8GB of VRAM. This makes SDXL 0.9 accessible to a wide range of users and applications.

It opens up a new realm of creative possibilities for generative AI imagery, with potential applications spanning films, television, music, and instructional videos, as well as design and industrial uses. The model also offers functionalities that go beyond basic text prompting, including image-to-image prompting, inpainting, and outpainting.

Stable Diffusion XL 0.9 use cases

  1. Character Creation (Visual Effects): Stable Diffusion can be used to create characters based on detailed prompts. This can be especially useful in visual effects.

  2. E-commerce: Stable Diffusion can be used to create different angles and backgrounds for a product, eliminating the need for extensive photoshoots.

  3. Image Editing: Stable Diffusion can be used for image editing, including in-painting, where you can change specific elements of an image. For example, changing the color of flowers in an image.

  4. Fashion: Stable Diffusion can be used to change the clothes that someone is wearing in a photo, creating a natural-looking transformation.

  5. Gaming: Stable Diffusion can be used for asset creation in gaming, potentially saving weeks or months of work for artists.

It's important to note that these are just a few of the many potential use cases for Stable Diffusion. The technology is highly versatile and can be applied in a wide range of industries.

Stable Diffusion XL 0.9 License

It is important to note that SDXL 0.9 is currently released under a non-commercial, research-only license, meaning its use is exclusively intended for research purposes. Commercial usage of the model is strictly prohibited under its terms of use. This restriction is in place to ensure that the model is used responsibly and ethically, and to allow for a period of rigorous testing and refinement before it is potentially opened up for broader applications. Users are strongly encouraged to familiarize themselves with the terms of use before accessing and using the model.