Stable Diffusion XL 0.9

The SDXL model is the official upgrade to the v1.5 model, and is released as open-source software.

Playground


Deprecated! Please use our selection of newer models.


Examples

Check out what others have created with Stable Diffusion XL 0.9

a beautiful stack of rocks sitting on top of a beach, a picture, red black white golden colors, chakras, packshot, stock photo

seed: 2784983004, guidance_scale: 8

API

If you're looking for an API, the sample request below shows how to call the model; the same request can be made from your preferred programming language.

POST
```python
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sdxl-txt2img"

# Request payload
data = {
    "prompt": "a beautiful stack of rocks sitting on top of a beach, a picture, red black white golden colors, chakras, packshot, stock photo",
    "negative_prompt": "asian, african, makeup, fake, cgi, 3d, doll, art, anime, cartoon, illustration, painting, digital, blur, sepia, b&w, unnatural, unreal, photoshop, filter, airbrush, poor anatomy, lr",
    "samples": 1,
    "scheduler": "dpmpp_sde_ancestral",
    "num_inference_steps": 25,
    "guidance_scale": 8,
    "strength": 1,
    "seed": 2784983004,
    "img_width": 768,
    "img_height": 768,
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
```
RESPONSE
image/jpeg
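The response body is the image itself: raw JPEG bytes by default, or a base64-encoded string when the request sets `"base64": True`. A minimal sketch of saving either form to disk (the helper name is our own, not part of the API):

```python
import base64

def save_generated_image(content, out_path, is_base64=False):
    # With "base64": True the response body is a base64-encoded string;
    # otherwise it is the raw JPEG bytes.
    image_bytes = base64.b64decode(content) if is_base64 else content
    with open(out_path, "wb") as f:
        f.write(image_bytes)
    return len(image_bytes)
```

With the `requests` example above, you would pass `response.content` as `content`.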
HTTP Response Codes
200 - OK: Image generated
401 - Unauthorized: User authentication failed
404 - Not Found: The requested URL does not exist
405 - Method Not Allowed: The requested HTTP method is not allowed
406 - Not Acceptable: Not enough credits
500 - Server Error: Server had an issue with processing
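A non-200 body will not contain an image, so it helps to branch on the status code before writing bytes to disk. A sketch that maps the documented codes to readable messages (the table and helper are our own, built from the list above):

```python
# Documented Segmind HTTP response codes and their meanings
STATUS_MESSAGES = {
    200: "OK: image generated",
    401: "Unauthorized: user authentication failed",
    404: "Not Found: the requested URL does not exist",
    405: "Method Not Allowed: the requested HTTP method is not allowed",
    406: "Not Acceptable: not enough credits",
    500: "Server Error: server had an issue with processing",
}

def describe_status(code):
    # Translate a status code into a readable message; fall back for others
    return STATUS_MESSAGES.get(code, f"Unexpected status code: {code}")
```

In practice you would check `response.status_code` against 200 before saving `response.content`, and log `describe_status(response.status_code)` otherwise.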

Attributes


prompt (str, default: 1)

Prompt to render


negative_prompt (str, default: None)

Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'


samples (int, default: 1) [Affects Pricing]

Number of samples to generate.

min: 1, max: 10


scheduler (enum:str, default: 1)

Type of scheduler.

Allowed values:


num_inference_steps (int, default: 1) [Affects Pricing]

Number of denoising steps.

min: 10, max: 40


guidance_scale (float, default: 1)

Scale for classifier-free guidance

min: 1, max: 15


strength (float, default: 1)

How much to transform the reference image


seed (int, default: 1)

Seed for image generation.


img_width (int, default: 1) [Affects Pricing]

Image resolution.

min: 512, max: 1024


img_height (int, default: 1) [Affects Pricing]

Image resolution.

min: 512, max: 1024


base64 (boolean, default: 1)

Base64 encoding of the output image.
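The documented min/max ranges above can be checked client-side before sending a request, saving a round trip for an invalid payload. A sketch using the bounds listed here (the validator itself is our own, not part of the API):

```python
# Documented (min, max) bounds for the numeric request parameters
BOUNDS = {
    "samples": (1, 10),
    "num_inference_steps": (10, 40),
    "guidance_scale": (1, 15),
    "img_width": (512, 1024),
    "img_height": (512, 1024),
}

def validate_payload(data):
    # Return a list of out-of-range parameters; an empty list means the
    # payload passes these documented bounds.
    errors = []
    for key, (lo, hi) in BOUNDS.items():
        if key in data and not (lo <= data[key] <= hi):
            errors.append(f"{key}={data[key]} outside [{lo}, {hi}]")
    return errors
```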

To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
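For example, the header can be read straight off the `requests` response object's `headers` mapping; the small parsing helper below is our own:

```python
def remaining_credits(headers):
    # Extract the x-remaining-credits value from response headers, if present
    value = headers.get("x-remaining-credits")
    return float(value) if value is not None else None
```

After a call, `remaining_credits(response.headers)` returns the balance, or `None` if the header was absent.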


Pricing

Serverless Pricing

Buy credits that can be used anywhere on Segmind

$0.0015 per second
FEATURES

PixelFlow allows you to use all these features

Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.

Segmented Creation Workflow

Gain greater control by dividing the creative process into distinct steps, refining each phase.

Customized Output

Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.

Layering Different Models

Integrate and utilize multiple models simultaneously, producing complex and polished creative results.

Workflow APIs

Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.

Stable Diffusion XL 0.9

Stability AI has recently unveiled SDXL 0.9, marking a significant milestone in the evolution of the Stable Diffusion text-to-image suite of models. This latest iteration builds upon the success of the Stable Diffusion XL beta launched in April 2023, delivering a substantial enhancement in image and composition detail. The SDXL 0.9 model, despite its advanced capabilities, can be operated on a standard consumer GPU, making it a highly accessible tool for a wide range of users.

Delving into the technical aspects, the key factor propelling the advancement in composition for SDXL 0.9 is the considerable increase in parameter count over the beta version. The model boasts one of the largest parameter counts of any open-source image model, with a 3.5B parameter base model and a 6.6B parameter model ensemble pipeline. The model operates on two CLIP models, including one of the largest OpenCLIP models trained to date (OpenCLIP ViT-G/14). This enhances the model's processing power, enabling it to generate realistic imagery with greater depth and a higher resolution of 1024x1024. Despite its powerful capabilities, the model's system requirements are surprisingly modest, requiring only a modern consumer GPU, a Windows 10 or 11 or Linux operating system, 16GB RAM, and an Nvidia GeForce RTX 20 graphics card (or equivalent) with a minimum of 8GB of VRAM. This makes SDXL 0.9 a highly accessible and versatile tool for a wide range of users and applications.

It opens up a new realm of creative possibilities for generative AI imagery, with potential applications spanning films, television, music, and instructional videos, as well as design and industrial uses. The model also offers functionalities that go beyond basic text prompting, including image-to-image prompting, inpainting, and outpainting.

Stable Diffusion XL 0.9 use cases

  1. Character Creation (Visual Effects): Stable Diffusion can be used to create characters based on detailed prompts. This can be especially useful in visual effects.

  2. E-commerce: Stable Diffusion can be used to create different angles and backgrounds for a product, eliminating the need for extensive photoshoots.

  3. Image Editing: Stable Diffusion can be used for image editing, including in-painting, where you can change specific elements of an image. For example, changing the color of flowers in an image.

  4. Fashion: Stable Diffusion can be used to change the clothes that someone is wearing in a photo, creating a natural-looking transformation.

  5. Gaming: Stable Diffusion can be used for asset creation in gaming, potentially saving weeks or months of work for artists.

It's important to note that these are just a few of the many potential use cases for Stable Diffusion. The technology is highly versatile and can be applied in a wide range of industries.

Stable Diffusion XL 0.9 License

It is important to note that SDXL 0.9 is currently released under a non-commercial, research-only license, which means its use is intended exclusively for research purposes. Commercial usage of the model is strictly prohibited under its terms of use. This restriction ensures the model is used responsibly and ethically, and allows for a period of rigorous testing and refinement before it is potentially opened up for broader applications. Users are strongly encouraged to familiarize themselves with the terms of use before accessing and using the model.



Take creative control today and thrive.

Start building with a free account or consult an expert for your Pro or Enterprise needs. Segmind's tools empower you to transform your creative visions into reality.
