If you're looking for an API, you can call the model from the programming language of your choice; the Python example below shows a complete request.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sdxl-txt2img"

# Request payload
data = {
    "prompt": "a beautiful stack of rocks sitting on top of a beach, a picture, red black white golden colors, chakras, packshot, stock photo",
    "negative_prompt": "asian, african, makeup, fake, cgi, 3d, doll, art, anime, cartoon, illustration, painting, digital, blur, sepia, b&w, unnatural, unreal, photoshop, filter, airbrush, poor anatomy, lr",
    "samples": 1,
    "scheduler": "dpmpp_sde_ancestral",
    "num_inference_steps": 25,
    "guidance_scale": 8,
    "strength": 1,
    "seed": 2784983004,
    "img_width": 768,
    "img_height": 768,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
prompt: Prompt to render.
negative_prompt: Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.
samples: Number of samples to generate. min: 1, max: 10.
scheduler: Type of scheduler; must be one of the allowed values (dpmpp_sde_ancestral, used in the example above, is one).
num_inference_steps: Number of denoising steps. min: 10, max: 40.
guidance_scale: Scale for classifier-free guidance. min: 1, max: 15.
strength: How much to transform the reference image.
seed: Seed for image generation.
img_width: Image width in pixels. min: 512, max: 1024.
img_height: Image height in pixels. min: 512, max: 1024.
base64: Whether to return the output image as a base64 string.
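When "base64" is set to False, the endpoint returns raw image bytes, so printing response.content as in the example dumps binary data. Here is a minimal sketch of saving the result to disk instead, reusing the response variable from the example above (the .jpeg extension is an assumption about the returned format):

# Save the generated image; assumes the request above succeeded and the body
# is raw image bytes (the .jpeg extension is an assumption)
if response.status_code == 200:
    with open("output.jpeg", "wb") as f:
        f.write(response.content)
else:
    print(response.status_code, response.text)  # surface the error details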
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits header indicates the number of credits remaining in your account. Monitor this value to avoid disruptions in your API usage.
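In practice, this header can be read straight from the requests response object. A small sketch, reusing the response from the example above:

# Read the remaining-credit balance from the response headers
# (the "unknown" fallback is an assumption for when the header is absent)
remaining = response.headers.get("x-remaining-credits", "unknown")
print(f"Credits remaining: {remaining}")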
Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.
Gain greater control by dividing the creative process into distinct steps, refining each phase.
Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.
Integrate and utilize multiple models simultaneously, producing complex and polished creative results.
Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.
Stability AI has recently unveiled SDXL 0.9, marking a significant milestone in the evolution of the Stable Diffusion text-to-image suite of models. This latest iteration builds upon the success of the Stable Diffusion XL beta launched in April 2023, delivering a substantial enhancement in image and composition detail. The SDXL 0.9 model, despite its advanced capabilities, can be operated on a standard consumer GPU, making it a highly accessible tool for a wide range of users.
Delving into the technical aspects, the key factor behind the improved composition in SDXL 0.9 is a considerable increase in parameter count over the beta version. The model boasts one of the largest parameter counts of any open-source image model, with a 3.5B-parameter base model and a 6.6B-parameter model ensemble pipeline. It operates on two CLIP models, including one of the largest OpenCLIP models trained to date (OpenCLIP ViT-G/14), which boosts its processing power and enables realistic imagery with greater depth at a higher resolution of 1024x1024. Despite these capabilities, the system requirements are surprisingly modest: Windows 10 or 11 or Linux, 16GB of RAM, and a modern consumer GPU such as an Nvidia GeForce RTX 20-series card (or equivalent) with at least 8GB of VRAM. This makes SDXL 0.9 a highly accessible and versatile tool for a wide range of users and applications.
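For readers who would rather experiment locally than through the API, here is a minimal sketch of loading the base model with Hugging Face's diffusers library; it assumes access to the gated stabilityai/stable-diffusion-xl-base-0.9 weights and a CUDA-capable GPU:

import torch
from diffusers import StableDiffusionXLPipeline

# Load the 3.5B-parameter base model in half precision so it fits in ~8GB of VRAM
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",  # gated repo: requires accepting the research license
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Generate at 1024x1024, the model's native resolution
image = pipe(
    prompt="a beautiful stack of rocks sitting on top of a beach, packshot, stock photo",
    num_inference_steps=25,
    guidance_scale=8.0,
).images[0]
image.save("sdxl_output.png")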
It opens up a new realm of creative possibilities for generative AI imagery, with potential applications spanning films, television, music, and instructional videos, as well as design and industrial uses. The model also offers functionalities that go beyond basic text prompting, including image-to-image prompting, inpainting, and outpainting.
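The base64 helper functions in the first example exist precisely for these image-conditioned modes. As an illustrative sketch of an image-to-image request (the endpoint slug and the "image" field name below are assumptions, not taken from this page; check the API reference for the exact contract):

# Hypothetical image-to-image request: the endpoint slug and the "image"
# field name are assumptions, shown only to illustrate the pattern
img2img_url = "https://api.segmind.com/v1/sdxl1.0-img2img"  # assumed endpoint
img2img_data = {
    "image": image_file_to_base64("reference.jpg"),  # reference image as base64
    "prompt": "a beautiful stack of rocks sitting on top of a beach",
    "strength": 0.75,  # lower values stay closer to the reference image
    "base64": False,
}
img2img_response = requests.post(img2img_url, json=img2img_data, headers=headers)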
Character Creation: Stable Diffusion can be used to create characters from detailed prompts, which is especially useful in visual effects work.
E-commerce: Stable Diffusion can be used to create different angles and backgrounds for a product, eliminating the need for extensive photoshoots.
Image Editing: Stable Diffusion can be used for image editing, including in-painting, where you can change specific elements of an image. For example, changing the color of flowers in an image.
Fashion: Stable Diffusion can be used to change the clothes that someone is wearing in a photo, creating a natural-looking transformation.
Gaming: Stable Diffusion can be used for asset creation in gaming, potentially saving weeks or months of work for artists.
It's important to note that these are just a few of the many potential use cases for Stable Diffusion. The technology is highly versatile and can be applied in a wide range of industries.
Note that SDXL 0.9 is currently released under a non-commercial, research-only license, meaning its use is intended exclusively for research; commercial usage is strictly prohibited under the terms of use. This restriction is in place to ensure the model is used responsibly and ethically, and to allow a period of rigorous testing and refinement before it is potentially opened up for broader applications. Users are strongly encouraged to familiarize themselves with the terms of use before accessing and using the model.
SDXL ControlNet gives unprecedented control over text-to-image generation. SDXL ControlNet models introduce the concept of conditioning inputs, which provide additional information, such as edge or depth maps, to guide the image generation process.
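As a purely illustrative sketch of the conditioning-input idea (the endpoint slug and field names here are hypothetical, not taken from this page), a request might pass a conditioning image, such as an edge map, alongside the text prompt:

# Hypothetical ControlNet-style request: the conditioning image (e.g. an edge
# map) steers composition while the prompt steers content. The endpoint slug
# and field names are assumptions for illustration only.
controlnet_url = "https://api.segmind.com/v1/sdxl-controlnet"  # hypothetical
controlnet_data = {
    "image": image_url_to_base64("https://example.com/edge_map.png"),  # conditioning input
    "prompt": "a beautiful stack of rocks sitting on top of a beach",
}
controlnet_response = requests.post(controlnet_url, json=controlnet_data, headers=headers)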
Best-in-class clothing virtual try-on in the wild.
Take a picture or GIF and replace the face in it with a face of your choice. You only need one image of the desired face; no dataset and no training required.
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.