If you're looking for an API, you can use the example below and adapt it to your preferred programming language.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sdxl-inpaint"

# Request payload
data = {
    "image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/outputs/sdxl_inpaint.jpeg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "mask": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/outputs/sdxl_inpaint_mask.png"),  # Or use image_file_to_base64("IMAGE_PATH")
    "prompt": "A man with black sun glasses",
    "negative_prompt": "bad quality, painting, blur",
    "samples": 1,
    "scheduler": "DDIM",
    "num_inference_steps": 25,
    "guidance_scale": 7.5,
    "seed": 12467,
    "strength": 0.9,
    "base64": False
}

headers = {'x-api-key': api_key}
response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
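Since the response body is the generated image itself when "base64" is False, the bytes can be written straight to disk. A minimal sketch (save_generated_image and the output filename are illustrative helpers, not part of the API):

```python
# Assumes a requests.post response whose body is raw image bytes,
# as returned by this endpoint when "base64" is False.
def save_generated_image(response, out_path="output.jpeg"):
    response.raise_for_status()  # raise on HTTP errors instead of saving an error body
    with open(out_path, "wb") as f:
        f.write(response.content)
    return out_path
```

Called as save_generated_image(response, "inpainted.jpeg") after the requests.post call above.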
image: Input image.
mask: Mask image.
prompt: Prompt to render.
negative_prompt: Prompts to exclude, e.g. 'bad anatomy, bad hands, missing fingers'.
samples: Number of samples to generate. min: 1, max: 4
scheduler: Type of scheduler. Allowed values:
num_inference_steps: Number of denoising steps. min: 20, max: 100
guidance_scale: Scale for classifier-free guidance. min: 1, max: 25
seed: Seed for image generation. min: -1, max: 999999999999999
strength: Strength of the denoising applied to the masked region. min: 0, max: 0.99
base64: Base64 encoding of the output image.
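The numeric limits above can be checked client-side before a request is sent, avoiding a wasted call on an invalid payload. A minimal sketch (the RANGES table simply mirrors the min/max values documented here; validate_payload is an illustrative helper, not part of any official SDK):

```python
# (min, max) per numeric field, copied from the parameter limits above.
RANGES = {
    "samples": (1, 4),
    "num_inference_steps": (20, 100),
    "guidance_scale": (1, 25),
    "seed": (-1, 999999999999999),
    "strength": (0, 0.99),
}

def validate_payload(data):
    """Return a list of human-readable range violations; empty means OK."""
    errors = []
    for key, (lo, hi) in RANGES.items():
        if key in data and not (lo <= data[key] <= hi):
            errors.append(f"{key}={data[key]} outside [{lo}, {hi}]")
    return errors
```

For example, validate_payload({"samples": 5}) reports that samples is out of range, while the payload in the example above passes cleanly.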
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
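A small helper for reading that header after each call (the header name is as documented above; remaining_credits and the None fallback for a missing header are assumptions of this sketch):

```python
def remaining_credits(response):
    """Read the x-remaining-credits header from an API response.

    Returns the credit count as an int, or None if the header is absent.
    """
    value = response.headers.get("x-remaining-credits")
    return int(value) if value is not None else None
```

With the requests example above, remaining_credits(response) can be logged after every call to spot a low balance early.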
Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.
Gain greater control by dividing the creative process into distinct steps, refining each phase.
Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.
Integrate and utilize multiple models simultaneously, producing complex and polished creative results.
Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.
Stable Diffusion XL Inpainting is a state-of-the-art model that represents the pinnacle of image inpainting technology. This model is a specialized variant of the renowned Stable Diffusion series, designed to seamlessly fill in and reconstruct parts of images with astonishing accuracy and detail. It's a transformative tool for artists, designers, and photo editors who require the highest fidelity in image restoration and manipulation.
SDXL Inpainting operates on a sophisticated neural network architecture that excels in understanding context and texture to perform inpainting tasks. It leverages a deep understanding of image composition to predict and regenerate missing or damaged portions of images, making them whole with a level of detail that rivals the original. The model's nuanced approach ensures that the inpainted areas blend indistinguishably with the surrounding pixels, maintaining the integrity of the artwork or photograph.
High-Fidelity Inpainting: Delivers exceptional quality inpainting, capable of handling complex textures and patterns.
Context-Aware Regeneration: Intuitively understands the surrounding image context to provide coherent and seamless inpainting results.
Advanced Neural Network: Built on the robust Stable Diffusion XL framework, ensuring reliability and performance.
Versatile Application: Ideal for a wide range of use cases, from restoring historical photographs to creating new art pieces.
Art Restoration: Enables artists and restorers to repair damaged artwork with results that respect the original creator's vision.
Photo Editing: Provides photographers with a powerful tool to remove unwanted elements or repair imperfections in images.
Creative Design: Assists designers in creating cohesive visual content, even when working with incomplete elements.
Research and Archiving: Offers a solution for archivists to restore aged or deteriorating photographic documents.
Entertainment Industry: Can be used in film and game development to refine visual assets or generate new content from existing materials.
SDXL Img2Img is used for text-guided image-to-image translation. This model uses the weights from Stable Diffusion to generate new images from an input image using the StableDiffusionImg2ImgPipeline from diffusers.
Monster Labs QrCode ControlNet on top of SD Realistic Vision v5.1
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face: no dataset, no training.
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.