API
You can call this model through its REST API from the programming language of your choice; the example below uses Python.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to convert a list of image URLs to base64
def image_urls_to_base64(image_urls):
    return [image_url_to_base64(url) for url in image_urls]

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/stable-diffusion-3-turbo-img2img"

# Request payload
data = {
    "mode": "image-to-image",
    "image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/sd3-turbo-i2i-input.jpg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "prompt": "cyberpunk style frog, dark colors",
    "strength": 1,
    "output_format": "jpeg",
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
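If the call succeeds and base64 is false, the response body holds the raw image bytes and can be written straight to disk. The snippet below is a minimal sketch that continues the example above; the output filename is an arbitrary choice.

# Continuing the example above: save the generated image to a file.
# Assumes base64 is False, so response.content is the raw JPEG bytes.
if response.status_code == 200:
    with open("sd3_turbo_output.jpeg", "wb") as f:
        f.write(response.content)
else:
    print("Request failed:", response.status_code, response.text)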
Attributes
mode: Type of mode. Allowed values: image-to-image
image: Input image, passed as a base64-encoded string.
prompt: Prompt to render.
strength: How much to transform the reference image.
output_format: Output format of the generated image. The example above uses jpeg.
base64: Base64 encoding of the output image. When false, the raw image bytes are returned in the response body.
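When base64 is set to true, the output image arrives base64-encoded instead of as raw bytes. The exact shape of that response is not shown here, so the sketch below covers only the decoding step, assuming you have already pulled the base64 string out of the response.

import base64

def save_base64_image(b64_string, out_path="decoded.jpeg"):
    # Decode a base64-encoded image string and write the raw bytes to disk.
    # The out_path default is an arbitrary placeholder.
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(b64_string))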
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
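As a quick sketch, the header can be read from the same response object used in the example above (requests treats header names case-insensitively):

# Check the remaining-credit balance reported by the API.
remaining = response.headers.get("x-remaining-credits")
if remaining is not None:
    print("Remaining credits:", remaining)
else:
    print("x-remaining-credits header not present in this response")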
Stable Diffusion 3 Turbo Image to Image
SD3 Turbo Image to Image is a distilled variant of Stable Diffusion 3 designed for efficient, high-quality image generation. By focusing on a smaller, optimized model, SD3 Turbo reduces computational overhead while retaining the core functionality of SD3. Here are key features:
- Few-Step Inference: The image generation process is condensed, leading to faster inference times compared to SD3.
- Targeted edits: You can provide an existing image and use text prompts to specify the desired changes. This allows for edits like adding or modifying colors, or applying different artistic styles; a short sketch after this list shows how the strength parameter controls how far the result departs from the input image.
- Versatility: It can be used for various image editing tasks, from simple tweaks to more creative manipulations.
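One way to see how strength trades off fidelity to the input image against the prompt is to run the same request at several values. The sketch below reuses the helpers, endpoint, and headers from the example above; the specific strength values and filenames are illustrative assumptions, not documented presets.

# Sweep the strength parameter to compare how strongly the input image is
# transformed. The values below are illustrative, not documented presets.
input_b64 = image_url_to_base64(
    "https://segmind-sd-models.s3.amazonaws.com/display_images/sd3-turbo-i2i-input.jpg"
)
for strength in (0.3, 0.6, 1.0):
    payload = {
        "mode": "image-to-image",
        "image": input_b64,
        "prompt": "cyberpunk style frog, dark colors",
        "strength": strength,
        "output_format": "jpeg",
        "base64": False,
    }
    resp = requests.post(url, json=payload, headers=headers)
    if resp.status_code == 200:
        with open(f"sd3_turbo_strength_{strength}.jpeg", "wb") as f:
            f.write(resp.content)
    else:
        print("Request failed for strength", strength, ":", resp.status_code)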
Other Popular Models
fooocus
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.

sdxl1.0-txt2img
The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.

codeformer
CodeFormer is a robust face restoration algorithm for old photos or AI-generated faces.

sd2.1-faceswapper
Take a picture/gif and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training required.
