If you're looking for an API, you can choose a request example in your preferred programming language. The Python example below shows a typical call.
import requests
import base64

# Use this function to convert an image file from the filesystem to base64
def image_file_to_base64(image_path):
    with open(image_path, 'rb') as f:
        image_data = f.read()
    return base64.b64encode(image_data).decode('utf-8')

# Use this function to fetch an image from a URL and convert it to base64
def image_url_to_base64(image_url):
    response = requests.get(image_url)
    image_data = response.content
    return base64.b64encode(image_data).decode('utf-8')

api_key = "YOUR_API_KEY"
url = "https://api.segmind.com/v1/sd3-med-img2img"

# Request payload
data = {
    "prompt": "photo of a boy holding phone on table,3d pixar style",
    "negative_prompt": "low quality,less details",
    "image": image_url_to_base64("https://segmind-sd-models.s3.amazonaws.com/display_images/sd3-img2img-ip.jpg"),  # Or use image_file_to_base64("IMAGE_PATH")
    "num_inference_steps": 20,
    "guidance_scale": 5,
    "seed": 698845,
    "samples": 1,
    "strength": 0.7,
    "sampler": "dpmpp_2m",
    "scheduler": "sgm_uniform",
    "base64": False
}

headers = {'x-api-key': api_key}

response = requests.post(url, json=data, headers=headers)
print(response.content)  # The response is the generated image
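Because "base64" is set to False in the payload above, the endpoint returns the raw image bytes in the response body. A minimal sketch for saving the result to disk, assuming a successful call (the output filename is an arbitrary choice for this example):

if response.status_code == 200:
    # The body is the image itself when "base64" is False
    with open("sd3-img2img-output.jpg", "wb") as f:  # arbitrary filename
        f.write(response.content)
else:
    # Non-200 responses carry an error message instead of image data
    print(response.status_code, response.text)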
prompt: Text prompt for image generation.
negative_prompt: Negative text prompt to avoid certain qualities.
image: Input image.
num_inference_steps: Number of inference steps for image generation (min: 1, max: 100).
guidance_scale: Guidance scale for image generation (min: 1, max: 20).
seed: Seed for random number generation.
samples: Number of samples to generate.
strength: Strength of the image transformation (min: 0, max: 1).
sampler: Sampler for the image generation process. Allowed values:
scheduler: Scheduler for the image generation process. Allowed values:
base64: Base64 encoding of the output image.
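Lower strength values keep the output close to the input image, while higher values give the prompt more influence. A minimal sketch that sweeps strength, reusing the data, url, and headers objects from the example above (the chosen values and output filenames are illustrative, not part of the API):

# Illustrative sweep over the strength parameter; values and filenames are arbitrary
for strength in (0.3, 0.5, 0.7, 0.9):
    data["strength"] = strength
    resp = requests.post(url, json=data, headers=headers)
    if resp.status_code == 200:
        with open(f"output_strength_{strength}.jpg", "wb") as f:
            f.write(resp.content)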
To keep track of your credit usage, you can inspect the response headers of each API call. The x-remaining-credits property will indicate the number of remaining credits in your account. Ensure you monitor this value to avoid any disruptions in your API usage.
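A minimal sketch of reading that header from the response in the example above (requests matches header names case-insensitively):

# Read the remaining credit balance from the response headers
remaining = response.headers.get("x-remaining-credits")
if remaining is not None:
    print(f"Credits remaining: {remaining}")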
Unlock the full potential of generative AI with Segmind. Create stunning visuals and innovative designs with total creative control. Take advantage of powerful development tools to automate processes and models, elevating your creative workflow.
Gain greater control by dividing the creative process into distinct steps, refining each phase.
Customize at various stages, from initial generation to final adjustments, ensuring tailored creative outputs.
Integrate and utilize multiple models simultaneously, producing complex and polished creative results.
Deploy Pixelflows as APIs quickly, without server setup, ensuring scalability and efficiency.
Stable Diffusion 3 Medium is a cutting-edge AI tool that uses advanced image-to-image technology to transform one image into another. Powered by 2 billion parameters, it generates high-quality, realistic images from an initial image and a text prompt.
Capabilities: High-quality image transformations with efficient resource management, allowing operation on consumer-grade GPUs. It also provides adjustable transformation strengths to fine-tune outputs.
Creators: The model was developed by Stability AI.
Training Data Info: The details of the training data remain undisclosed, though the model was trained on large and diverse image datasets.
Technical Architecture: The core architecture is based on a Diffusion Transformer, allowing complex image transformations.
Strengths: Exceptional image transformation quality, with broad creative possibilities. It's also optimized for efficient performance.
Step-by-Step Guide:
1. Input Image: Click on the upload area and upload an image in PNG, JPG, or GIF format, with a maximum resolution of 2048x2048 pixels.
2. Set the Prompt: Enter a descriptive text prompt to guide the image transformation.
3. Seed: Optionally, set a seed value, or check the "Randomize Seed" box for a unique output each time.
4. Strength: Adjust the 'Strength' parameter to control how closely the generated image should follow the input image.
5. Negative Prompt: Enter text in the "Negative Prompt" field to specify qualities to avoid.
6. Set Advanced Parameters: Control the number of refinement steps with 'Inference Steps'. 'Guidance Scale' controls how strongly the output adheres to the prompt. Choose the method for the diffusion process with 'Sampler', and select its scheduling algorithm with 'Scheduler'.
7. Generate: Click the "Generate" button to start the image generation process. The output image will appear once generation is complete.
Fooocus enables high-quality image generation effortlessly, combining the best of Stable Diffusion and Midjourney.
Take a picture/GIF and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.
The SDXL model is the official upgrade to the v1.5 model. The model is released as open-source software.